# Octopus Deploy Documentation - Full Export

> All eligible documentation pages as plain markdown, ordered by navigation section then page order.

# Administration

Source: https://octopus.com/docs/administration.md

> Octopus administration tasks and command reference

This section provides information that is important for managing your Octopus Server.

# Scoping Annotations

Source: https://octopus.com/docs/argo-cd/annotations.md

For an Octopus deployment to update the desired Argo CD Application Source, the relationship between an Argo CD Application Source and a Project, Environment and/or a Tenant must be defined. By setting up these relationships, you answer the question:

> When I deploy `Project-X` to the `Staging` environment - which Argo CD Application Source(s) should be updated?

This is done by adding "Scoping" annotations to the Argo CD Application definition, either through the Argo CD Web UI, or directly in the Argo CD Application resource manifest (YAML).

The three scoping annotations are (where `<source_name>` is the name of the source to be updated):

| Annotation | Required | Value description |
| ---------------------------------------------- | -------- | --------------------------------------------- |
| `argo.octopus.com/project[.<source_name>]` | true | This is the *slug* of the Octopus Project |
| `argo.octopus.com/environment[.<source_name>]` | true | This is the *slug* of the Octopus Environment |
| `argo.octopus.com/tenant[.<source_name>]` | false | This is the *slug* of the Octopus Tenant |

## Single source

If the Argo CD Application contains a single source, the `name` property is optional. If the source is not named, the annotations must be unscoped.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  annotations:
    argo.octopus.com/environment: development
    argo.octopus.com/project: argo-cd-guestbook
spec:
  source:
    repoURL: https://github.com/example-org/guestbook.git
    targetRevision: HEAD
    path: ./
```

If the source is named, then the annotations must also be source-scoped.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  annotations:
    argo.octopus.com/environment.guestbook-source: development
    argo.octopus.com/project.guestbook-source: argo-cd-guestbook
spec:
  source:
    repoURL: https://github.com/example-org/guestbook.git
    targetRevision: HEAD
    path: ./
    name: guestbook-source
```

## Multiple sources

If there are multiple sources, the sources being updated must be named and the annotations must also be source-scoped.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  annotations:
    argo.octopus.com/environment.guestbook-service-1: development
    argo.octopus.com/project.guestbook-service-1: argo-cd-guestbook-service-1
    argo.octopus.com/environment.guestbook-service-2: development
    argo.octopus.com/project.guestbook-service-2: argo-cd-guestbook-service-2
spec:
  sources:
    - repoURL: https://github.com/example-org/guestbook-service-1.git
      targetRevision: HEAD
      path: ./
      name: guestbook-service-1
    - repoURL: https://github.com/example-org/guestbook-service-2.git
      targetRevision: HEAD
      path: ./
      name: guestbook-service-2
```

## Updating in Argo CD Web UI

You can update the annotations for an Argo CD Application via the Argo CD Web UI.

1. Navigate to the Web UI
2. Navigate to the application page of the target application
3. Click the **Details** button; the details drawer should slide out.
4. On the **Summary** tab in the drawer, click the **Edit** button in the top section
5. You can add new annotations by pressing the **+** button in the Annotations section
6.
Click **Save**

:::figure
![Argo CD Application Edit](/docs/img/argo-cd/argo-cd-app-annotation-edit.png)
:::

## Updating the Argo CD Application resource manifest

If you are managing your Argo CD Application manifests in YAML files, you can add the annotations directly into the `metadata.annotations` node. Example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  annotations:
    argo.octopus.com/environment: development
    argo.octopus.com/project: argo-cd-guestbook
spec:
  ...
```

### Generating the YAML annotations

To help generate the correct annotations, the Octopus UI provides a form that lets you select projects, environments, and/or tenants, and the correct scoping annotations will be generated for you. To find this form:

1. Navigate to **Infrastructure ➜ Argo CD Instances**, then click the name of the relevant Argo CD instance
2. On the Argo CD instance Settings page, click the **Generate Scoping Annotations** button
3. In the drawer, you can select a **Project**, **Environment** and optionally a **Tenant**. The annotation YAML will be generated and can be copied directly into the manifest.

:::figure
![Generate Scoping Annotations drawer](/docs/img/argo-cd/generate-scoping-annotations-drawer.png)
:::

# Applied Manifests

Source: https://octopus.com/docs/kubernetes/deployment-verification/applied-manifests.md

As part of your deployment, Octopus also captures the Kubernetes manifests that were applied to the cluster. This lets you validate and verify the exact manifests that were applied, making it easier to debug any issues.

Octopus will show a list of all the applied manifests on a deployment screen — the `Applied Manifests` view on the `KUBERNETES` tab.

:::figure
![A screenshot of the Kubernetes Applied Manifests tab](/docs/img/deployments/kubernetes/deployment-verification/applied-manifest-page.png)
:::

## Where it is available

Applied manifests are available for these steps:
* Deploy Kubernetes YAML
* Deploy a Helm Chart
* Deploy with Kustomize
* Configure and apply Kubernetes resources (except for the Blue/Green deployment strategy)
* Configure and apply a Kubernetes ConfigMap
* Configure and apply a Kubernetes Secret
* Configure and apply a Kubernetes Ingress
* Configure and apply a Kubernetes Service

## How it works

When Octopus performs a Kubernetes deployment step, the individual manifests are captured, encrypted, and stored on the filesystem. If the input manifest contained multiple resources in a single file, these are broken out into individual manifests for viewing. If the deployment step is a Helm deployment, Octopus retrieves the templated manifests generated by Helm and displays the result.

## How to use

For each step that performs a Kubernetes deployment, there is a navigation tree with all the manifests listed under their deployment target and in their deployed namespace. Non-namespaced resources are listed under the `Cluster-scoped` node.

:::figure
![A screenshot of the Kubernetes Applied Manifests navigation tree](/docs/img/deployments/kubernetes/deployment-verification/navigation-tree.png)
:::

If Step Verification is enabled, an icon indicating the health of the resource is shown on the resource. See [here](/docs/kubernetes/deployment-verification) for more information.

On the right-hand side, the individual manifests are displayed in collapsible sections.

:::figure
![A screenshot of the Kubernetes Applied Manifests manifests list](/docs/img/deployments/kubernetes/deployment-verification/manifests.png)
:::

## Kubernetes Secret resources and Octopus sensitive variables

To protect your Octopus sensitive values, Octopus obfuscates these values when they have been substituted into a displayed manifest. For Kubernetes Secrets, we obfuscate _all_ values regardless of whether they came from Octopus sensitive variables or other sources.
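The manifest-splitting behavior described under "How it works" above can be sketched with standard shell tools. This is a minimal illustration of breaking a multi-document YAML file apart on the `---` separator, not Octopus's actual implementation; the file paths are placeholders.

```bash
# Create a hypothetical multi-document input file
cat > /tmp/multi.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-one
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-one
EOF

# Split on the document separator into /tmp/manifest-00 and /tmp/manifest-01,
# one Kubernetes resource per file
csplit -s -f /tmp/manifest- /tmp/multi.yaml '/^---$/'
```

After the split, each output file contains a single resource that can be inspected on its own.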
:::figure
![A screenshot of an obfuscated Kubernetes Secret manifest](/docs/img/deployments/kubernetes/deployment-verification/secret.png)
:::

:::div{.warning}
When using variable substitution to add Octopus sensitive variables to your manifests, it is highly recommended that you store these values in Kubernetes `Secret` resources. You can base64-encode the sensitive variable correctly using the following syntax: `#{ MySensitiveValue | ToBase64 }`
:::

# Approvals

Source: https://octopus.com/docs/approvals.md

> Defining your change approval process

Managing deployment pipelines at scale is complex and time-consuming for DevOps teams, and it's more complicated when you add in change management. Manually filling out change requests takes time and is prone to error. It's also common to have strict change processes that require thorough reviews to get approval to ship new releases of applications. Change advisory boards can be perceived as roadblocks that slow development teams. When CI/CD systems create change requests automatically, you can work towards best practices with less friction.

We want to make change management easier by helping you integrate Octopus with IT service management (ITSM) systems to reduce friction and simplify your development teams' lives. Octopus Deploy includes ITSM integrations for ServiceNow and Jira Service Management that let you balance audit and compliance requirements with team productivity.

:::div{.hint}
Octopus Approvals is a built-in approval system that works without an external ITSM tool. This feature is currently in Alpha, available to a small set of customers. If you are interested in this feature, please register your interest on the [roadmap card](https://roadmap.octopus.com/c/243-approvals-for-deployments) and we'll keep you updated.
:::

Our support focuses on:

1.
**Productive teams** - Automatically create change requests and associate them with Octopus deployments or runbook runs so you can work with the right stakeholders to ensure your changes are compliant and approved. Octopus can also prevent deployments and runbook runs from executing until all approvals are complete.
2. **Compliant DevOps** - Be sure that no one is deploying unapproved changes to production. Your audits become a smooth process because you can demonstrate with system reports that your company's processes are being adhered to.

## Built-in change management with Octopus Approvals

Octopus Approvals is a built-in change approval system that lets you gate deployments and runbook runs on approvals from designated users or teams - no external ITSM tool required. When a controlled deployment is created, Octopus automatically creates a change request (formatted as `OCT-{number}`) and pauses execution. Once the minimum number of approvals is reached, Octopus allows the task to proceed. If any approver rejects the request, Octopus terminates the task immediately.

What's included in Octopus Approvals?

- Octopus manages the full approval workflow with no external dependencies.
- Approval policies define which users or teams can approve and how many approvals Octopus requires before proceeding.
- Octopus creates change requests automatically at deployment time.
- Octopus supports change windows — the task waits until an approved time period before Octopus allows execution.
- Octopus records an audit trail of approvals and rejections in the task log.

:::div{.hint}
Octopus Approvals is currently in Alpha, available to a small set of customers. If you are interested in this feature, please register your interest on the [roadmap card](https://roadmap.octopus.com/c/243-approvals-for-deployments) and we'll keep you updated.
:::

Learn more about [Octopus Approvals](/docs/approvals/octopus-approvals).
## ServiceNow change management without friction

:::figure
![ServiceNow deployment waiting for approval](/docs/img/approvals/servicenow-task-status-with-cr.png)
:::

This new integration links Octopus deployments and runbook runs to ServiceNow change requests and automatically creates pre-populated, normal change requests. You get improved traceability out-of-the-box, and you can prove to auditors that every controlled deployment and runbook has a change request. This ensures your CI/CD and release management processes are compliant with company policies and regulations.

What's included in our ServiceNow support?

- Easy workflow configuration, so it's straightforward to integrate Octopus with ServiceNow.
- Link a deployment or runbook run to an existing change request to manually associate deployments with change requests.
- Automatically create normal and emergency change requests at execution time. Octopus pauses the execution until the appropriate approvals are complete.
- Let Octopus do the work for you by automating the transition between stages in the change request once created, leaving a deployment or runbook run record in ServiceNow.
- Use change templates to auto-create standard change requests to reduce manual work and control what information is populated.
- Ensure "Change Windows" are honored on existing change requests so deployments or runbook runs won't execute until the specified time window.
- Add work notes to change requests with information about deployment or runbook run start and finish time and whether it was successful or not.
- Create change requests with pre-populated fields through variables.
- View and export audit logs of controlled deployments and runbook runs for easy compliance and post-execution reconciliation.

Learn more about our [ServiceNow integration](/docs/approvals/servicenow).

:::div{.hint}
ServiceNow integration is available to customers with an [enterprise subscription](https://octopus.com/pricing).
:::

## Efficient change management approvals with Jira Service Management

:::figure
![Jira Service Management approvals configuration](/docs/img/approvals/jira-task-settings.png)
:::

To build on our ITSM change management support further, we are also pleased to announce our Jira Service Management integration. The Jira Service Management integration ensures that teams using this platform can access the benefits of creating change requests automatically in Octopus. It makes it easier to manage deployment pipelines at scale, reducing the complexity of change management. Integrating Octopus with Jira Service Management reduces the need to fill out change requests manually, making the process faster and less prone to error. By using Octopus to create change requests automatically, you can adopt change management best practices easily.

This new integration links Octopus deployments and runbook runs to Jira Service Management change requests and automatically creates pre-populated "Request for change" change requests. You get improved traceability out-of-the-box, and you can prove to auditors that every controlled deployment and runbook has a change request. This ensures your CI/CD and release management processes are compliant with company policies and regulations.

What's included in our Jira Service Management support?

- Easy workflow configuration, so it's straightforward to integrate Octopus with Jira Service Management
- Link a deployment or runbook run to an existing change request, to manually associate deployments and runbook runs with change requests
- Automatically create "Request for change" requests at execution time. Octopus pauses the execution until the appropriate approvals are complete
- View and export audit logs of controlled deployments and runbook runs for easy compliance and post-execution reconciliation

If your team uses Jira Service Management change management, we'd love for you to try it and provide your feedback.
Register for the [Jira Service Management EAP](https://octopusdeploy.typeform.com/jsm-eap).

# Argo CD deployments with Octopus

Source: https://octopus.com/docs/argo-cd.md

Octopus makes it easy to improve your Argo CD deployments with environment modeling and deployment orchestration. Automate everything from environment promotion and compliance to tests and change management.

:::figure
![Argo in Octopus Overview](/docs/img/argo-cd/argo-cd-overview.png)
:::

Argo CD excels at synchronizing manifests to clusters and provides a powerful UI to verify and troubleshoot deployments. However, it treats each of your applications as independent entities, meaning there's no *codified* relationship between staging and production installations of your applications. Because of this, you need to manage this staging/production relationship and promotion between them through external mechanisms, e.g.:

- Manual file manipulations
- Custom scripts, run automatically or via Jenkins/CI tooling

The Octopus/Argo integration means your Argo Applications can be updated and deployed via an Octopus Deployment Process (or runbook), which in turn means your Applications can be safely promoted through a controlled lifecycle.

Octopus makes integrating and deploying with Argo CD simple:

1. Creating a connection to Argo CD instances and cross-mapping Argo CD Applications to Octopus Projects
2. Deployment steps which can update the Git repositories backing the mapped Argo CD Applications
3. Dashboards and live status displays, showing the result of deployment, and the status of the deployed applications and resources

This section expands on each of these areas, while also providing useful resources and tutorials to get you up and running with Argo CD in Octopus faster.

# Troubleshooting Argo CD in Octopus

Source: https://octopus.com/docs/argo-cd/troubleshooting.md

Minor issues in your configuration can prevent Octopus from integrating effectively with Argo CD.
The most common issues encountered while setting up Argo CD integration are listed below, along with the steps to follow to reach a resolution.

## Gateway Installation

### Argo CD Gateway install dialog stuck in progressing

Behavior:

- Helm install dialog stuck in progressing (waiting for the gateway to establish a connection)
- Helm command halted showing chart pulled for >= 5 minutes
- In a Kubernetes viewer (e.g. K9s), the gateway pod logs state "Failed to register ArgoCD Gateway with Octopus Server"

Cause:

- The gateway cannot reach Octopus Server at the URL specified in `registration.octopus.serverApiUrl` using the token specified in `registration.octopus.serverAccessToken`
- The URL may be incorrect, or not reachable from within your cluster
  - Note: local clusters require special hostnames to reach your host computer (e.g. host.minikube.internal)
- The token may be expired

Resolution:

- Confirm the server URL is set correctly, and is resolvable/reachable from inside your cluster
- Re-execute the installation process, ensuring it completes within the lifetime of the supplied bearer token

### Argo CD Gateway install fails initial health check

Behavior:

- The Install Argo CD Gateway dialog states:
  - "established a connection" was successful
  - Health check failed
- The Gateway pod is in a CrashLoopBackoff
- In a Kubernetes viewer (e.g.
K9s), the gateway pod logs state "*error validating connection to Argo CD*"
- In Octopus, the health check task log contains: "The Argo CD Gateway has not established a gRPC connection to Octopus Server"

Cause:

- The gateway is unable to create a connection to your Argo CD instance

Resolution:

- Confirm the URL specified for `gateway.argocd.serverGrpcUrl` matches the expected gRPC endpoint of your Argo CD instance (`<service-name>.<namespace>.svc.cluster.local`)
- If your Argo CD instance is using a self-signed certificate, ensure `gateway.argocd.insecure` is set to `true`
- If your Argo CD instance is running in "insecure" mode, ensure `gateway.argocd.plaintext` is set to `true` (`false` otherwise)
- In Octopus, delete the registered Argo CD Gateway, follow all required Helm deletion commands, and reinstall

## Application/Project mapping

### No applications are listed on the **Argo CD Instance ➜ Applications** page

Behavior:

- The Argo CD web UI shows existing applications, however they do not appear in the Octopus UI

Cause:

- The Argo CD token used by the gateway has insufficient permissions to access application resources in Argo

Resolution:

- Create the required RBAC entries for the account being used by the Octopus Gateway as per [Argo CD Authentication](/docs/argo-cd/instances/argo-user).
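As an illustration of such RBAC entries, here is a sketch of an `argocd-rbac-cm` policy that grants read access to Applications and Clusters. The role and account names (`role:octopus-gateway`, `octopus`) are hypothetical examples, not values the integration requires; adapt them to your own Argo CD account setup.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # Allow the gateway's role to read Application and Cluster resources
    p, role:octopus-gateway, applications, get, */*, allow
    p, role:octopus-gateway, clusters, get, *, allow
    # Bind the local Argo CD account "octopus" to the role
    g, octopus, role:octopus-gateway
```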
## Step Configuration

### Argo Applications in step shows "You don't have any Argo CD instance to preview yet"

Behavior:

- The "Argo Applications" section on the step indicates that no Argo CD instances exist

Cause:

- No Argo CD instances are registered in the current space

Resolution:

- Navigate to **Infrastructure ➜ Argo CD Instances** and confirm an instance is visible in this space
- If not, add a new Argo CD instance using the installation wizard

## Step Execution

### No Applications are updated during a deployment

Behavior:

- Deployment passes with warnings
- The Octopus deployment task log contains `No annotated Argo CD applications could be found for this deployment.`

Cause:

- No applications have been annotated for this project/environment/tenant deployment

Resolution:

- Confirm the annotations on the target Argo Application match the deployment's Project/Environment/Tenant and update as appropriate

### Deployment fails on an Argo CD step (no git credentials)

Behavior:

- The Octopus deployment task log contains "Could not find a Git Credential associated with "

Cause:

- One of the Argo CD Application Sources to be updated references a git repository for which Octopus has no Git Credential

Resolution:

- Add/update a Git Credential in Octopus, specifying a repository allowlist which includes the URL specified in the error message.

### Deployment fails on Argo CD step (source is not a git repository)

Behavior:

- The Octopus deployment task log contains `Failed to clone Git repository at `

Cause:

- The mapped Argo Application source is not a git repository (e.g. a Helm repository or OCI feed)
- The provided git credentials for the URL have insufficient privileges

Resolution:

- Octopus cannot update charts sourced from a Helm repository or OCI feed - contact support to determine a way forward.
- Ensure the associated git credential has appropriate permissions

### Deployment fails on Argo CD step (insufficient permissions)

Behavior:

- The Octopus deployment task log contains "http status code: 403"

Cause:

- The Octopus Git credential associated with the mapped Argo CD Application Source has insufficient privileges to read/write the git repository

Resolution:

- Create a new credential in your git provider, store it in an Octopus Git Credential, and ensure the "Allow List" includes your Application Source repository

## Argo CD live view not visible on dashboard

Behavior:

- The Live Status Panel is not visible on the project/space dashboard

Cause:

- Live Status is not enabled

Resolution:

- Enable Live Status via the "Live Status" toggle switch at the top of the dashboard.

# Overview

Source: https://octopus.com/docs/argo-cd/instances.md

An Argo CD instance is represented in Octopus as a separate Infrastructure component, distinct from Deployment Targets or Workers. Each Argo CD instance in Octopus represents a connection to a running Argo CD instance and is used as the anchor by which Argo CD applications are retrieved.

To connect Octopus Deploy to an Argo CD instance, a network gateway must first be installed in your Kubernetes cluster: the Octopus Argo CD Gateway. The gateway creates a TLS-encrypted, outgoing gRPC connection from the host Kubernetes cluster to Octopus Server, and routes data from Argo CD to your Octopus Server instance. This gateway means that no publicly accessible HTTP/gRPC URL is required for communication, providing added security. A gateway is required for each Argo CD instance being connected to Octopus.

:::div{.info}
The Argo/Octopus integration requires only the Octopus/Argo gateway to support all features. The [Kubernetes agent](/docs/kubernetes/targets/kubernetes-agent) and the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) are not required when monitoring Argo CD applications.
The Octopus Argo CD Gateway has the equivalent capabilities.
:::

## Installing the Octopus Argo CD Gateway

The gateway is installed using [Helm](https://helm.sh) via the [octopusdeploy/octopus-argocd-gateway-chart](https://hub.docker.com/r/octopusdeploy/octopus-argocd-gateway-chart) chart. To simplify this, there is an installation wizard in Octopus to generate the required values.

:::div{.warning}
Helm will use your current kubectl config, so make sure your kubectl config is pointing to the correct cluster before executing the following helm commands. You can see the current kubectl config by executing:

```bash
kubectl config view
```
:::

### Configuration

1. Navigate to **Infrastructure ➜ Argo CD Instances**, and click **Add Argo CD Instance**
2. This launches the Register Argo CD Instance dialog

:::figure
![Octopus-Argo-Gateway Wizard Config Page](/docs/img/argo-cd/gateway-wizard-config.png)
:::

1. Enter a unique name for the instance. This name is used to generate the Kubernetes namespace, as well as the Helm release name.

:::div{.warning}
The gateway's name must be unique within a cluster. Otherwise the existing gateway's settings will be overwritten, causing deployment failures.
:::

2. Select at least one [environment](/docs/infrastructure/environments) the instance will be responsible for servicing.
3. If required, change the [in-cluster](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services) URL of the Argo CD API Server service. In many cases the default value provided will work.
4. Optionally, add the URL used to access Argo CD's web frontend. This will be used for linking from Octopus to Argo CD to aid with deployment investigations.
5. A valid Argo CD JWT authentication token is required. To generate this, you can use the [Argo CD CLI](https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd_account_generate-token/).
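For example, assuming a local Argo CD account named `octopus` already exists (the account name and server URL below are illustrative placeholders), a token can be generated with the CLI:

```bash
# Log in to the Argo CD API server (placeholder hostname)
argocd login argocd.example.com

# Generate a JWT for the "octopus" account; paste the printed token
# into the registration wizard
argocd account generate-token --account octopus
```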
:::div{.warning}
The JWT token must belong to an Argo CD user with permission to read Application and Cluster resources.
:::

6. Press **Next** to move to the next screen

#### Installation helm command

At the end of the wizard, Octopus generates a Helm command that you copy and paste into a terminal connected to the target cluster. Once it's executed, Helm installs all the required resources and starts the gateway.

:::div{.hint}
Full documentation for the Octopus Argo CD gateway Helm chart values can be found in this [GitHub repository](https://github.com/OctopusDeploy/octopus-argocd-gateway-chart-docs/tree/main).
:::

:::figure
![Octopus Argo CD Gateway Wizard Helm command Page](/docs/img/argo-cd/gateway-wizard-helm-comand.png)
:::

:::div{.hint}
The helm command includes a one-hour bearer token that the gateway uses when it first initializes, to register itself with Octopus Server.
:::

:::div{.hint}
The terminal's Kubernetes context must have enough permissions to create namespaces and install resources into that namespace. If you wish to install the gateway into an existing namespace, remove the `--create-namespace` flag and change the value after `--namespace`.
:::

:::div{.warning}
By default, the Octopus Argo CD gateway verifies TLS certificates before making a connection. If your Argo CD instance is hosted with a self-signed TLS certificate, or isn't using a TLS certificate at all, the gateway will fail to connect. This can be prevented by setting one of the following values on the Helm install:

```bash
# For self-signed certificates - Disables TLS certificate verification
gateway.argocd.insecure="true"

# For no certificates - Disables TLS entirely; all traffic between the gateway and Argo CD will be unencrypted
gateway.argocd.plaintext="true"
```
:::

If left open, the installation dialog waits for the gateway to establish a connection and run a health check. Once successful, the Octopus Argo CD gateway is ready for use.
:::figure
![Octopus Argo CD Gateway Wizard successful installation](/docs/img/argo-cd/gateway-wizard-success.png)
:::

:::div{.hint}
A successful health check indicates that the gateway can successfully connect to the target Argo CD instance and is communicating with Octopus Server.
:::

### Advanced Configuration

#### Trusting Certificates

If your Octopus Server or Argo CD instance is hosted using self-signed certificates, the gateway will likely not be able to connect. To get the gateway application to trust your certificates, you can provide them in two ways (requires `>= v1.12.0` of the Octopus Argo CD gateway Helm chart):

**Passing certificates as Helm values:**

```bash
helm upgrade --atomic \
  --version "1.0.0" \
  --namespace "{{GATEWAY_NAMESPACE}}" \
  --reset-then-reuse-values \
  --set registration.octopus.serverCertificate="{{BASE64_ENCODED_CERTIFICATE}}" \
  --set gateway.octopus.serverCertificate="{{BASE64_ENCODED_CERTIFICATE}}" \
  --set gateway.argocd.serverCertificate="{{BASE64_ENCODED_CERTIFICATE}}" \
  --set gateway.octopus.serverThumbprint="" \
  --set gateway.argocd.insecure="false" \
  --set gateway.argocd.plaintext="false" \
  {{EXISTING_HELM_RELEASE_NAME}} \
  oci://registry-1.docker.io/octopusdeploy/octopus-argocd-gateway-chart
```

`registration.octopus.serverCertificate` - Refers to the certificate that Octopus is hosting its web portal and HTTP API with. This certificate is verified during the automatic registration process.

`gateway.octopus.serverCertificate` - Refers to the certificate that Octopus is hosting its gRPC web host with.

`gateway.argocd.serverCertificate` - Refers to the certificate that the Argo CD server is using to host its web portal and API.
[Read more about configuring TLS in Argo CD here](https://argo-cd.readthedocs.io/en/stable/operator-manual/tls/#configuring-tls-for-argocd-server)

These certificates are stored in a secret within your Kubernetes cluster, `octopus-argocd-gateway-certificates`, and are loaded into the gateway's trust store during startup.

**Passing certificates via existing Kubernetes secret:**

Certificates can be loaded by the gateway using an existing Kubernetes secret in the same namespace as the gateway.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: your-certs
  namespace: octo-argo-gateway
data:
  octopus-server-grpc-certificate.pem: {{BASE64_ENCODED_CERTIFICATE}}
  octopus-server-http-certificate.pem: {{BASE64_ENCODED_CERTIFICATE}}
  argo-cd-server-certificate.pem: {{BASE64_ENCODED_CERTIFICATE}}
  other-certificate-to-trust.pem: {{BASE64_ENCODED_CERTIFICATE}}
type: Opaque
```

```bash
helm upgrade --atomic \
  --version "1.0.0" \
  --namespace "{{GATEWAY_NAMESPACE}}" \
  --reset-then-reuse-values \
  --set gateway.serverCertificateSecretName="YOUR_KUBERNETES_SECRET_NAME" \
  {{EXISTING_HELM_RELEASE_NAME}} \
  oci://registry-1.docker.io/octopusdeploy/octopus-argocd-gateway-chart
```

All certificates included in the secret will be loaded into the trust store during the gateway's startup. If any of the keys in your secret duplicate the keys in `octopus-argocd-gateway-certificates`, the values in your secret take priority. For example, if you set `octopus-server-grpc-certificate.pem` in your own secret and also pass a value to `gateway.octopus.serverCertificate`, the certificate in your secret will overwrite the one in the Helm values.

### Health Checks and Updating

Octopus performs health checks on the gateway in a manner similar to that used for workers and deployment targets. The gateway uses an internal cronjob to ensure it is always running the latest version.

## Status Display

Your connected 'Argo CD Instances' appear under the Octopus Infrastructure menu.
These pages let you:

- View and edit the Octopus-managed properties of your Argo CD Instance (e.g. permitted environments)
- View known Argo Applications, and how they map to Octopus projects/environments/tenants
- View connectivity and health related data of the instance and gateway

## Next steps

After the gateway has been configured, you need to define the relationships between Argo CD Applications and Octopus Projects, Environments and/or Tenants. See [Scoping Annotations](/docs/argo-cd/annotations) for more information.

## Versioning

The Octopus Argo CD gateway Helm chart follows [Semantic Versioning](https://semver.org/). Generally, version updates can be interpreted as follows:

- *major* - Breaking changes to the chart. This may include adding or removing resources, breaking changes in the Octopus Argo CD gateway application image, or breaking changes to the structure of the `values.yaml`. Upgrading to a major version might involve modifying your gateway's configuration or upgrading to a version of Octopus that supports the major version.
- *minor* - New non-breaking features. New features or improvements to the Octopus Argo CD gateway application or the Helm chart itself.
- *patch* - Minor non-breaking bug fixes or changes that do not introduce new features.

## Troubleshooting

### Argo CD TLS Errors

If your gateway is unable to connect to your Argo CD instance due to TLS errors, it is likely due to the certificate that Argo CD is serving traffic with.
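As a first diagnostic step, you can inspect the certificate your Argo CD endpoint actually serves. This is a generic TLS check, not an Octopus-specific command, and the hostname below is a placeholder:

```bash
# Show issuer, subject, and validity of the certificate presented by Argo CD.
# A self-signed certificate typically shows the same issuer and subject.
openssl s_client -connect argocd.example.com:443 -servername argocd.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates
```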
#### Self Signed Certificate

If you are getting an error that looks like this:

```text
tls: failed to verify certificate: x509: certificate signed by unknown authority
```

it is most likely because Argo CD is using a self-signed certificate. If your certificate is intentionally self-signed, you can disable certificate verification as follows.

Using Helm for an existing installation:

```bash
helm upgrade --atomic \
  --version "1.0.0" \
  --namespace "{{GATEWAY_NAMESPACE}}" \
  --reset-then-reuse-values \
  --set gateway.argocd.insecure="true" \
  --set gateway.argocd.plaintext="false" \
  {{EXISTING_HELM_RELEASE_NAME}} \
  oci://registry-1.docker.io/octopusdeploy/octopus-argocd-gateway-chart
```

:::div{.warning}
By setting `gateway.argocd.insecure="true"`, TLS certificate verification will no longer be performed between the gateway and the Argo CD instance. Make sure this configuration is necessary to avoid potential security issues.
:::

#### No Certificate

If you are running your Argo CD instance without a certificate because SSL is terminated at a load balancer, the gateway will likely fail to connect with the following error:

```text
transport: authentication handshake failed: EOF
```

This is because the gateway is configured by default to require encrypted traffic. If you intentionally run without a certificate, you can disable encryption between the gateway and Argo CD as follows:

```bash
helm upgrade --atomic \
  --version "1.0.0" \
  --namespace "{{GATEWAY_NAMESPACE}}" \
  --reset-then-reuse-values \
  --set gateway.argocd.insecure="false" \
  --set gateway.argocd.plaintext="true" \
  {{EXISTING_HELM_RELEASE_NAME}} \
  oci://registry-1.docker.io/octopusdeploy/octopus-argocd-gateway-chart
```

:::div{.warning}
By setting `gateway.argocd.plaintext="true"`, all traffic between the gateway and Argo CD will be unencrypted. Make sure this configuration is necessary to avoid potential security issues.
:::

## Deleting an Octopus Argo CD Gateway

When removing a gateway, two operations are required:

1. Deregister the gateway from Octopus Server
2. Remove the application from your cluster

The Octopus UI allows you to perform both of these operations. Navigate to **Infrastructure ➜ Argo CD Instances** and select the instance whose gateway is to be removed. From the overflow menu, select **Delete**. A confirmation dialog displays the Helm command that, when executed, removes the gateway from your cluster.

# Best Practices

Source: https://octopus.com/docs/best-practices.md

This section provides a set of best practices and implementation guides to use with your Octopus Deploy instance. Like any best practices guide, it won't cover 100% of all scenarios; adapt and modify them to match your company's requirements. It is also a living document; it will change as new features are added to Octopus Deploy and as we help more customers.

## Whitepapers

For long-form best practices and recommendations, please refer to our whitepapers.

- [Best Practices for Self-Hosted Octopus Deploy HA/DR](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr)

## Deployments

The Deployments section covers our recommendations and implementation guides for configuring your deployment (and runbook) processes. They are meant for platform engineers, DevOps engineers, or anyone looking to create golden paths.
The topics covered are:

- [Lifecycles and Environments](/docs/best-practices/deployments/lifecycles-and-environments)
- [Deployment and Runbook Process](/docs/best-practices/deployments/deployment-and-runbook-processes)
- [Variables](/docs/best-practices/deployments/variables)
- [Step Templates and Script Modules](/docs/best-practices/deployments/step-templates-and-script-modules)
- [Notifications](/docs/best-practices/deployments/notifications)
- [Releases and Deployments](/docs/best-practices/deployments/releases-and-deployments)

## Octopus Administration

The Octopus Administration section covers our recommendations and implementation guides for anyone responsible and accountable for administering Octopus Deploy for their organization.

The topics covered are:

- [Environments, deployment targets, and target tags](/docs/best-practices/octopus-administration/environments-and-deployment-targets-and-roles)
- [Projects and Project group structure](/docs/best-practices/octopus-administration/projects-and-project-groups)
- [Users, roles, and teams](/docs/best-practices/octopus-administration/users-roles-and-teams)
- [Partition Octopus with Spaces](/docs/best-practices/octopus-administration/partition-octopus-with-spaces)
- [Offload work onto Workers](/docs/best-practices/octopus-administration/worker-configuration)
- [Ongoing maintenance](/docs/best-practices/octopus-administration/ongoing-maintenance)

## Self-Hosted Octopus Deploy

The self-hosted Octopus Deploy section covers our recommendations and implementation guides for customers who wish to self-host Octopus Deploy on their own infrastructure.
The topics covered are:

- [Installation Guidelines](/docs/best-practices/self-hosted-octopus/installation-guidelines)
- [High Availability](/docs/best-practices/self-hosted-octopus/high-availability)

# Octopus AI Assistant Cookbook

Source: https://octopus.com/docs/octopus-ai/assistant/cookbook.md

This cookbook includes ready-to-use prompts that help automate and optimize your work with Octopus Deploy.

## Available Recipes

- [Analyze step template usage](/docs/octopus-ai/assistant/cookbook/analyze-step-template-usage)
- [Audit environment naming and counts](/docs/octopus-ai/assistant/cookbook/audit-environment-naming-and-counts)
- [Audit PCI deployments](/docs/octopus-ai/assistant/cookbook/audit-pci-deployments)
- [Audit target role distribution](/docs/octopus-ai/assistant/cookbook/audit-target-role-distribution)
- [Check retention policy consistency](/docs/octopus-ai/assistant/cookbook/check-retention-policy-consistency)
- [Create a .NET Azure App deployment process](/docs/octopus-ai/assistant/cookbook/create-a-net-azure-app-deployment-process)
- [Detect overlapping variable names](/docs/octopus-ai/assistant/cookbook/detect-overlapping-variable-names)
- [Detect unused variables](/docs/octopus-ai/assistant/cookbook/detect-unused-variables)
- [Evaluate deployment frequency](/docs/octopus-ai/assistant/cookbook/evaluate-deployment-frequency)
- [Fix variable binding errors](/docs/octopus-ai/assistant/cookbook/fix-variable-binding-errors)
- [Generate deployment rollback plan](/docs/octopus-ai/assistant/cookbook/generate-deployment-rollback-plan)
- [Improve multi-tenant deployments](/docs/octopus-ai/assistant/cookbook/improve-multi-tenant-deployments)
- [Investigate production deployment failure](/docs/octopus-ai/assistant/cookbook/investigate-production-deployment-failure)
- [Kubernetes deployment pipeline](/docs/octopus-ai/assistant/cookbook/kubernetes-deployment-pipeline)
- [List failed deployments](/docs/octopus-ai/assistant/cookbook/list-failed-deployments)
- [Recommend variable scoping](/docs/octopus-ai/assistant/cookbook/recommend-variable-scoping)
- [Report runbook scheduling](/docs/octopus-ai/assistant/cookbook/report-runbook-scheduling)
- [Report skipped steps](/docs/octopus-ai/assistant/cookbook/report-skipped-steps)
- [Resolve rolling deployment timeouts](/docs/octopus-ai/assistant/cookbook/resolve-rolling-deployment-timeouts)
- [Restart Windows services Runbook](/docs/octopus-ai/assistant/cookbook/restart-windows-services-runbook)
- [Review runbook usage](/docs/octopus-ai/assistant/cookbook/review-runbook-usage)
- [Security best practices check](/docs/octopus-ai/assistant/cookbook/security-best-practices-check)
- [Speed up Lifecycle phases](/docs/octopus-ai/assistant/cookbook/speed-up-lifecycle-phases)
- [Summarize Ops Runbooks](/docs/octopus-ai/assistant/cookbook/summarize-ops-runbooks)
- [Summarize tag set coverage](/docs/octopus-ai/assistant/cookbook/summarize-tag-set-coverage)
- [Summarize worker pool health](/docs/octopus-ai/assistant/cookbook/summarize-worker-pool-health)

# Octopus database

Source: https://octopus.com/docs/administration/data.md

Octopus Deploy uses a Microsoft SQL Server database to store environments, projects, variables, releases, and deployment history. Some data is also [stored on the file system](/docs/administration/managing-infrastructure/server-configuration-and-file-storage).

## Install Octopus Server {#installing}

Octopus Server requires access to a SQL Server to store relational data. You can create the database ahead of time or let the installer create it for you. Refer to [SQL Server Database requirements](/docs/installation/sql-server-database) for more information on the SQL Server editions supported by Octopus Deploy and for installation instructions.

## Routine maintenance {#maintenance}

You are responsible for the routine maintenance of your Octopus database. Performance problems with your SQL Server will make Octopus run and feel slow and sluggish.
You should implement a routine maintenance plan for your Octopus database. Here is a [guide](https://oc.to/SQLServerMaintenanceGuide) (free e-book) for maintaining SQL Server. Our [Performance](/docs/administration/managing-infrastructure/performance/#sql-maintenance) section has some general recommendations that may help you get started.

### Database backups {#backups}

You are responsible for taking database backups and testing your disaster recovery plans. Refer to [Backup and restore](/docs/administration/data/backup-and-restore) for more information about backing up Octopus Deploy and recovering from failure.

### High availability databases {#high-availability}

If you are looking for a highly available database solution, we recommend using [Always On Availability Groups](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server?view=sql-server-2017). Octopus Server does not support running against a SQL database with Database Mirroring or SQL Replication enabled.

## Schema {#schema}

Octopus should be given its own database, which must not be shared with any other applications. Octopus Server maintains its own schema: it creates the initial database schema upon installation and updates the schema when you upgrade Octopus Server. The System Integrity Check at **Configuration ➜ Diagnostics** will let you know if the database schema has drifted from its intended state.

:::figure
![](/docs/img/administration/data/run-system-integrity-check.png)
:::

### Modifying the schema {#modifying-the-schema}

Customizing the Octopus database may cause problems when upgrading Octopus Server and make your installation difficult to support. There are certain scenarios where you can modify the schema safely (indexes, statistics), and other scenarios which will cause Octopus Server to fail (tables, views, stored procedures, functions).
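Since schema customizations can affect upgrades and supportability, it can be useful to keep an inventory of the indexes in your Octopus database. A minimal sketch using `sqlcmd`; the instance and database names (`localhost`, `OctopusDeploy`) are assumptions, so substitute your own:

```shell
# List all named indexes in the Octopus database so custom additions
# can be reviewed before an upgrade. "localhost" and "OctopusDeploy"
# are placeholder names - substitute your own instance and database.
sqlcmd -S localhost -d OctopusDeploy -Q "
  SELECT t.name AS table_name, i.name AS index_name, i.type_desc
  FROM sys.indexes i
  JOIN sys.tables t ON t.object_id = i.object_id
  WHERE i.name IS NOT NULL
  ORDER BY t.name, i.name;"
```

Comparing this listing before and after changes makes it easy to spot indexes you have added yourself.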
### Index recommendations

Each installation of Octopus Deploy will have different data and usage patterns. Some of our customers have huge environments and a few projects; others have many projects deploying to small environments. Some customers may create hundreds of releases each day, while others deploy releases every few days. As a result, the Database Engine Tuning Advisor, or hosted offerings like Azure SQL Database, may suggest performance optimizations like creating additional indexes.

**Feel free to create database indexes that suit your scenario, but please understand the impact of modifying the schema.** When you upgrade Octopus Server, we make certain assumptions about the database schema, and the presence (or absence) of indexes may cause your upgrade to fail. The upgrade process will warn you if it finds any custom indexes.

If you feel an index would benefit everyone using Octopus, please contact our [support team](https://octopus.com/support) so we can consider making it part of the standard database schema.

:::div{.hint}
**Azure SQL Database automatic index management**

To ensure that you are aware of which indexes exist, we suggest turning off the Azure SQL feature to [automatically apply performance recommendations](https://learn.microsoft.com/en-us/azure/azure-sql/database/database-advisor-implement-performance-recommendations?view=azuresql) and apply the recommendations manually instead.
:::

# Overview

Source: https://octopus.com/docs/argo-cd/steps.md

## Steps

Octopus offers two built-in steps that can modify a mapped Argo CD application in different ways:

1. Update Argo CD Application Image Tags
2. Update Argo CD Application Manifests

### Update Argo CD Application Image Tags

This step identifies the images referenced by an application and updates their image tags to the values defined in the Octopus release.
This step lets you update the versions of your applications via Octopus, while keeping the bulk of your manifests defined in your Git repository. You can read more about the [update application image tags step](/docs/argo-cd/steps/update-application-image-tags).

### Update Argo CD Application Manifests

This step populates an Argo CD Application's Git repository with content generated by filling user-provided templates with [Octopus Variables](/docs/projects/variables/getting-started). You can read more about the [update application manifests step](/docs/argo-cd/steps/update-application-manifests).

## Setting up Git Credentials

Octopus needs to clone and push changes to the Git repositories backing your Argo CD applications. Unlike other Octopus features where you select a Git credential directly, the Argo CD steps automatically determine which credential to use by matching the repository URL from your Argo CD application against the [repository restrictions](/docs/infrastructure/git-credentials#repository-restrictions) configured on your Git credentials. Make sure your Git credentials have repository restrictions that include the URLs of the repositories used by your Argo CD applications.

## Common Configuration

Some configuration is shared between both of the steps mentioned above. These shared fields are explained below, with the configuration specific to each step covered on their respective pages.

### Argo CD Applications

[Argo CD Applications](/docs/argo-cd/steps/argo-cd-applications-view) is an aid to help determine which instances and applications are going to be updated when executing this step.

### Git Commit Settings

The following options control how the change is applied via Git.

#### Commit Message

Commit message lets you specify the summary and description of the change. The description will be automatically populated if left empty.
The content here will be reused for pull request messages if you have selected to merge the change via pull request.

:::div{.warning}
If the commit summary or description references a [Sensitive Variable](/docs/projects/variables/sensitive-variables), the deployment will fail. This ensures sensitive data is not leaked to Git via the commit/PR message.
:::

#### Git Commit Method

Git commit method specifies *how* changes are applied to your target branch: either committed directly, or merged via a pull request. If you decide to open pull requests for your changes, you may choose to do so for all environments or only a specific selection. Environments not selected will commit directly to the target branch.

:::div{.warning}
Currently, pull requests can only be created for GitHub, GitLab, and Azure DevOps based repositories. Please [let us know](https://oc.to/roadmap-argo-cd) which other providers you would like to see supported.
:::

#### Step Verification

![Step verification options in the process editor](/docs/img/argo-cd/steps/argocd-step-verification.png)

:::div{.info}
Step verification is available from 2026.1.
:::

There are three options for how Octopus determines whether your step completed successfully.

##### Direct commit or pull request created

Octopus will ensure that the changes were successfully committed to Git, but perform no further checks.

##### Pull request merged

Octopus will pause the deployment task until all created pull requests are merged. For environments that do not create pull requests, Octopus will not wait. Octopus reviews this status once every 60 seconds. While the task is paused, this deployment will not count towards your task cap.

:::div{.info}
Pull request merged verification is available from 2026.2.
:::

##### Argo CD Application is healthy

Octopus will wait until your Argo CD instance reports that all of the applications are in sync with the created commit and/or pull request, and that all resources are created and healthy. You can check the current status of your deployment on the Live Object Status page. If the health status is `Healthy` and the sync status is `In Sync`, the deployment will resume after the next evaluation. Octopus reviews this status once every 30 seconds. While the task is paused, this deployment will not count towards your task cap.

###### Step Timeout (Optional)

When set, this will cause the deployment step to fail if Argo CD does not report a healthy and in-sync status before the timeout period.

### Trigger Sync

Enabling this option causes Argo CD to explicitly sync applications with the changes committed to Git by this step. This can be done for all environments, or just those specifically selected. This option is intended to be as close to clicking the sync button in the Argo CD UI as the API makes possible.

#### Sync Policy

The application sync will be applied using the pre-configured sync options/sync policy for your application. Information about Argo CD's sync policies can be found in the [official documentation](https://oc.to/argocd-sync-policy-docs).

## Output Variables

When these steps are executed, they each create a number of [output variables](/docs/projects/variables/output-variables) containing information about the step's execution:

| Variable name | Content |
| ------------------ | --------------------------------------------------------------------------------------- |
| PullRequest.Title | The title of the PR created in the step; empty if no PR was created. |
| PullRequest.Number | The unique identifier of the PR within your Git repository; empty if no PR was created. |
| PullRequest.Url | The URL of the PR created; empty if no PR was created. |

These variables will be available to subsequent steps in your deployment process.

# Deployment verification in Octopus

Source: https://octopus.com/docs/kubernetes/deployment-verification.md

Octopus can leverage information from a Kubernetes cluster to make step execution status more informative. With this feature enabled, Octopus compares the deployed resources' status with the desired state (the applied configuration). The step will only complete if the actual state meets the desired state; otherwise, the step will fail. Octopus also shows a snapshot (from the moment of deployment) of deployed object status on the deployment screen — the `Object Snapshot` view on the `KUBERNETES` tab.

:::figure
![A screenshot of the Object Snapshot tab](/docs/img/deployments/kubernetes/object-status/kubernetes-tab-object-snapshot.png)
:::

## Where it is available

Step Verification is available for these steps:

* Deploy Kubernetes YAML
* Deploy a Helm Chart
* Deploy with Kustomize
* Configure and apply Kubernetes resources (except for the Blue/Green deployment strategy)
* Configure and apply a Kubernetes ConfigMap
* Configure and apply a Kubernetes Secret
* Configure and apply a Kubernetes Ingress
* Configure and apply a Kubernetes Service

Object status is disabled for all steps added before the feature was introduced, and enabled by default in new steps added afterwards.

## How to configure

### Helm

Use the `Step Verification` section on the step configuration page.

:::figure
![A screenshot of the Helm Step Verification configuration section](/docs/img/deployments/kubernetes/object-status/helm-step-verification.png)
:::

Enabling the option adds the Helm [`--wait`](https://helm.sh/docs/helm/helm_upgrade/#options) parameter to the upgrade command. Additionally, when enabled, you can also enable the Helm parameter [`--wait-for-jobs`](https://helm.sh/docs/helm/helm_upgrade/#options) to wait for Jobs to complete before determining step success.
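Outside Octopus, the equivalent behavior can be reproduced by passing the same flags to `helm upgrade` directly. A minimal sketch; the release name, chart path, and namespace are placeholders:

```shell
# Roughly what the Helm step runs when Step Verification is enabled:
# --wait blocks until the deployed resources report ready, and
# --wait-for-jobs additionally waits for Jobs to complete.
# my-release, ./my-chart, and my-namespace are placeholders.
helm upgrade --install my-release ./my-chart \
  --namespace my-namespace \
  --wait \
  --wait-for-jobs \
  --timeout 5m
```

The `--timeout` flag plays the same role as the step timeout described below: if the resources do not become ready within the window, the upgrade (and therefore the step) fails.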
### Other steps

Use the `Step Verification` section on the step configuration page.

:::figure
![A screenshot of the Step Verification configuration section](/docs/img/deployments/kubernetes/object-status/step-verification-configuration.png)
:::

Use the first option (`Verify that Kubernetes objects reached their desired state`) to enable the feature. Choosing `Don't do any verification checks` disables it. You can configure two extra parameters:

* **Step timeout** is the maximum time (in seconds) a deployment step can run before termination. This setting is intended to prevent a step from running indefinitely or causing delays in the overall deployment process. If you disable the parameter (checkbox), the step is allowed to run indefinitely.
* **Wait for Jobs to complete during deployment** determines whether Octopus should wait for the successful completion of the Jobs deployed in this step. If unchecked, Octopus considers a step execution successful once the Jobs are created, without waiting for their execution.

After saving the new configuration, you need to create and deploy a new release to see the changes.

## How it works

### Helm

Helm has an existing mechanism for tracking the status of deployed resources: the [`--wait`](https://helm.sh/docs/helm/helm_upgrade/#options) parameter on the upgrade command. Rather than providing another mechanism on top of this, Octopus adds this parameter to the upgrade parameters and relies on Helm to track the deployed resources and fail the step if a resource fails.

### Other steps

When a deployment to a Kubernetes cluster is created, Octopus identifies the objects to create or update during the deployment. It then checks the status of these objects continuously throughout the deployment process. Apart from the objects defined directly in the project, Octopus also tracks the status of their child objects.
For example, the ReplicaSets and Pods that belong to a Deployment are included along with the Deployment itself, even though they are not defined directly. The step will succeed as soon as Kubernetes achieves the desired state. When a step timeout has been set, the step will fail if Kubernetes doesn't achieve the desired state within the timeout. The only exception to this rule is for a stand-alone Pod (without a ReplicaSet above it) or a Job: the step will fail early if these resources reach an unrecoverable state.

:::figure
![A K8s object status diagram](/docs/img/deployments/kubernetes/object-status/K8s-object-status-logics.jpg)
:::

## How to use

Octopus changes the meaning of the step execution status after you enable Step Verification; no additional actions are required. The new step status indicates that Octopus ensured the desired configuration was achieved on the target cluster and remained stable for a given number of seconds (the status stabilization timeout value).

Users can also observe live updates from the cluster on the Kubernetes tab (Deployment page).

:::figure
![A screenshot of the Object Status tab](/docs/img/deployments/kubernetes/object-status/kubernetes-tab.png)
:::

Octopus displays resource status in a dedicated table for each deployed resource. The table is live during step execution (until the end of the stabilization period). After that, the table stops receiving updates and remains a snapshot for future reference. At a given point in time, an object can have one of four statuses:

| Label | Status Icon |
|:----------------------------|:----------------------------------------:|
| In progress | |
| Success | |
| Error | |
| Timed out while in progress | |

If there are multiple steps deploying Kubernetes resources, each step will have a separate section on the tab.

### Resource manifests

The manifest that deployed a particular resource can be viewed if the name of the resource is a link.
Resources that are created by other resources, such as `ReplicaSets` or `Pods` created from `Deployments`, will not have a viewable manifest.

:::figure
![A screenshot of the Object Status resource name link](/docs/img/deployments/kubernetes/object-status/resource-drawer-link.png)
:::

Clicking the name opens a drawer showing the manifest for the resource.

:::figure
![A screenshot of the Object Status resource drawer](/docs/img/deployments/kubernetes/object-status/resource-drawer.png)
:::

## Useful links

* [Find more details in the blog post](https://octopus.com/blog/live-updates-kubernetes-objects-deployments)

# Deployments

Source: https://octopus.com/docs/best-practices/deployments.md

This section covers our recommendations and implementation guides for configuring your deployment (and runbook) processes. They are meant for platform engineers, DevOps engineers, or anyone looking to create golden paths.

The topics covered are:

- [Environments, deployment targets, and target tags](/docs/best-practices/deployments/environments-and-deployment-targets-and-roles)
- [Lifecycles and Environments](/docs/best-practices/deployments/lifecycles-and-environments)
- [Projects and Project group structure](/docs/best-practices/deployments/projects-and-project-groups)
- [Deployment and Runbook Process](/docs/best-practices/deployments/deployment-and-runbook-processes)
- [Variables](/docs/best-practices/deployments/variables)
- [Step Templates and Script Modules](/docs/best-practices/deployments/step-templates-and-script-modules)
- [Notifications](/docs/best-practices/deployments/notifications)
- [Releases and Deployments](/docs/best-practices/deployments/releases-and-deployments)

# Deployments

Source: https://octopus.com/docs/deployments.md

> Let's deploy!

Great deployments are stress-free deployments. They should be non-events that 'just work' without the need for sign-off from multiple gatekeepers. You can use Octopus to deploy anything, anywhere.
Whether it's [Kubernetes](/docs/deployments/kubernetes), [Linux](/docs/infrastructure/deployment-targets/linux/), or [Windows](/docs/deployments/windows) virtual machines, [Amazon Web Services](/docs/deployments/aws), [Azure](/docs/deployments/azure), or [Google Cloud](/docs/deployments/google-cloud), if Octopus can speak to it via our Tentacle agent, SSH, command-line, or a web service, Octopus can deploy to it.

## Why deploy with Octopus?

Octopus models deployments in advanced ways, letting you tame complexity at scale. Octopus is about more than just automating your existing deployment process. It has features that streamline the complex aspects of deployments, where manual and scripted deployments often fail.

Deploying software with Octopus involves [packaging your applications](/docs/packaging-applications/) and [configuring your infrastructure](/docs/infrastructure/). With those two completed, you'll use the following features to complete your deployment setup in Octopus.

### Deployment process

The Octopus [deployment process](/docs/projects/deployment-process/) lets you define all the tasks needed to deploy your software. The process acts like a deployment checklist, making sure each task gets completed before proceeding to the next one. Critically, unlike a manual checklist, Octopus never forgets to run a step and always completes steps in the correct order.

:::figure
![Octopus deployment process](/docs/img/shared-content/concepts/images/deployment-process.png)
:::

### Steps

Octopus provides a library of [pre-built step templates](/docs/projects/steps) for deployment tasks, offering a simpler configuration experience by separating scripts and API calls. These steps can be easily added to processes, with configurations allowing for conditions based on environments, channels, previous steps, or package acquisition. Options include running steps in parallel or sequence, setting time limits, and configuring retries for failed steps.
:::figure
![Octopus process step templates](/docs/img/deployments/octopus-step-templates.png)
:::

There's also a [community library](https://library.octopus.com/) with over 500 step templates, so you can complete tasks without coding. You can still create custom script steps or develop your own custom step templates for use across projects.

#### Guided failure mode and step retries

You can configure deployments to fail in Octopus when there's an error. Guided failure mode and step retries are alternatives to outright failing the deployment.

[Guided failure](/docs/releases/guided-failures) mode pauses the deployment when a step fails and lets you retry the failed step (after fixing any errors if needed), skip it, or fail the deployment.

When switched on, step retries re-attempt the failed step up to 3 times before giving up and failing the deployment. This is useful when a temporary network issue makes the destination briefly unavailable.

### Variables

Octopus [variables](/docs/projects/variables/) let you easily apply the correct values during a deployment. Octopus can manage simple values, secrets, and accounts as variables.

:::figure
![Octopus variables](/docs/img/shared-content/concepts/images/variables.png)
:::

You can scope variables by:

- Environments
- Deployment target tags
- Deployment targets
- Deployment processes and steps
- Channels

### Releases

A [release](/docs/releases/) in Octopus is a snapshot of the deployment process and assets at creation, ensuring consistency in deployments. Changes to assets don't affect existing releases. Tenant variables are excluded, so it's easier to onboard new tenants without needing a new release.

## Getting started with deployments

Use the navigation to discover deployment examples for different types of applications and technologies using Octopus. It also includes [common deployment patterns and practices](/docs/deployments/patterns).
# Ephemeral Environments

Source: https://octopus.com/docs/infrastructure/ephemeral-environments.md

Ephemeral environments in Octopus Deploy allow you to automatically create test environments on-demand to gain confidence in your changes while helping to keep your infrastructure costs down. Ephemeral environments are designed to be created and removed as part of testing changes within the development lifecycle. [Releases](/docs/releases) can be deployed to them in the same way as to long-lived environments such as **Staging** or **Production**, and they provide additional capabilities to provision and deprovision the infrastructure associated with the environment using [Runbooks](/docs/runbooks).

## Getting started

Ephemeral environments are configured within Projects; see the [Getting Started](/docs/projects/ephemeral-environments) guide.

## Scoping variables, deployment targets and accounts

Ephemeral environments will be created and removed regularly as part of testing changes. To avoid requiring ongoing configuration of variables, deployment targets and accounts, ephemeral environments are represented by a **Parent Environment**. Parent environments are configured alongside existing long-lived environments in the Octopus Web Portal but have key differences:

- Parent environments cannot be used in lifecycles.
- Parent environments cannot be deployed to.

Parent environments can be selected alongside existing long-lived environments in the following areas of Octopus:

- Deployment targets
- Accounts
- Certificates
- Variable sets
- Project variables
- User roles assigned to teams

## Availability

Ephemeral environments are available to all cloud and self-hosted customers from version `2025.4`.

# Ephemeral Environments

Source: https://octopus.com/docs/projects/ephemeral-environments.md

Ephemeral environments in Octopus Deploy allow you to automatically create test environments on-demand to gain confidence in your changes while helping to keep your infrastructure costs down.
Ephemeral environments integrate smoothly into your existing development workflows by building on existing Octopus features such as [Releases](/docs/releases), [Channels](/docs/releases/channels) and [Runbooks](/docs/runbooks). Octopus can automatically create and deploy to an ephemeral environment from releases created within a specifically configured channel in a project, and supports provisioning and deprovisioning of associated infrastructure using Runbooks. ## Getting started To configure Ephemeral Environments for your project: - Select **Deploy** from the main navigation in the Octopus Web Portal and select your project. - Select the **Ephemeral Environments** navigation menu in the sidebar. - Click the **Configure Ephemeral Environments** button. - Follow the configuration wizard to enable the feature for your project. ![Getting started with ephemeral environments from within a project](/docs/projects/ephemeral-environments/getting-started.png) ### Parent environment A parent environment provides [scoping of variables, deployment targets and accounts for ephemeral environments](/docs/infrastructure/ephemeral-environments#scoping-variables-deployment-targets-and-accounts). Parent environments are not included in Lifecycles and cannot be deployed to. Enter the name for a new parent environment or select an existing parent environment, then click **Next**. :::div{.hint} **Tip:** Give your parent environment a recognizable name that describes what you intend to use ephemeral environments for. Examples might include "Pull request environments" or "Test environments". ::: ### Auto Deploy You can choose to automatically deploy releases to ephemeral environments when they are created. This can help to streamline your workflows by reducing the number of manual steps required to get your changes deployed. When auto deployment is configured, Octopus will automatically create a new ephemeral environment for you from releases in your project. 
The name of each environment can be configured using an Environment Name Template. Templates support the same powerful syntax as Variables. Any [system variable for a release](/docs/projects/variables/system-variables#release) can be used as part of the template. When auto deployment is not configured, you need to manually create ephemeral environments and deploy releases to them. You can do this using the Octopus Portal, API, and CLI. In the wizard, select whether to automatically deploy releases to ephemeral environments. If you choose automatic deployment, you must also provide an environment name template that will be used to name each environment. #### Environment Name Template :::div{.hint} **Tips:** - Environment names only support a specific set of characters; Octopus will automatically replace the following invalid characters with a `-`: `< > : " / \ | ? * { }` - Environment names can have spaces in them. Leading and trailing dashes and underscores will be removed from your environment name. - Environment names have a limit of 50 characters; you can use [Variable filters](/docs/projects/variables/variable-filters) to limit the length of the name if needed. ::: ##### Custom Fields Releases support Custom Fields, which can be used to configure the name of an ephemeral environment. See [Using custom fields in releases](/docs/releases/creating-a-release#custom-fields) for more information. :::div{.hint} Remember that custom fields referenced in your Environment Name Template must be provided with any release that you use to create an ephemeral environment. 
::: #### Examples | Template | Release Data | Environment name | Notes | | ----------------------------------------------------------------------------------------------- | ------------------------------------------------ | -------------------- | --------------------------------------------------------------------------------------------------- | | `#{Octopus.Release.Git.BranchName}` | Branch: `ava/my-new-feature` | `ava-my-new-feature` | The `Octopus.Release.Git.BranchName` variable is only supported for projects using version control. | | `pr-#{Octopus.Release.CustomFields[PullRequestNumber]}` | PullRequestNumber: 451 | `pr-451` | Provide the `PullRequestNumber` custom field when creating the release. | | `#{Octopus.Release.CustomFields[TeamName]} - #{Octopus.Release.CustomFields[JiraTicketNumber]}` | TeamName: `My Team`, JiraTicketNumber: `TST-150` | `My Team - TST-150` | Multiple custom fields from a release can be combined. | ### Provisioning Any infrastructure required by an ephemeral environment can be created using a Runbook. Octopus will automatically run this Runbook as needed before deploying a release to the environment. Create a new runbook, select an existing one or select to skip provisioning and click Next. ### Deprovisioning Any infrastructure used by an ephemeral environment can be removed using a Runbook. Octopus will automatically run this Runbook as needed as part of deprovisioning the environment. Create a new runbook, select an existing one or select to skip deprovisioning and click Next. ### Review and confirm Review the selected configuration and click Confirm. You can go back and adjust before confirming. ![Confirming the configuration of ephemeral environments for a project](/docs/projects/ephemeral-environments/confirm-ephemeral-environments-configuration.png) Ephemeral environments are now configured for your project. A new channel has been created in the project to manage the creation and deployment of ephemeral environments. 
Click **Got it** to continue to create a new environment from a release. ![Ephemeral environments successfully configured for a project](/docs/projects/ephemeral-environments/ephemeral-environments-configured.png) ## Creating an ephemeral environment ### Automatically If automatic deployment is selected, Octopus will automatically create an ephemeral environment when a release is created in the channel configured for ephemeral environments. To create an ephemeral environment, create a release in the new channel configured within the project. Octopus will create a new environment and deploy the release to it. A release can be created using the: - Octopus Web Portal - Octopus API - [`OctopusDeploy/create-release-action` GitHub Action](https://github.com/OctopusDeploy/create-release-action) - [Octopus CLI](/docs/octopus-rest-api/cli) :::div{.hint} **Tip:** Remember to provide any custom fields with the release that are used in the environment name template. ::: :::div{.warning} Support for providing custom fields is not yet available in the Octopus CLI. ::: ### Manually If automatic deployment is not selected, ephemeral environments can be created using the: - Octopus Web Portal - Octopus API - [`OctopusDeploy/create-ephemeral-environment` GitHub Action](https://github.com/OctopusDeploy/create-ephemeral-environment) - [Octopus CLI](/docs/octopus-rest-api/cli) To manually create an ephemeral environment in the Octopus portal, visit the Ephemeral Environments page within the project, then: - Select **Add Ephemeral Environment** from the Ephemeral Environments page. - Enter a name for the environment. The environment will now be created in the Not Provisioned state, ready for a release to be deployed to it. Provisioning will be performed automatically by the configured runbook when a release is deployed to the environment. ## Provisioning infrastructure Infrastructure required for an ephemeral environment can be provisioned using a runbook. 
Octopus will automatically run this runbook before the first deployment to the environment. - For projects using runbooks stored in Octopus, the published snapshot will be used to run the runbook. - For projects using runbooks stored in version control, the Git reference from the release will be used to run the runbook. ## Viewing ephemeral environments To view ephemeral environments: - Select **Deploy** from the main navigation in the Octopus Web Portal and select your project. - Select the Ephemeral Environments navigation menu in the sidebar. Environments can be filtered by name and by the current state of the environment for the project. ![Filtering the ephemeral environments used within a project by the name of the environment](/docs/projects/ephemeral-environments/viewing-ephemeral-environments.png) ## Updating an existing environment ### Automatic Deployments Create another release that results in the same environment name based on the environment name template. The release will be automatically deployed into the environment. ### Manual Deployments Create a new release, then deploy it and select the existing environment in the Deploy to step. Octopus deploys the release to that environment without creating a new one. ## Deprovisioning an environment When an ephemeral environment is no longer needed, it can be deprovisioned and any infrastructure removed. Octopus will run the selected deprovisioning runbook before (optionally) removing the environment. - For projects using runbooks stored in Octopus, the published snapshot will be used to run the runbook. - For projects using runbooks stored in version control, the Git reference used to provision the environment will be used to run the runbook. 
Ephemeral environments can be deprovisioned via the: - Octopus Web Portal - Octopus API - [`OctopusDeploy/deprovision-ephemeral-environment` GitHub Action](https://github.com/OctopusDeploy/deprovision-ephemeral-environment) - [Octopus CLI](/docs/octopus-rest-api/cli) To deprovision an ephemeral environment in the Octopus portal: - Select **Deploy** from the main navigation in the Octopus Web Portal and select your project. - Select the **Ephemeral Environments** navigation menu in the sidebar. - Click the menu next to the environment to deprovision and select **Deprovision Environment**. - Select whether to keep the environment in Octopus or remove it after deprovisioning. - Click **Deprovision**. ![Deprovisioning an ephemeral environment from within a project](/docs/projects/ephemeral-environments/deprovision-ephemeral-environment.png) ### Automatic deprovisioning of environments Ephemeral environments can be automatically deprovisioned after a configurable period of inactivity. Deploying a release to an environment or running a runbook against an environment marks the environment as still being active. :::div{.hint} By default, ephemeral environments are removed after 7 days of inactivity. ::: To configure automatic deprovisioning of environments: - Select **Deploy** from the main navigation in the Octopus Web Portal. - Select **Environments** from the sub-navigation. - Find the parent environment, click the menu and select **Edit**. - Edit the **Automatic Deprovisioning** value. :::div{.warning} Automatic deprovisioning can be disabled; however, we recommend enabling it to ensure that environments are removed when they are no longer in use, reducing associated infrastructure costs. ::: ## Changing ephemeral environment settings To change the ephemeral environment settings for a project: - Select **Deploy** from the main navigation in the Octopus Web Portal and select your project. 
- Select the **Ephemeral Environments** navigation menu in the sidebar. - Select the **Settings** tab. ![Viewing ephemeral environments within a project](/docs/projects/ephemeral-environments/ephemeral-environment-settings.png) ## Using multiple projects with ephemeral environments Ephemeral environments can be used by multiple projects in the same way that other environments in Octopus can be used. Important notes for using multiple projects: - Each project using the same ephemeral environment must be configured to use the same parent environment. - Ephemeral environments are shared across a space, and environment names must be unique. - If the environment name created from a release using the template is the same as an existing environment used in another project, a new environment will not be created. - When deprovisioning an ephemeral environment being used by multiple projects, an option can be selected to deprovision and remove the entire environment, or only deprovision the current project and leave the environment in Octopus. ## Release retention Releases in channels configured for ephemeral environments have a retention policy of 3 days. Releases currently deployed to an ephemeral environment will be kept. ## Limitations The following limitations currently apply to the use of the Ephemeral Environments feature: - Ephemeral environments cannot be deployed to tenants. - Parent environments cannot be connected to tenants. - Ephemeral environments and parent environments cannot be used within lifecycles, deployment freezes, or insights reports. ## Availability Ephemeral environments are available to all cloud and self-hosted customers from version `2025.4`. # Feature Toggles Source: https://octopus.com/docs/feature-toggles.md Octopus Feature Toggles support toggling features on or off in real-time, without redeploying, and progressively releasing changes to subsets of your users. 
:::div{.hint} Octopus Feature Toggles are currently in Alpha, available to a small set of customers. If you are interested in this feature, please register your interest on the [roadmap card](https://roadmap.octopus.com/c/121-feature-toggles) and we'll keep you updated. ::: ## Usage ### Create a Feature Toggle Feature Toggles are located within Octopus Projects: **Project ➜ Feature Toggles** Create a new Toggle and give it a name. ![New toggle name](/docs/img/feature-toggles/new-toggle-name.png) ### Configure OpenFeature in your client application {#configure-open-feature-client-app} Octopus Feature Toggles rely on [OpenFeature](https://openfeature.dev/) as the client SDK. Follow the [OpenFeature guide for installing the SDK for your language](https://openfeature.dev/ecosystem?instant_search%5BrefinementList%5D%5Btype%5D%5B0%5D=SDK) to add it to your application. Configure OpenFeature to use the [Octopus Provider](/docs/feature-toggles/providers). The Octopus OpenFeature Provider requires a client identifier when instantiated. This is a [JWT](https://jwt.io/introduction) which specifies the Octopus Project, Environment, and Tenant (if applicable). This tells the Octopus Feature Toggle service which set of toggles to evaluate. :::div{.hint} The Octopus Feature Toggle client identifier is available via the Octopus variable `Octopus.FeatureToggles.ClientIdentifier` or via the Feature Toggle UI (see below). ::: For applications deployed by Octopus, the recommended approach is to have Octopus inject the client identifier as part of deployment, for example by injecting it into a configuration file or environment variable. The client identifier is made available via the Octopus variable `Octopus.FeatureToggles.ClientIdentifier`. For applications not deployed by Octopus, or that cannot have the client identifier supplied during deployment for any reason, the client identifier can be obtained via the portal UI, as shown below. 
![Client identifier preview menu item](/docs/img/feature-toggles/client-identifier-preview-menu-item.png) ![Client identifier preview UI](/docs/img/feature-toggles/client-identifier-preview.png) The previewed client identifier may then be copied into your application configuration. For example, an ASP.NET application could have an `appsettings.json` file containing the following: ```json { "FeatureToggles": { "ClientId": "#{Octopus.FeatureToggles.ClientIdentifier}" } } ``` This would be transformed during deployment by Octopus to contain the correct client identifier for the current Project and Environment. This would then be used during application startup to configure OpenFeature with the Octopus Provider, similar to: ```cs // Retrieve client identifier from config var builder = WebApplication.CreateBuilder(args); var octopusFeatureTogglesClientId = builder.Configuration["FeatureToggles:ClientId"] ?? ""; // Instantiate the Octopus Provider var octopusProvider = new OctopusFeatureProvider(new OctopusFeatureConfiguration(octopusFeatureTogglesClientId)); // Set Octopus as the OpenFeature provider await OpenFeature.Api.Instance.SetProviderAsync(octopusProvider); ``` ### Evaluate a Toggle The [Provider](/docs/feature-toggles/providers) for each language documents how to evaluate toggles. You will need the Toggle slug in order to reference the toggle in code. This can be found in the Octopus portal: ![Feature Toggle slug](/docs/img/feature-toggles/feature-toggle-slug.png) Below is an example of evaluating the toggle with slug `dark-mode` in C#: ```cs var darkModeEnabled = await featureClient.GetBooleanValueAsync("dark-mode", false); ``` The second argument is the default value. Read more about [default values](#default-values) below. ### Rollout To enable your toggle for an environment, add the environment to the Toggle. 
![Add Environment button](/docs/img/feature-toggles/add-environment-button.png) Select your environment, and whether you want the toggle on or off. ![Add Environment dialog](/docs/img/feature-toggles/add-environment-dialog.png) You can additionally control rollout within an environment. See [Feature Toggle targeting](/docs/feature-toggles/targeting) for more information. ## Default Values {#default-values} Toggle default values are configured both on the Toggle in Octopus, and at the evaluation site in your client application. It's important to understand how these interact. The default value on the Toggle in Octopus will be returned if the environment being evaluated has not been configured with an explicit value. In the example below, the `Production` and `Staging` environments have values configured. The default value for the Toggle is `Off`. If an evaluation is made by an application running in the `Development` environment, or any other environment not configured, it would receive the default value (`Off`). ![Default Values](/docs/img/feature-toggles/default-values.png) The default value supplied in client code (the `false` argument in the example below) will only be used if the Octopus Feature Toggle service cannot be reached, for example if there are network issues or the service is unavailable. ```cs var darkModeEnabled = await featureClient.GetBooleanValueAsync("dark-mode", false); ``` # Targeting Source: https://octopus.com/docs/feature-toggles/targeting.md > Configure rollout within an environment ## Minimum version You can configure a feature toggle to require a minimum version before being enabled in an environment. The toggle will be enabled once that version (or any later version) is deployed to that environment. 
![Screenshot of feature toggle environment drawer, minimum version section is expanded, minimum version input has value 12.1.0](/docs/img/feature-toggles/minimum-version.png) ## Tenants {#tenants} If your project uses [Tenants](/docs/tenants/), you can configure a feature toggle to be enabled for subsets of your tenants within an environment. There are many options for configuring a feature toggle for tenants. These are all modeled in Octopus and do not require any custom configuration in application code. ### Excluded tenants You can exclude tenants individually or exclude groups of tenants using tenant tags. Excluded tenants will always override any other rollout configuration, even if the rollout percentage is set to 100%. ![Screenshot of feature toggle environment drawer, excluded tenants section expanded, tenants to exclude multi-select has Acme Corporation and Contoso Ltd tenants selected, tenant tags to exclude multi-select has Tier/Enterprise tag selected](/docs/img/feature-toggles/excluded-tenants.png) Once a feature toggle is configured with an excluded tenant tag, any changes to tenant tag assignments apply immediately to feature toggles (no update to the feature toggle configuration is required). ### Included tenants You can enable feature toggles for individual tenants. These tenants will always be included, even if the tenant rollout percentage is set to 0%. ![Screenshot of feature toggle environment drawer, included tenants section expanded, tenants to include multi-select has Acme Corporation selected, tenant rollout percentage and tenant tags to include in rollout inputs are empty](/docs/img/feature-toggles/included-tenants.png) ### Tenant rollout Tenant rollout allows you to enable a toggle for a pseudorandom percentage of all tenants. The included tenants are determined using a MurmurHash of the tenant ID and a toggle-specific key. 
This guarantees deterministic evaluation for any given tenant and toggle combination, while ensuring that a different set of tenants is included for each toggle. ![Screenshot of feature toggle environment drawer, included tenants section expanded, tenants to include is empty, tenant rollout percentage is set to 50% and tenant tags to include in rollout multi-select has Tier/Free, Region/Australia, and Region/Europe selected](/docs/img/feature-toggles/tenant-rollout.png) You can further restrict the rollout using tenant tags. For example, if you specify `Region/Australia`, `Region/Europe`, and `Tier/Enterprise`, a 50% rollout will only apply to 50% of Enterprise tenants in either Australia or Europe. If you do not specify any tenant tags, a 50% rollout will apply to 50% of all tenants. As with excluded tenants, any changes to tenant tags apply immediately to feature toggles. ## Client rollout Client rollout allows you to enable a toggle for a random percentage of your application users. The included users are determined using a MurmurHash of the [OpenFeature Targeting Key](https://openfeature.dev/docs/reference/concepts/evaluation-context#targeting-key) and a toggle-specific key. This guarantees deterministic evaluation for any given user and toggle combination. To use client rollout, you must configure a targeting key in your OpenFeature client. Refer to the [OpenFeature SDK](https://openfeature.dev/docs/reference/sdks/) documentation for your development language for details on how to set the targeting key. If you do not set a targeting key, the feature toggle will not be enabled for any users unless the rollout is set to 100%. You should set the targeting key to a value that uniquely identifies your evaluation subject. For most applications, this will be a user identifier (such as a user ID), but you can use any identifier that suits your needs (for example, team, region, or server instance). 
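The deterministic percentage bucketing described above can be sketched as follows. This is an illustrative sketch only: Octopus uses MurmurHash internally, but MD5 from Python's standard library stands in here because MurmurHash is not available in the standard library, and the `in_rollout` function is a hypothetical helper, not part of any Octopus SDK.

```python
import hashlib

def in_rollout(targeting_key: str, toggle_key: str, percentage: int) -> bool:
    # Hash the targeting key together with a toggle-specific key so each
    # toggle gets an independent, but deterministic, set of included users.
    # Illustrative only: Octopus uses MurmurHash; MD5 stands in here.
    digest = hashlib.md5(f"{toggle_key}:{targeting_key}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100  # stable bucket in 0-99
    return bucket < percentage

# The same subject always lands in the same bucket for a given toggle.
assert in_rollout("user-123", "dark-mode", 100)    # 100% includes everyone
assert not in_rollout("user-123", "dark-mode", 0)  # 0% includes no one
```

Because the bucket depends only on the targeting key and the toggle key, increasing the rollout percentage only ever adds subjects to the rollout; it never swaps one included subject for another.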
:::div{.warning} Support for client rollout percentages was added in the following versions of the [provider libraries](/docs/feature-toggles/providers): - [.NET: `2.1.0`](https://github.com/OctopusDeploy/openfeature-provider-dotnet/releases/tag/v2.1.0) - [Java: `0.3.0`](https://github.com/OctopusDeploy/openfeature-provider-java/releases/tag/0.3.0) - [TypeScript/JavaScript: `3.0.0`](https://github.com/OctopusDeploy/openfeature-provider-ts-web/releases/tag/v3.0.0) If you are not running the required minimum version of these libraries, the rollout percentage will be ignored (and handled as if the value was set to 100%). ::: ### Segments {#segments} Segments allow you to further refine the client rollout for a toggle based on data that is not modeled in Octopus. Segments are key/value pairs, and are supplied by your application via the [OpenFeature evaluation context](https://openfeature.dev/docs/reference/concepts/evaluation-context). Like tenant targeting, segments are configured per-environment on the feature toggle in Octopus. ![Screenshot of feature toggle environment drawer, client rollout section expanded, client rollout percentage is set to 50% with 3 segments region/eu, region/au, and ring/early-adopter](/docs/img/feature-toggles/segments.png) Refer to the [OpenFeature SDK](https://openfeature.dev/docs/reference/sdks/) documentation for your development language for more details on how to set these values. Common segment examples include: - Specific users (e.g. `user-id/123456`) - Specific accounts (e.g. `account-id/123456`) - License types (e.g. `license-type/free`) - Geographic regions (e.g. `region/eu`) - Rollout rings (e.g. `ring/early-adopter`) A feature toggle evaluation will match on segments if the evaluation context matches at least one segment for each key. If you specify `ring/early-adopter`, `region/eu`, and `region/au`, the rollout will apply only to users in the early-adopter ring who are in either the EU or AU regions. 
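The matching rule above can be sketched as follows. This is an illustrative sketch of the described behavior only; the `matches_segments` helper and its dictionary shapes are hypothetical, and the real evaluation happens in the Octopus Feature Toggle service.

```python
def matches_segments(segments: dict[str, set[str]], context: dict[str, str]) -> bool:
    # A toggle matches when, for every configured segment key, the evaluation
    # context supplies a value matching at least one segment with that key.
    if not segments:
        return True  # no segments configured: the rollout applies to all users
    return all(key in context and context[key] in allowed
               for key, allowed in segments.items())

segments = {"ring": {"early-adopter"}, "region": {"eu", "au"}}
assert matches_segments(segments, {"ring": "early-adopter", "region": "eu"})
assert not matches_segments(segments, {"region": "au"})  # ring key unsatisfied
```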
If you do not specify any segments, the rollout will apply to all users. Some examples: | Segments | Evaluation Context | Result | |------------------------------------------------|-----------------------------------|--------| | `user-id/123456` | `user-id/123456` | `On` | | `user-id/123456` | `user-id/789383` | `Off` | | `ring/early-adopter`, `region/eu`, `region/au` | `ring/early-adopter`, `region/eu` | `On` | | `ring/early-adopter`, `region/eu`, `region/au` | `ring/early-adopter`, `region/au` | `On` | | `ring/early-adopter`, `region/eu`, `region/au` | `license-type/free` | `Off` | | `ring/early-adopter`, `region/eu`, `region/au` | `region/au` | `Off` | # OpenFeature Providers Source: https://octopus.com/docs/feature-toggles/providers.md These are the available Octopus OpenFeature provider SDKs. Getting started instructions for each of these providers are documented in the README files in the repositories. ## Server SDKs - [.NET](https://github.com/OctopusDeploy/openfeature-provider-dotnet) - [Java](https://github.com/OctopusDeploy/openfeature-provider-java) ## Web SDKs - [TypeScript/JavaScript](https://github.com/OctopusDeploy/openfeature-provider-ts-web) # File Storage Source: https://octopus.com/docs/installation/file-storage.md Octopus stores several files that are not suitable for storing in the database. These include: - Packages used by the [built-in repository](/docs/packaging-applications/package-repositories/built-in-repository). These packages can often be very large. - [Artifacts](/docs/projects/deployment-process/artifacts) collected during a deployment. Teams using Octopus sometimes use this feature to collect large log files and other files from machines during a deployment. - Task logs are text files that store all log output from deployments and other tasks. - Imported zip files used by the [Export/Import Projects feature](/docs/projects/export-import). 
- Archived audit logs created by the [Archived audit logs feature](/docs/security/users-and-teams/auditing/#archived-audit-events). These files must be stored on a file system. That folder can be located directly on the Windows Server hosting Octopus Deploy, but we don't recommend that, especially if you want to host Octopus Deploy in a container (all the files would be destroyed when the container is destroyed). Octopus Deploy supports network file shares, as well as many cloud providers' storage solutions. Whichever storage solution you opt for, it must meet the following requirements: - Support the SMB or CIFS protocols. - Be located in the same data center as the servers/container hosts that host Octopus Deploy. This section provides configuration walkthroughs for the popular storage options our customers use. - [Local File Storage](/docs/installation/file-storage/local-storage) - [AWS File Storage](/docs/installation/file-storage/aws-file-storage) - [Azure File Storage](/docs/installation/file-storage/azure-file-storage) - [GCP File Storage](/docs/installation/file-storage/gcp-file-storage) - [NFS File Storage](/docs/installation/file-storage/windows-nfs) # Getting started with Octopus Source: https://octopus.com/docs/getting-started.md > An overview of Octopus Deploy concepts Getting started with Octopus Deploy is straightforward, and the product will guide you through most of the initial setup and get you deploying in minutes. However, to truly take advantage of Octopus, it helps to have some background on the key concepts. When you are ready, create a free account to explore Octopus. Octopus Cloud is the easiest way to get started with Octopus Deploy, and we take care of everything for you. Alternatively, if you require a self-managed CD solution, you can [download Octopus Server](https://octopus.com/downloads) and run it on your own setup. 
The [installation guide](/docs/installation) provides instructions for downloading, installing, and configuring your Octopus Deploy Server. ## Octopus in your software delivery pipeline Octopus is designed as a dedicated, best-of-breed Continuous Delivery platform with a focus on releasing and deploying software, in the spirit of "do one thing, *really* well". We don't aim to solve the entire software delivery pipeline, but we focus on going deeper on release & deploy than any other solution. :::figure ![Octopus in a deployment pipeline](/docs/img/getting-started/octopus-in-pipeline.png) ::: Octopus assumes you already have a CI system up and running, and we provide first-class integrations with all major CI systems on the market. You'll find guides for integrating Octopus with: - GitHub Actions - GitLab - Circle CI - Jenkins - JetBrains TeamCity - BuildKite - Azure DevOps The job of the CI system is to take source code and turn it into an artifact that can be deployed. To do this, CI systems monitor source control for changes, then run jobs like compiling code and running tests in order to give fast feedback to developers. The final output of the CI system will be one or more **packages** or **containers** that are ready to be deployed. Containers are usually built and published to a Docker registry, often the registry provided by the cloud you are deploying to. Packages like ZIP or JAR files can be pushed directly to Octopus's built-in package repository. Alternatively, if you already have an external artifact repository like JFrog Artifactory, you can use that. ## Projects, environments, and releases The first page you will see in Octopus is called the Dashboard. Initially yours will be empty, but as you start deploying applications, it will fill up. :::figure ![Octopus Dashboard](/docs/img/getting-started/dashboard.png) ::: The Dashboard shows the three main building blocks of Octopus. **[Projects](/docs/projects)** are the applications we deploy. 
In the image above, "Database", "Product API", and "Shopping Cart API" are the projects. A project has all the information needed to deploy an application - or often, a really large system composed of many applications that are delivered at the same time. **Environments** are where we deploy the applications. In this case, Dev, Test and Production. In the middle of the grid, you'll see **Releases**. A Release is a bundle of all the things needed to deploy a specific version of an application. This might include: - The container images or packages (artifacts produced from a CI build) - The associated configuration and variables needed to configure the release for each environment - A snapshot of the process that will be used to deploy the release, as the process may change in future releases - Details on Jira tickets and Git commits that went into the release Many releases get created for a project - often each time a CI build completes - and those releases can then be *deployed* to an environment. When a release is deployed to an environment, Octopus calls that a Deployment. Software teams often use [release and deployment interchangeably](https://octopus.com/devops/continuous-delivery/deployments-vs-releases/), but in our opinion they have subtly different meanings. Creating releases is normally done automatically at the end of a CI process using one of our CI integrations. ## Deployment process Inside each project, you'll configure a Deployment Process. The deployment process is like the recipe for deploying the project - the steps that will be run. :::figure ![Octopus Deployment Process](/docs/img/shared-content/concepts/images/deployment-process.png) ::: Each step contains a specific action (or set of actions) that is executed as part of the deployment process each time your software is deployed. 
After the initial setup, your deployment process shouldn't change between deployments even though the software being deployed will change as part of the development process. Octopus Deploy provides a range of built-in step templates that can be included in your deployment processes. You can even create your own custom steps. Learn more about the [deployment process](/docs/projects/deployment-process/) and see some example [deployments](/docs/deployments). The deployment process and many other parts of a project can be stored in a Git repository. Learn more about [Config as Code](/docs/projects/version-control). ### Variables As you deploy your applications between different environments, you'll need to change their configuration files based on the scope of the deployment. Octopus has advanced support for managing and scoping variables. For instance, your test environment shouldn't have access to your production database. Using variables, you can specify a different database for each environment, ensuring your production data won't be impacted by code changes that are still in review. :::figure ![Octopus Variables](/docs/img/shared-content/concepts/images/variables.png) ::: Learn more about [variables](/docs/projects/variables/) and advanced [configuration features](/docs/projects/steps/configuration-features). ## Infrastructure Octopus Deploy organizes your deployment targets (the machines and services you deploy software to) into groups called environments. Typical environments are **Dev**, **Test**, and **Production**. :::figure ![The infrastructure tab of Octopus Deploy](/docs/img/shared-content/concepts/images/infrastructure.png) ::: Organizing your infrastructure into environments lets you define your deployment processes (no matter how many deployment targets are involved) and have Octopus deploy the right versions of your software, with the right configuration, to the right environments at the right time.
Learn more about managing your [infrastructure](/docs/infrastructure). ## Lifecycles When you define a project, you also select a lifecycle. The lifecycle defines the promotion rules for how releases of the project are deployed between environments. Lifecycles are defined by phases: each phase can have one or more environments, and each environment can be configured for automatic or manual deployment. A phase can also require a set number of its environments to be deployed to before the next phase becomes available for deployment. Learn more about [lifecycles](/docs/releases/lifecycles). ## Runbook automation A deployment is only one phase in the life of an application. There are many other tasks that are performed to keep an application operating - often called "Day 2". Octopus Runbooks live inside a Project, and can be used to automate routine maintenance and emergency operations tasks like infrastructure provisioning, database management, and website failover and restoration. They can also be used to grant application developers the ability to do special things like "restart a Kubernetes pod that has frozen", without giving them direct production cluster access. Learn more about [Octopus Runbooks](/docs/runbooks). ## Tenants Tenants in Octopus allow you to easily create **customer specific** deployment pipelines **without duplicating project configuration**. One example of using Tenants would be when your application is a SaaS platform, where each Tenant has their own running instance of your application, possibly with their own infrastructure. A Tenant allows you to model the infrastructure that belongs to the customer, as well as customer-specific configuration variables, and use that data across multiple projects.
Another example where Tenants come in handy is "edge" scenarios, where you deploy software to many remote locations - for example, restaurants around the country, or retail stores around the world, where each store has its own servers or MicroK8s cluster that you will be deploying to. If a project uses tenants, a release can be deployed to all tenants, a single tenant, or a group of tenants using tags. Learn more about [tenants](/docs/tenants). ## Spaces If you're a large organization with lots of teams working in Octopus, you can use the Spaces feature to provide each of your teams with a space for the projects, environments, and infrastructure they work with, while keeping other teams' assets separate in their own spaces. Learn more about [Octopus Spaces](/docs/administration/spaces). # Insights Source: https://octopus.com/docs/insights.md > Deployment metrics to help celebrate wins or find improvements DevOps insights in Octopus gives you better visibility into your company's DevOps performance by surfacing the four key DORA metrics, so you can make more informed decisions on where to improve and celebrate your results. :::figure ![The Overview page of Insights Reports](/docs/img/insights/images/overview.png) ::: Two levels are available for DevOps Insights: 1. Project level insights, available to all customers. 2. Space level insights, available to customers with an [enterprise subscription](https://octopus.com/pricing). ## What are DORA metrics? [DORA](https://dora.dev/) (DevOps Research and Assessment) is the team behind the Accelerate State of DevOps Report, a survey of more than 32,000 professionals from around the world. Their research links technical and cultural capabilities to software delivery and organizational performance.
DORA recommends an approach to measuring software delivery that relies on five metrics: _Throughput_ - Lead time for changes (LT) - Deployment frequency (DF) _Stability_ - Change failure rate (CFR) - Mean time to recovery (MTTR) _Operations_ - Reliability Throughput metrics measure the health of your deployment pipeline, while the stability indicators help you understand the quality of your software and delivery pipeline. In addition to the four classic DORA metrics that measure software delivery performance, DORA added a new measure in 2021 for operational performance. ## Octopus built-in DORA metrics with Insights Octopus adds out-of-the-box support for the following DevOps metrics: **Deployment Lead Time** The time between the creation date of the release immediately following the previously successful release and the completion date of the deployment. **Deployment Failure Rate** The percentage of deployments that fail to deploy, require guided failure, or have their release marked as "Do not promote". **Deployment Frequency** How many deployments occur to the target environment. **Mean Time to Recovery** How long it takes to recover a failed deployment with a subsequent successful deployment. :::div{.hint} Some of these metrics differ slightly from the textbook DORA metrics given the data available. ::: Together these metrics help you quantify the results of your DevOps performance, as well as gain insights into areas for future improvement.
- Get better visibility into the performance of your projects and teams - Eliminate “gut feel” and enable data-informed decisions to drive improvement and determine if a new process is working - Review and collect data over time to highlight the path to delivering greater value, faster - Help introduce change with data and collaboration to make a business case - Share successes and learn from failures to continuously improve ## Understand performance of your projects with project-level Insights Project level insights are available as a new tab in every project so you can understand the performance of your projects across Channels, Environments, and Tenants. Each metric can be seen at a summary level, and insights can be filtered by time frame (last month, quarter, or year), channel, and environment, as well as exported to CSV. Project level insights are available to all customers out-of-the-box, meaning you don't have to buy or subscribe to another tool. If you're already a user, Octopus has all the data it needs to help you uncover rich insights based on your deployment history. :::figure ![Project Insights Deployment Frequency](/docs/img/insights/images/project.png) ::: ## Gain insight across projects and teams with Space level insights Octopus also includes additional insights capabilities for customers with an [enterprise subscription](https://octopus.com/pricing). For customers at larger companies, we have built additional capabilities that make it easier to gain insight using DORA metrics in multi-team, multi-site, and multi-project scenarios. Space level insights are available via the Insights tab and provide actionable DORA metrics for more complex scenarios across projects, project groups, environments, or tenants. This enables managers and decision-makers to get far more insight into the DevOps performance of their organization in line with their business context, such as team, portfolio, or platform.
Space level insights: - Aggregate data across your space so you can compare and contrast metrics across projects to identify what is working and what isn't - Inform better decision making: identify problems, track improvements, and celebrate successes - Help you quantify DevOps performance based on what is actually happening, as shown by data With the Space level insights, you can build reports with the relevant data that matters to you, choosing only relevant projects, project groups, environments, and tenants. View information at a high level or drill down into project-specific data over time. This also allows you to easily identify contributors to a performance trend you see at the aggregated level. It provides a breakdown of the highest and lowest performing projects and releases that may significantly impact the overall performance. This enables you to identify where things are improving or declining and take action based on that. :::div{.hint} Users who have view permissions to Space-level Insights reports will see sanitized data on projects / environments they don't have access to. ::: Space level DevOps insights are available to customers with an [enterprise subscription](https://octopus.com/pricing). ## Older versions Project and space level insights are available from Octopus **2022.3** onwards. # Installation of Octopus Server Source: https://octopus.com/docs/installation.md > How to install Octopus Server The Octopus Server is the deployment automation server where you define your deployment processes and runbooks and manage the releases of your software. The Octopus Server includes the Octopus Rest API and the Octopus Web Portal. 
:::figure ![Octopus Dashboard](/docs/img/getting-started/dashboard.png) ::: You can use the Octopus REST API or the Octopus Web Portal to design your deployment processes and your releases, connect to the servers, services, and accounts where your software will be deployed, and to use runbooks to automate routine maintenance and emergency operations tasks like infrastructure provisioning, database management, and website failover and restoration. ## Octopus Components There are three components to an Octopus Deploy instance: - **Octopus Server Service** This service serves user traffic and orchestrates deployments. Octopus Deploy supports running the service on Windows Server or as a Linux Container. - **SQL Server Database** Most data used by the Octopus Server nodes is stored in this database. SQL Server 2016+ or Azure SQL is required. - **Files or BLOB Storage** Some larger files - like [packages](/docs/packaging-applications/package-repositories), artifacts, and deployment task logs - aren't suitable to be stored in the database and are stored on the file system instead. This can be a local folder, a network file share, or a cloud provider's storage. All inbound traffic to Octopus Deploy is via: - HTTP/HTTPS (ports 80/443) - Polling tentacles (port 10943) - gRPC (port 8443) For production instances of Octopus Deploy, it is best to configure a [load balancer](/docs/installation/load-balancers) to route traffic to your instance. Leveraging a load balancer offers numerous benefits, such as redirecting users to a maintenance page while the instance is down for upgrading, as well as making it much easier to configure High Availability later. ## Self-hosted Octopus Server When installed, the self-hosted Octopus Server: - Runs - As a Windows Service called **OctopusDeploy**, installed via an MSI. - In a [Linux](/docs/installation/octopus-server-linux-container) container. - Stores its data in an [SQL Server Database](/docs/installation/sql-server-database/). 
([SQL Server Express](https://oc.to/downloadsqlserverexpress) is an easy way of getting started.) - Includes an embedded HTTP server which serves the [Octopus REST API](/docs/octopus-rest-api/) and the **Octopus Web Portal** that you will use to manage your [infrastructure](/docs/infrastructure/), [deployments](/docs/projects/deployment-process/), [runbooks](/docs/runbooks/), and coordinate your [releases](/docs/releases). Before you install Octopus Deploy, review the software and hardware [requirements](/docs/installation/requirements/), and make sure you have access to an instance of [SQL Server Database](/docs/installation/sql-server-database) that you can use with Octopus Deploy. ## Supported Octopus Deploy Server Versions Each self-hosted major.minor release of Octopus Deploy will receive *critical patches and support* for a period of **six months**. For example, 2025.4 was released in December 2025 and will be supported through May 2026. All new releases of Octopus Deploy will run in Octopus Cloud first for at least one quarter. As a result, Octopus Cloud is always at least one version ahead of the self-hosted version. Because of that, we always recommend using the latest available release for your self-hosted installation of Octopus. Please see [Octopus.com/downloads](https://octopus.com/downloads) to download the latest version of Octopus Deploy. For more details, please refer to our [blog post announcement from 2020](https://octopus.com/blog/releases-and-lts), when we introduced this release cadence. ## Install Octopus as a Windows Service \{#install-octopus} - [Download](https://Octopus.com/downloads/server) the Octopus installer. - Start the Octopus Installer, click **Next**, accept the **Terms in the License Agreement** and click **Next**. - Accept the default **Destination Folder** or choose a different location and click **Next**. - Click **Install**, and give the app permission to **make changes to your device**.
- Click **Finish** to exit the installation wizard and launch the **Getting started wizard** to configure your Octopus Server. - Click **Get started...** and either enter your details to start a free trial of Octopus Deploy or enter your **license key** and click **Next**. - Accept the default **Home Directory** or enter a location of your choice and click **Next**. - Decide whether to use a **Local System Account** or a **Custom Domain Account**. Learn more about the [permissions required for the Octopus Windows Service](/docs/installation/permissions-for-the-octopus-windows-service/) or using a [Managed Service Account](/docs/installation/managed-service-account). - On the **Database** page, click the drop-down arrow in the **Server Name** field to detect the SQL Server Database. Octopus will create the database for you, which is the recommended approach; however, you can also [create your own database](/docs/installation/sql-server-database/#creating-the-database). - Enter a name for the database, and click **Next** and **OK** to **create the database**. Be careful **not** to use the name of an existing database as the setup process will install Octopus into that pre-existing database. - Accept the default port and directory or enter your own and click **Next**. - If you're using **usernames and passwords stored in Octopus** authentication mode, enter the username and password that will be used for the Octopus administrator. If you are using [active directory](/docs/security/authentication/active-directory), enter the Active Directory user details. You can configure additional [Authentication Providers](/docs/security/authentication) for the Octopus Server after the server has been installed. - Click **Install**. When the installation has completed, click **Finish** to launch the **Octopus Manager**.
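The wizard steps above can also be scripted with the `Octopus.Server.exe` command line (see the automating-installation guide linked later in this page). The following is a rough sketch only: the instance name, config path, connection string, and admin credentials are placeholders, and you should verify the flag names against the command reference for your server version.

```shell
# Sketch: automate the setup the wizard performs (all values are placeholders).
Octopus.Server.exe create-instance --instance "OctopusServer" --config "C:\Octopus\OctopusServer.config"
Octopus.Server.exe database --instance "OctopusServer" --connectionString "Server=.;Database=Octopus;Trusted_Connection=True;" --create
Octopus.Server.exe configure --instance "OctopusServer" --webListenPrefixes "http://localhost:80/"
Octopus.Server.exe admin --instance "OctopusServer" --username "admin" --password "a-strong-password"
# A license (trial or purchased) must also be configured; see the automating-installation guide.
Octopus.Server.exe service --instance "OctopusServer" --install --start
```
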
### Octopus Manager Before you launch the **Octopus Web Portal**, it's worth taking note of the other settings such as controlling the Octopus Windows Service, importing and exporting the data Octopus stores in the SQL server, and viewing the Master Key. You can launch the Octopus Web Portal from the Octopus Manager, by clicking **Open in Browser**. ### Save your Master Key Under the storage section, you will see a link to **View Master Key**. When Octopus is installed, it generates a Master Key which is a random string that is used to encrypt sensitive data in your Octopus database. You will need the Master Key if you ever need to restore Octopus. Make a copy of the Master Key and save it in a **secure** location. :::div{.warning} **Warning** If you don't have a copy of your Master Key and your hardware fails, you will not be able to recover the encrypted data from the database. Make a copy of the **Master Key** and save it in a secure location. Hopefully you will never need it, but you'll be glad you have it if you ever do. Learn about [Recovering After Losing Your Octopus Server and Master Key](/docs/administration/managing-infrastructure/lost-master-key). ::: ## Run Octopus Server in a Container \{#run-octopus-in-container} Running Octopus Server inside a container lets you avoid installing Octopus directly on top of your infrastructure and makes getting up and running with Octopus as simple as a one line command. Upgrading to the latest version of Octopus is just a matter of running a new container with the new image version. We are confident in the Octopus Server Linux Container's reliability and performance. [Octopus Cloud](/docs/octopus-cloud) runs the Octopus Server Linux Container in AKS clusters in Azure. But to use the Octopus Server Linux Container in Octopus Cloud, we had to make some design decisions and level up our knowledge about Docker concepts. 
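As a point of reference, a minimal sketch of that one-line command, assuming the `octopusdeploy/octopusdeploy` image and the environment variable names from the Linux container documentation; the database connection string, master key, and port mapping below are placeholders you must supply for your own environment:

```shell
# Sketch: run Octopus Server as a Linux container (all values are placeholders).
docker run --detach \
  --name octopus-server \
  --publish 8080:8080 \
  --env ACCEPT_EULA="Y" \
  --env DB_CONNECTION_STRING="Server=my-sql-server,1433;Database=Octopus;User Id=sa;Password=a-strong-password" \
  --env MASTER_KEY="<your base64 master key>" \
  octopusdeploy/octopusdeploy
```

Check the Octopus Server Linux Container page linked below for the authoritative list of supported environment variables and volumes.
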
We recommend the use of the Octopus Server Linux Container if you are okay with **all** of these conditions: - You are familiar with Docker concepts, specifically around debugging containers, volume mounting, and networking. - You are comfortable with one of the underlying hosting technologies for Docker containers: Kubernetes, ACS, ECS, AKS, EKS, or Docker itself. - You understand Octopus Deploy is a stateful application, not a stateless one, and requires additional monitoring. We publish `linux/amd64` Docker images for each Octopus Server release and they are available on [DockerHub](https://hub.docker.com/r/octopusdeploy/). This section includes information about different options to run the Octopus Server Linux Container. - [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container) - [Migrating to the Octopus Server Linux Container](/docs/installation/octopus-server-linux-container/migration) - [Octopus Server Container in Kubernetes](/docs/installation/octopus-server-linux-container/octopus-in-kubernetes) - [Octopus Server Container with Docker Compose](/docs/installation/octopus-server-linux-container/docker-compose-linux) - [Octopus Server Container with systemd](/docs/installation/octopus-server-linux-container/systemd-service-definition) ## Launch the Octopus Web Portal Click **Open in browser** to launch the **Octopus Web Portal** and log in using the authentication details you set up during the configuration process. The **Octopus Web Portal** is where you'll manage your infrastructure, projects, deployment processes, and releases, and access the built-in package repository.
## Learn more - [Troubleshooting the Octopus installation](/docs/installation/troubleshooting) - [Configure your infrastructure](/docs/infrastructure) - [Upgrading guide](/docs/administration/upgrading) - [Automating Octopus installation](/docs/installation/automating-installation) # Installation Source: https://octopus.com/docs/kubernetes/live-object-status/installation.md The [Kubernetes Agent](/docs/kubernetes/targets/kubernetes-agent) has a new component called the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) that is currently enabled for new installations. :::figure ![Kubernetes agent install script with the Kubernetes monitor enabled](/docs/img/kubernetes/live-object-status/kubernetes-agent-monitor-installation.png) ::: Once installed, you can confirm the status of the Kubernetes monitor by looking at the Connectivity page for the corresponding Kubernetes agent target. :::figure ![Health check showing status of the Kubernetes monitor](/docs/img/kubernetes/live-object-status/kubernetes-agent-monitor-health-check.png) ::: ## Upgrading an existing Kubernetes agent We are working on a one-click upgrade process from within Octopus Deploy, coming soon. If you can't wait until then, you can upgrade existing Kubernetes agents by running a Helm command on your cluster.
Find the following values and replace them in the Helm command below: | | Value | Example | | :----------- | :----------------------------------------------------------------------------------------------------: | :---------------------- | | INSTANCE_URL | The URL you access your instance with, without https:// or a trailing slash | my-instance.octopus.app | | API_KEY | An [API key](/docs/octopus-rest-api/how-to-create-an-api-key) for your user, created from your profile | API-MYKEY | | SPACE_ID | The ID of the space your agent is installed in, found in any Octopus URL | Spaces-1 | | AGENT_NAME | The name of the Kubernetes agent | My Agent | | HELM_RELEASE | The name of the Helm release used to install the Kubernetes agent | myagent | ```bash helm upgrade --atomic \ --namespace "octopus-agent-$AGENT_NAME" \ --reuse-values \ --version "2.*.*" \ --set kubernetesMonitor.enabled="true" \ --set kubernetesMonitor.registration.serverApiUrl="https://$INSTANCE_URL/" \ --set kubernetesMonitor.monitor.serverGrpcUrl="grpc://$INSTANCE_URL:8443" \ --set kubernetesMonitor.registration.serverAccessToken="$API_KEY" \ --set kubernetesMonitor.registration.spaceId="$SPACE_ID" \ --set kubernetesMonitor.registration.machineName="$AGENT_NAME" \ $HELM_RELEASE \ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` ## Uninstalling the Kubernetes monitor If you need to disable the Kubernetes monitor temporarily, change the replica count to zero on the Kubernetes deployment called `$AGENT_NAME-kubernetesmonitor` in the Kubernetes Agent namespace.
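That replica change can be made with `kubectl`. A sketch, reusing the `octopus-agent-$AGENT_NAME` namespace convention from the upgrade command above (the exact deployment and namespace names may vary with your agent name and any overrides):

```shell
# Temporarily disable the Kubernetes monitor by scaling its deployment to zero
kubectl scale deployment "$AGENT_NAME-kubernetesmonitor" \
  --replicas=0 \
  --namespace "octopus-agent-$AGENT_NAME"

# Re-enable it later by scaling back up
kubectl scale deployment "$AGENT_NAME-kubernetesmonitor" \
  --replicas=1 \
  --namespace "octopus-agent-$AGENT_NAME"
```
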
If you need to permanently uninstall the Kubernetes monitor, then find the following values and replace them in the Helm command below: | | Value | Example | | :----------- | :---------------------------------------------------------------: | :------- | | AGENT_NAME | The name of the Kubernetes agent | My Agent | | HELM_RELEASE | The name of the Helm release used to install the Kubernetes agent | myagent | ```bash helm upgrade --atomic \ --namespace "octopus-agent-$AGENT_NAME" \ --reuse-values \ --version "2.*.*" \ --set kubernetesMonitor.enabled="false" \ $HELM_RELEASE \ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` # Kubernetes agent Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent.md Kubernetes agent targets are a mechanism for executing [Kubernetes steps](/docs/kubernetes/steps) and monitoring application health from inside the target Kubernetes cluster, rather than via an external API connection. Similar to the [Octopus Tentacle](/docs/infrastructure/deployment-targets/tentacle), the Kubernetes agent is a small, lightweight application that is installed into the target Kubernetes cluster. ## Benefits of the Kubernetes agent The Kubernetes agent provides a number of improvements over the [Kubernetes API](/docs/kubernetes/targets/kubernetes-api) target: ### Polling communication The agent uses the same [polling communication](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication#polling-tentacles) protocol as [Octopus Tentacle](/docs/infrastructure/deployment-targets/tentacle). The agent initiates the connection from the cluster to Octopus Server, avoiding network access issues: the cluster doesn't need to be publicly addressable. ### In-cluster authentication As the agent is already running inside the target cluster, Octopus Server no longer needs authentication credentials to the cluster to perform deployments.
It can use the in-cluster authentication support of Kubernetes to run deployments using Kubernetes Service Accounts and Kubernetes RBAC local to the cluster. ### Application monitoring The agent also includes a component called the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) that monitors application health and reports it back to Octopus Server. ### Cluster-aware tooling As the agent is running in the cluster, it can retrieve the cluster's version and correctly use tooling that's specific to that version. You also need far less tooling, as custom authentication plugins are no longer required. See the [agent tooling](#agent-tooling) section for more details. ## How the agent works When you install the agent, several resources will be created within a cluster, all running in the same namespace. Please refer to the diagram below (some details such as ServiceAccounts have been omitted). :::figure ![Kubernetes agent component diagram](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-diagram-components.png) ::: - Certain resource names may vary based on the target name and any overrides. - The NFS section to the right is only relevant for agent versions prior to v3. - `tentacle-certificate` is only created if you provide your own certificate during installation. - `octopus-server-certificate` is created when you provide the full chain cert for communicating back to the Octopus Server (e.g. if it is self-signed). During a deployment, the agent generates temporary pods for each deployment task. These pods are not shown in the diagram above as they are not part of the installation process. Refer to the next diagram to understand how they are created and removed. :::figure ![Kubernetes agent how it works diagram](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-diagram-how-it-works.png) ::: 1.
Octopus Tentacle, which runs inside the `kubernetes-agent-tentacle` pod, maintains a connection to the Octopus Server. 2. Prior to task execution, various files and tools are transferred from the Octopus Server to a shared storage location. This location will later be accessible by the tasks themselves. 3. Octopus Tentacle creates a new pod to run each individual task, where all user-defined operations will take place. 4. The pod starts. Multiple pods can run simultaneously to accommodate various tasks within the cluster. 5. The task accesses the shared storage and retrieves any required tools or scripts. 6. The task is executed, and the customer application resources are created. 7. While the task is running, the Octopus Tentacle Pod streams the task pod logs back to the Octopus Server. Upon completion of the task, the pod will terminate itself. ## Requirements The Kubernetes agent follows [semantic versioning](https://semver.org/), so a major agent version is locked to an Octopus Server version range. Updating to the latest major agent version requires updating to a supported Octopus Server.
The supported versions for each agent major version are: | Kubernetes agent | Octopus Server | Kubernetes cluster | | ----------------- | -------------------------- | -------------------- | | 1.0.0 - 1.16.1 | **2024.2.6580** or newer | **1.26** to **1.29** | | 1.17.0 - 1.19.2 | **2024.2.6580** or newer | **1.27** to **1.30** | | 1.20.0 - 1.21.0 | **2024.2.6580** or newer | **1.28** to **1.31** | | 1.22.0 - 1.24.0 * | **2024.2.6580** or newer | **1.29** to **1.32** | | 2.0.0 - 2.2.1 | **2024.2.9396** or newer | **1.26** to **1.29** | | 2.3.0 - 2.8.2 | **2024.2.9396** or newer | **1.27** to **1.30** | | 2.9.0 - 2.11.3 | **2024.2.9396** or newer | **1.28** to **1.31** | | 2.12.0 - 2.25.1 | **2024.2.9396** or newer | **1.29** to **1.32** | | 2.26.0 - 2.37.0 | **2024.2.9396** or newer | **1.30** to **1.33** | | 2.38.1 - 2.\*.\* | **2024.2.9396** or newer | **1.30** to **1.33** | | 3.0.0 - 3.\*.\* | **2024.2.9396** or newer † | **1.32** to **1.35** | \* Version 1 of the Kubernetes agent is no longer maintained. † Version 3 of the Kubernetes agent is compatible with this version; however, it is only offered as the installed version in **2026.2.7054** or newer. Additionally, the Kubernetes agent only supports **Linux AMD64** and **Linux ARM64** Kubernetes nodes. See our [support policy](/docs/kubernetes/targets/kubernetes-agent/supported-versions-policy) for more information. ## Installing the Kubernetes agent The Kubernetes agent is installed using [Helm](https://helm.sh) via the [octopusdeploy/kubernetes-agent](https://hub.docker.com/r/octopusdeploy/kubernetes-agent) chart. To simplify this, there is an installation wizard in Octopus to generate the required values. :::div{.warning} Helm will use your current kubectl config, so make sure your kubectl config is pointing to the correct cluster before executing the following helm commands. You can see the current kubectl config by executing: ```bash kubectl config view ``` ::: ### Configuration 1.
Navigate to **Infrastructure ➜ Deployment Targets**, and click **Add Deployment Target**. 2. Select **KUBERNETES** and click **ADD** on the Kubernetes Agent card. 3. This launches the Add New Kubernetes Agent dialog. :::figure ![Kubernetes Agent Wizard Config Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png) ::: 1. Enter a unique display name for the target. This name is used to generate the Kubernetes namespace, as well as the Helm release name. 2. Select at least one [environment](/docs/infrastructure/environments) for the target. 3. Select at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the target. 4. Optionally, set the default namespace that resources are deployed to. This is only used if the step configuration or Kubernetes manifests don't specify a namespace. 5. Optionally, add the name of an existing [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for the agent to use. If the storage class supports the `ReadWriteMany` [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) and you wish to scale horizontally, check the checkbox below the storage class. If no storage class name is added, the default cluster storage class with `ReadWriteOnce` access mode will be used. :::div{.warning} As the display name is used for the Helm release name, this name must be unique for a given cluster. This means that if you have a Kubernetes agent and Kubernetes worker with the same name (e.g. `production`), then they will clash during installation. If you do want a Kubernetes agent and Kubernetes worker to have the same name, then prepend the type to the name (e.g. `worker production` and `agent production`) during installation. This will install them with unique Helm release names, avoiding the clash.
After installation, the worker & target names can then be changed in the Octopus Server UI to the desired name to remove the prefix. ::: #### Advanced settings :::figure ![Kubernetes Agent Advanced Settings Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png) ::: Choose if you want to install additional components, such as the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) or the [Permissions controller](/docs/kubernetes/targets/kubernetes-agent/granular-permissions). ### Installation helm command At the end of the wizard, Octopus generates a Helm command that you copy and paste into a terminal connected to the target cluster. After it's executed, Helm installs all the required resources and starts the agent. :::figure ![Kubernetes Agent Wizard Helm command Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png) ::: The helm command includes a 1-hour bearer token that the agent uses when it first initializes to register itself with Octopus Server. The terminal Kubernetes context must have enough permissions to create namespaces and install resources into that namespace. If you wish to install the agent into an existing namespace, remove the `--create-namespace` flag and change the value after `--namespace`. If left open, the installation dialog waits for the agent to establish a connection and run a health check. Once successful, the Kubernetes agent target is ready for use! :::div{.hint} A successful health check indicates that deployments can successfully be executed. ::: ### Customizing the Helm command Look at the Helm chart [values.yaml](https://github.com/OctopusDeploy/helm-charts/blob/main/charts/kubernetes-agent/values.yaml) file for all the available options. The Kubernetes monitor is deployed as a sub-chart to the Kubernetes agent.
[The available values for the monitor are listed here](https://github.com/OctopusDeploy/helm-charts/blob/main/charts/kubernetes-agent/kubernetes-monitor.md). All Kubernetes monitor values should be nested under a `kubernetesMonitor` key when deployed with the Kubernetes agent chart. ## Configuring the agent with Tenants While the wizard doesn't support selecting Tenants or Tenant tags, the agent can be configured for tenanted deployments in two ways: - Use the Deployment Target settings UI at **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Settings** to add a Tenant and set the Tenanted Deployment Participation as required. This is done after the agent has successfully installed and registered. :::figure ![Kubernetes Agent ](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-settings-page-tenants.png) ::: - Set additional variables in the helm command to allow the agent to register itself with associated Tenants or Tenant tags. You also need to provide a value for `TenantedDeploymentParticipation`. Possible values are `Untenanted` (default), `Tenanted`, and `TenantedOrUntenanted`. For example, to add these values: ```bash --set agent.tenants="{,}" \ --set agent.tenantTags="{,}" \ --set agent.tenantedDeploymentParticipation="TenantedOrUntenanted" \ ``` :::div{.hint} You don't need to provide both Tenants and Tenant Tags, but you do need to provide the tenanted deployment participation value.
::: In a full command: ```bash helm upgrade --install --atomic \ --set agent.acceptEula="Y" \ --set agent.targetName="" \ --set agent.serverUrl="" \ --set agent.serverCommsAddress="" \ --set agent.space="Default" \ --set agent.targetEnvironments="{,}" \ --set agent.targetRoles="{,}" \ --set agent.tenants="{,}" \ --set agent.tenantTags="{,}" \ --set agent.tenantedDeploymentParticipation="TenantedOrUntenanted" \ --set agent.bearerToken="" \ --version "1.*.*" \ --create-namespace --namespace \ \ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` ## Trusting custom/internal Octopus Server certificates :::div{.hint} Server certificate support was added in Kubernetes agent 1.7.0 ::: It is common for organizations to have their Octopus Deploy server hosted in an environment where it has an SSL/TLS certificate that is not part of the global certificate trust chain. As a result, the Kubernetes agent will fail to register with the target server due to certificate errors. A typical error looks like this: ```log 2024-06-21 04:12:01.4189 | ERROR | The following certificate errors were encountered when establishing the HTTPS connection to the server: RemoteCertificateNameMismatch, RemoteCertificateChainErrors Certificate subject name: CN=octopus.corp.domain Certificate thumbprint: 42983C1D517D597B74CDF23F054BBC106F4BB32F ``` To resolve this, you need to provide the Kubernetes agent with a base64-encoded string of the public key of either the self-signed certificate or root organization CA certificate in either `.pem` or `.crt` format. When viewed as text, this will look similar to this: ```text -----BEGIN CERTIFICATE----- MII... -----END CERTIFICATE----- ``` Once encoded, this string can be provided as part of the agent installation helm command via the `global.serverCertificate` (or `agent.ServerCertificate`) helm value. 
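The encoding itself can be produced with a short shell command. Below is a minimal sketch: the file name and certificate contents are placeholders, so substitute your real `.pem` or `.crt` file:

```bash
# Stand-in certificate for illustration only; point this at your real
# self-signed certificate or root CA certificate instead.
cat > octopus-ca.pem <<'EOF'
-----BEGIN CERTIFICATE-----
MII...
-----END CERTIFICATE-----
EOF

# Produce a single-line base64 string (-w0 disables line wrapping),
# suitable for passing as the helm value.
base64 -w0 < octopus-ca.pem
```

On macOS, the BSD `base64` does not wrap output by default, so the `-w0` flag can be dropped there.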
To include this in the installation command, add the following to the generated installation command: ```bash --set global.serverCertificate="" ``` You can also opt to use an existing secret containing your certificate under the `octopus-server-certificate.pem` key with the following addition: ```bash --set global.serverCertificateSecretName="octopus-server-certificate" ``` The referenced secret should take the form: ```yaml apiVersion: v1 kind: Secret metadata: name: octopus-server-certificate data: octopus-server-certificate.pem: "" ``` ### gRPC certificates When installing the Kubernetes monitor, you may encounter the same certificate issues for the gRPC communications as you do for the Octopus server certificate. Depending on your load balancer configuration, you have several options for how to handle this. When using TLS/SSL passthrough, no additional configuration is required. The Kubernetes monitor will automatically use the self-signed certificate generated by Octopus Server. When using TLS/SSL bridging, the self-signed certificate or root organization CA certificate will need to be provided to the Helm command. This can be the same certificate as your HTTPS certificate, but it does not need to be. It must match the certificate configured in your load balancer. To include this in the installation command, add the following to the generated installation command: ```bash --set kubernetesMonitor.monitor.customCaCertificate="" ``` [See here](/docs/installation/load-balancers/use-nginx-as-reverse-proxy#grpc-communications) for sample load balancer configurations for these scenarios. ## Agent tooling For all Kubernetes steps, except the `Run a kubectl script` step, the agent uses the `octopusdeploy/kubernetes-agent-tools-base` default container image to execute its workloads. It will correctly select and pull the version of the image that's specific to the cluster's version.
For the `Run a kubectl script` step, if there is a [container image](/docs/projects/steps/execution-containers-for-workers) defined in the step, then that container image is used. If one is not specified, the default container image is used. To override these automatically resolved tooling images, you can set the helm chart values of `scriptPods.worker.image.repository` and `scriptPods.worker.image.tag` for the agent running as a worker, or `scriptPods.deploymentTarget.image` and `scriptPods.deploymentTarget.tag` when running the agent as a deployment target. :::div{.warning} In Octopus Server versions prior to `2024.3.7669`, the Kubernetes agent erroneously used container images defined in *all* Kubernetes steps, not just the `Run a kubectl script` step. ::: This image contains the minimum required tooling to run Kubernetes workloads for Octopus Deploy, namely: - `kubectl` - `helm` - `powershell` ## Modifying the Helm installation The Helm installation of the Kubernetes agent can be modified after installation if changes to the installed values need to be made. To facilitate this, a dialog can be launched that provides a pre-populated Helm command where additional values can be set. It is launched from the context menu on the Kubernetes agent settings page, which can be found at **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Settings**. :::figure ![Kubernetes Agent Modify via Helm context menu item](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-modify-via-helm-context-menu.png) ::: Once launched, it presents a dialog prefilled with the correct namespace and helm release name. From there, this command can be modified, copied and executed in a terminal connected to the cluster.
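For example, the tooling image overrides described under Agent tooling above could be applied through this command by appending values like the following (a sketch; the repository and tag are illustrative placeholders, not a published image):

```bash
--set scriptPods.worker.image.repository="my-registry.example.com/custom-tools" \
--set scriptPods.worker.image.tag="1.0.0" \
```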
:::figure ![Kubernetes Agent Modify via Helm dialog](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-modify-via-helm-dialog.png) ::: ## Upgrading the Kubernetes agent The Kubernetes agent can be upgraded automatically by Octopus Server, manually in the Octopus portal, or via a `helm` command. ### Automatic updates :::div{.hint} Automatic updating was added in 2024.2.8584 ::: By default, the Kubernetes agent is automatically updated by Octopus Server when a new version is released. These version checks typically occur after a health check. When an update is required, Octopus will start a task to update the agent to the latest version. This behavior is controlled by the [Machine Policy](/docs/infrastructure/deployment-targets/machine-policies) associated with the agent. You can change this behavior to **Manually** in the [Machine policy settings](/docs/infrastructure/deployment-targets/machine-policies#configure-machine-updates). ### Manual updating via Octopus portal To check if a Kubernetes agent can be manually upgraded, navigate to the **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Connectivity** page. If the agent can be upgraded, there will be an *Upgrade available* banner. Clicking the **Upgrade to latest** button triggers the upgrade via a new task. If the upgrade fails, the previous version of the agent is restored.
:::figure ![Kubernetes Agent updated interface](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-upgrade-portal.png) ::: ### Helm upgrade command To upgrade a Kubernetes agent via `helm`, note the following fields from the **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Connectivity** page: - Helm Release Name - Namespace Then, from a terminal connected to the cluster containing the instance, execute the following command: ```bash helm upgrade --atomic --namespace NAMESPACE HELM_RELEASE_NAME oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` :::div{.hint} Replace NAMESPACE and HELM_RELEASE_NAME with the values noted ::: If, after the upgrade command has executed, you find that there are issues with the agent, you can roll back to the previous helm release by executing: ```bash helm rollback --namespace NAMESPACE HELM_RELEASE_NAME ``` ## Uninstalling the Kubernetes agent To fully remove the Kubernetes agent, you need to delete the agent from the Kubernetes cluster as well as delete the deployment target from Octopus Deploy. The deployment target deletion confirmation dialog will provide you with the commands to delete the agent from the cluster. Once these have been successfully executed, you can then click **Delete** and delete the deployment target. :::figure ![Kubernetes Agent delete dialog](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-delete-dialog.png) ::: # Upgrading the Agent Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/upgrading.md The Kubernetes agent is automatically kept up to date by Octopus Server when running periodic health checks.
## Disabling automatic upgrades Automatic upgrades can be disabled by updating the machine updates settings in your applied [machine policy](/docs/infrastructure/deployment-targets/machine-policies). ## When do we release new major versions When changes to the Kubernetes agent Helm chart necessitate a breaking change, we perform a major version increase to make this clear. The version of a Kubernetes agent is found by going to **Infrastructure** then into **Deployment Targets**; from there click on the **Kubernetes agent** of interest; on its **Connectivity** sub-page you will see 'Current Version'. :::figure ![Kubernetes agent default namespace](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-upgrade-version.png) ::: ## V1 Installed v1 instances will continue to operate as expected; however, they will receive no further updates other than security updates. While you may continue to use v1 of the helm-chart, it is highly recommended to perform an upgrade to v2 so you receive ongoing functional and security updates. As of Octopus Server 2024.4, version 1 Helm charts can be automatically upgraded to version 2 without manual intervention. For older versions of Octopus Server you can manually upgrade a v1 instance following the guide in the Kubernetes agent [documentation](https://github.com/OctopusDeploy/helm-charts/blob/main/charts/kubernetes-agent/migrations). Alternatively, existing v1 Kubernetes agents can be deleted from your server instance, and recreated as v2 agents via the installation workflow available in Octopus Server. # Support Policy for Kubernetes Versions Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/supported-versions-policy.md [The Kubernetes project](https://kubernetes.io/releases/version-skew-policy/#supported-versions) maintains release branches for the most recent three minor releases of Kubernetes. Octopus aims to follow this support policy as closely as makes sense.
## Kubernetes Agent The Kubernetes agent uses the [Kubernetes C# client](https://github.com/kubernetes-client/csharp) to interact with the Kubernetes API, so we are bound to their release cadence. The Kubernetes agent will receive an update within **3 months** of a new major release of the Kubernetes C# client being released. If your use case requires the latest and greatest version of Kubernetes before we have released a new version, please [contact support](https://octopus.com/company/contact) to discuss options. ### Support for older versions Each time the Kubernetes agent is updated to support a new version of Kubernetes, support for older versions will be dropped in line with the Kubernetes project's supported versions. Historically, the APIs in use have been stable and the Kubernetes agent has remained compatible with older versions of Kubernetes; however, we can make no guarantees of this in the future. We strongly recommend leaving automatic Kubernetes agent updates enabled and keeping your Kubernetes cluster up to date in line with the latest supported version. If you must maintain support for older versions, you can configure how the Kubernetes agent is automatically updated with [machine policies](/docs/infrastructure/deployment-targets/machine-policies#configure-machine-updates). # Kubernetes API Source: https://octopus.com/docs/kubernetes/targets/kubernetes-api.md Kubernetes API targets are used by the [Kubernetes steps](/docs/deployments/kubernetes) to define the context in which deployments and scripts are run. Conceptually, a Kubernetes API target represents a permission boundary and an endpoint. Kubernetes [permissions](https://oc.to/KubernetesRBAC) and [quotas](https://oc.to/KubernetesQuotas) are defined against a namespace, and both the account and namespace are captured as a Kubernetes API target, along with the cluster endpoint URL. A namespace is required when registering the Kubernetes API target with Octopus Deploy.
By default, the namespace used in the registration is used in health checks and deployments. The namespace can be overridden in the deployment process. :::div{.hint} From **Octopus 2022.2**, AKS target discovery has been added to the Kubernetes Target Discovery Early Access Preview and is enabled via **Configuration ➜ Features**. From **Octopus 2022.3**, EKS clusters are also supported. ::: ## Discovering Kubernetes targets Octopus can discover Kubernetes API targets in *Azure Kubernetes Service* (AKS) or *Amazon Elastic Container Service for Kubernetes* (EKS) as part of your deployment using tags on your AKS or EKS resource. :::div{.hint} From **Octopus 2022.3**, you can configure the well-known variables used to discover Kubernetes targets when editing your deployment process in the Web Portal. See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information. ::: To discover targets use the following steps: - Add an Azure account variable named **Octopus.Azure.Account** or the appropriate AWS authentication variables ([more info here](/docs/infrastructure/deployment-targets/cloud-target-discovery/#aws)) to your project. - [Add cloud resource template tags](/docs/infrastructure/deployment-targets/cloud-target-discovery/#tag-cloud-resources) to your cluster so that Octopus can match it to your deployment step and environment. - Add any of the Kubernetes built-in steps to your deployment process. During deployment, the target tag on the step, along with the environment being deployed to, is used to discover the Kubernetes targets to deploy to. Discovered Kubernetes targets will not have a namespace set; the namespace on the step will be used during deployment (or the default namespace in the cluster if no namespace is set on the step). See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information.
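As an illustration of the tagging step, an AKS cluster could be tagged from the Azure CLI as below. This is a sketch: the resource group and cluster names are placeholders, and the `octopus-environment`/`octopus-role` tag names are assumed to follow the convention described in the cloud target discovery documentation:

```bash
# Tag the cluster so Octopus can match it to a step's target tag and the
# environment being deployed to (names here are illustrative).
az aks update \
  --resource-group my-resource-group \
  --name my-aks-cluster \
  --tags octopus-environment=development octopus-role=my-application
```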
## A sample config file The YAML file below shows a sample **kubectl** configuration file. Existing Kubernetes users will likely have a similar configuration file. A number of the fields in this configuration file map directly to the fields in an Octopus Kubernetes API target, as noted in the next section. ```yaml apiVersion: v1 clusters: - cluster: certificate-authority-data: XXXXXXXXXXXXXXXX... server: https://kubernetes.example.org:443 name: k8s-cluster contexts: - context: cluster: k8s-cluster user: k8s_user name: k8s_user current-context: k8s-cluster kind: Config preferences: {} users: - name: k8s_user user: client-certificate-data: XXXXXXXXXXXXXXXX... client-key-data: XXXXXXXXXXXXXXXX... token: 1234567890xxxxxxxxxxxxx - name: k8s_user2 user: password: some-password username: exp - name: k8s_user3 user: token: 1234567890xxxxxxxxxxxxx ``` ## Add a Kubernetes target 1. Navigate to **Infrastructure ➜ Deployment Targets**, and click **Add Deployment Target**. 2. Select **KUBERNETES** and click **ADD** on the Kubernetes API card. 3. Enter a display name for the Kubernetes API target. 4. Select at least one [environment](/docs/infrastructure/environments) for the target. 5. Select at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the target. 6. Select the authentication method. Kubernetes targets support multiple [account types](https://oc.to/KubernetesAuthentication): - **Usernames/Password**: In the example YAML above, the username is found in the `username` field, and the password is found in the `password` field. These values can be added as an Octopus [Username and Password](/docs/infrastructure/accounts/username-and-password) account. - **Tokens**: In the example YAML above, the token is defined in the `token` field. This value can be added as an Octopus [Token](/docs/infrastructure/accounts/tokens) account. 
- **Azure Service Principal**: When using an AKS cluster, [Azure Service Principal accounts](/docs/infrastructure/accounts/azure) allow Azure Active Directory accounts to be used. The Azure Service Principal is only used with AKS clusters. To log into ACS or ACS-Engine clusters, standard Kubernetes credentials like certificates or service account tokens must be used. :::div{.hint} From Kubernetes 1.26, [the default azure auth plugin has been removed from kubectl](https://github.com/kubernetes/kubernetes/blob/ad18954259eae3db51bac2274ed4ca7304b923c4/CHANGELOG/CHANGELOG-1.26.md#deprecation), so clusters targeting Kubernetes 1.26+ that have [Local Account Access disabled](https://oc.to/AKSDisableLocalAccount) in Azure will require the worker or execution container to have access to the [kubelogin](https://oc.to/Kubelogin) CLI tool, as well as the Octopus Deployment Target setting **Login with administrator credentials** disabled. This requires **Octopus 2023.3**. If Local Account access is enabled on the AKS cluster, the Octopus Deployment Target setting **Login with administrator credentials** will also need to be enabled so that the Local Accounts are used instead of the default auth plugin. ::: - **AWS Account**: When using an EKS cluster, [AWS accounts](/docs/infrastructure/accounts/aws) allow IAM accounts and roles to be used. The interaction between AWS IAM and Kubernetes Role Based Access Control (RBAC) can be tricky. We highly recommend reading the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html). :::div{.hint} **Common issues:** From **Octopus 2022.4**, you can use the `aws cli` to authenticate to an EKS cluster; earlier versions rely on the `aws-iam-authenticator`. If using the AWS account type, the Octopus Server or worker must have either the `aws cli` (1.16.156 or later) or `aws-iam-authenticator` executable on the path. If both are present the `aws cli` will be used.
The EKS API version is selected based on the kubectl version. For Octopus 2022.3 and earlier, `kubectl` `1.23.6` and `aws-iam-authenticator` version `0.5.3` or earlier must be used; these target `v1alpha1` endpoints. For `kubectl` `1.24.0` and later, `v1beta1` endpoints are used and versions `0.5.5` and later of the `aws-iam-authenticator` are required. See the [AWS documentation](https://oc.to/AWSEKSKubectl) for download links. The error `You must be logged into the server (the server has asked for the client to provide credentials)` generally indicates the AWS account does not have permissions in the Kubernetes cluster. When you create an Amazon EKS cluster, the IAM entity user or role that creates the cluster is automatically granted `system:masters` permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the `aws-auth` ConfigMap within Kubernetes. See [Managing Users or IAM Roles for your Cluster](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html). ::: - **Google Cloud Account**: When using a GKE cluster, [Google Cloud accounts](/docs/infrastructure/accounts/google-cloud) allow you to authenticate using a Google Cloud IAM service account. :::div{.hint} From `kubectl` version `1.26`, authentication against a GKE cluster [requires an additional plugin called `gke-gcloud-auth-plugin` to be available](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke) on the PATH where your step is executing. If you manage your own execution environment (e.g. self-hosted workers, custom execution containers, etc.), you will need to ensure the auth plugin is available alongside `kubectl`. ::: - **Client Certificate**: When authenticating with certificates, both the certificate and private key must be provided.
In the example YAML above, the `client-certificate-data` field is a base64-encoded certificate, and the `client-key-data` field is a base64-encoded private key (both have been truncated for readability in this example). The certificate and private key can be combined and saved in a single pfx file. The script below accepts the base64-encoded certificate and private key and uses the [Windows OpenSSL binary from Shining Light Productions](https://oc.to/OpenSSLWindows) to save them in a single pfx file. ```powershell param ( [Parameter(Mandatory = $true)] [string]$Certificate, [Parameter(Mandatory = $true)] [string]$PrivateKey ) [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($Certificate)) | ` Set-Content -Path certificate.crt [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($PrivateKey)) | ` Set-Content -Path private.key C:\OpenSSL-Win32\bin\openssl pkcs12 ` -passout pass: ` -export ` -out certificate_and_key.pfx ` -in certificate.crt ` -inkey private.key ``` ```bash #!/bin/bash echo $1 | base64 --decode > certificate.crt echo $2 | base64 --decode > private.key openssl pkcs12 \ -passout pass: \ -export \ -out certificate_and_key.pfx \ -in certificate.crt \ -inkey private.key ``` This file can then be uploaded to the [Octopus certificate management area](/docs/deployments/certificates), after which it will be made available to the Kubernetes target. The Certificates Library can be accessed via **Deploy ➜ Manage ➜ Certificates**. 7. Enter the Kubernetes cluster URL. Each Kubernetes target requires the cluster URL, which is defined in the `Kubernetes cluster URL` field. In the example YAML above, this is defined in the `server` field. 8. Optionally, select the certificate authority if you've added one. Kubernetes clusters are often protected with self-signed certificates. In the YAML example above, the certificate is saved as a base64-encoded string in the `certificate-authority-data` field.
To communicate with a Kubernetes cluster with a self-signed certificate over HTTPS, you can either select the **Skip TLS verification** option, or supply the certificate in `The optional cluster certificate authority` field. Decoding the `certificate-authority-data` field results in a string that looks something like this (the example has been truncated for readability): ```text -----BEGIN CERTIFICATE----- XXXXXXXXXXXXXXXX... -----END CERTIFICATE----- ``` Save this text to a file called `ca.pem`, and upload it to the [Octopus certificate management area](https://oc.to/CertificatesDocumentation). The certificate can then be selected in the `cluster certificate authority` field. 9. Enter the Kubernetes Namespace. When a single Kubernetes cluster is shared across environments, resources deployed to the cluster will often be separated by environment and by application, team, or service. In this situation, the recommended approach is to create a namespace for each application and environment (e.g., `my-application-development` and `my-application-production`), and create a Kubernetes service account that has permissions to just that namespace. Where each environment has its own Kubernetes cluster, namespaces can be assigned to each application, team or service (e.g. `my-application`). In both scenarios, a target is then created for each Kubernetes cluster and namespace. The `Target Role` tag is set to the application name (e.g. `my-application`), and the `Environments` are set to the matching environment. When a Kubernetes target is used, the namespace it references is created automatically if it does not already exist. 10. Select a worker pool for the target. To make use of the Kubernetes steps, the Octopus Server or workers that will run the steps need to have the `kubectl` executable installed. Linux workers also need to have the `jq`, `xargs` and `base64` applications installed. 11. Click **SAVE**. 
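The certificate authority decoding described in step 8 can be sketched as follows; the `CA_DATA` value is a placeholder encoding of a truncated certificate, standing in for the value copied from your own configuration file:

```bash
# Placeholder for the certificate-authority-data value from your
# kubeconfig; this sample decodes to a truncated PEM block.
CA_DATA="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSS4uLgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="

# Decode and save as ca.pem, ready to upload to the Octopus certificate
# management area.
printf '%s' "$CA_DATA" | base64 --decode > ca.pem
head -n 1 ca.pem   # -----BEGIN CERTIFICATE-----
```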
:::div{.warning} Setting the Worker Pool in a Deployment Process will override the Worker Pool defined directly on the Deployment Target. ::: ## Create service accounts The recommended approach to configuring a Kubernetes target is to have a service account for each application and namespace. In the example below, a service account called `jenkins-development` is created to represent the deployment of an application called `jenkins` to an environment called `development`. This service account has permissions to perform all operations (i.e. `get`, `list`, `watch`, `create`, `update`, `patch`, `delete`) on the resources created by the `Deploy kubernetes containers` step (i.e. `deployments`, `replicasets`, `pods`, `services`, `ingresses`, `secrets`, `configmaps`). ```yaml --- kind: Namespace apiVersion: v1 metadata: name: jenkins-development --- apiVersion: v1 kind: ServiceAccount metadata: name: jenkins-deployer namespace: jenkins-development --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: jenkins-development name: jenkins-deployer-role rules: - apiGroups: ["", "extensions", "apps"] resources: ["deployments", "replicasets", "pods", "services", "ingresses", "secrets", "configmaps"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - apiGroups: [""] resources: ["namespaces"] verbs: ["get"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: jenkins-deployer-binding namespace: jenkins-development subjects: - kind: ServiceAccount name: jenkins-deployer apiGroup: "" roleRef: kind: Role name: jenkins-deployer-role apiGroup: "" ``` In cases where it is necessary to have an administrative service account created (for example, when using AWS EKS because the initial admin account is tied to an IAM role), the following YAML can be used. 
```yaml apiVersion: v1 kind: ServiceAccount metadata: name: octopus-administrator namespace: default --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: octopus-administrator-binding namespace: default subjects: - kind: ServiceAccount name: octopus-administrator namespace: default apiGroup: "" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin ``` Creating service accounts automatically results in a token being generated. The PowerShell snippet below returns the token for the `jenkins-deployer` account. ```powershell $user="jenkins-deployer" $namespace="jenkins-development" $data = kubectl get secret $(kubectl get serviceaccount $user -o jsonpath="{.secrets[0].name}" --namespace=$namespace) -o jsonpath="{.data.token}" --namespace=$namespace [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($data)) ``` This bash snippet also returns the token value. ```bash kubectl get secret $(kubectl get serviceaccount jenkins-deployer -o jsonpath="{.secrets[0].name}" --namespace=jenkins-development) -o jsonpath="{.data.token}" --namespace=jenkins-development | base64 --decode ``` The token can then be saved as a Token Octopus account, and assigned to the Kubernetes target. :::div{.warning} Kubernetes versions 1.24+ no longer automatically create tokens for service accounts and they need to be manually created using the **create token** command: ```bash kubectl create token jenkins-deployer ``` From Kubernetes version 1.29, a warning will be displayed when using automatically created Tokens. Make sure to rotate any Octopus Token Accounts to use manually created tokens via **create token** instead. ::: ## Kubectl Kubernetes targets use the `kubectl` executable to communicate with the Kubernetes cluster. This executable must be available on the path on the target where the step is run. 
When using workers, this means the `kubectl` executable must be in the path on the worker that is executing the step. Otherwise, the `kubectl` executable must be in the path on the Octopus Server itself. ## Vendor Authentication Plugins {#vendor-authentication-plugins} Prior to `kubectl` version 1.26, the logic for authenticating against various cloud providers (e.g. Azure Kubernetes Service, Google Kubernetes Engine) was included "in-tree" in `kubectl`. From version 1.26 onward, the cloud-vendor specific authentication code has been removed from `kubectl`, in favor of a plugin approach. What this means for your deployments: - Amazon Elastic Kubernetes Service (EKS): No change required. Octopus already supports using either the AWS CLI or the `aws-iam-authenticator` plugin. - Azure Kubernetes Service (AKS): No change required. The way Octopus authenticates against AKS clusters never used the in-tree Azure authentication code, and will continue to function as normal. - From **Octopus 2023.3**, you will need to ensure that the [kubelogin](https://oc.to/Kubelogin) CLI tool is also available if you have disabled local Kubernetes accounts. - Google Kubernetes Engine (GKE): If you upgrade to `kubectl` 1.26 or higher, you will need to ensure that the `gke-gcloud-auth-plugin` tool is also available. More information can be found on [Google's announcement about this change](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke). ## Helm When a Kubernetes target is used with a Helm step, the `helm` executable must be on the target where the step is run. ## Dynamic targets Kubernetes targets can be created dynamically at deploy time with the PowerShell function `New-OctopusKubernetesTarget`. See [Create Kubernetes Target Command](/docs/infrastructure/deployment-targets/dynamic-infrastructure/kubernetes-target) for more information.
## Troubleshooting If you're running into issues with your Kubernetes targets, it's possible you'll be able to resolve the issue using some of these troubleshooting tips. If this section doesn't help, please [get in touch](https://octopus.com/support). ### Debugging Setting the Octopus variable `Octopus.Action.Kubernetes.OutputKubeConfig` to `True` for any deployment or runbook using a Kubernetes target will cause the generated kube config file to be printed into the logs (with passwords masked). This can be used to verify the configuration file used to connect to the Kubernetes cluster. By default, successful output from Kubernetes CLI tools (`kubectl`, `helm`, `aws`, `az`, `gcloud`, etc.) is logged at the Verbose level, which is only visible when the task log level is set to Verbose. Setting the Octopus variable `Octopus.Action.Kubernetes.LogCliOutputAsInfo` to `True` will promote this output to the Info level so it appears in the Standard task log. This is useful when debugging deployments to see the full output of these tools without needing to switch the log level to Verbose for the entire deployment. If Kubernetes targets fail their health checks, the best way to diagnose the issue is to run a `Run a kubectl CLI Script` step with a script that can inspect the various settings that must be in place for a Kubernetes target to function correctly. Octopus deployments will run against unhealthy targets by default, so the fact that the target failed its health check does not prevent these kinds of debugging steps from running. An example script for debugging a Kubernetes target is shown below: ```powershell $ErrorActionPreference = 'SilentlyContinue' # The details of the AWS Account. This will be populated for EKS clusters using the AWS authentication scheme. # AWS_SECRET_ACCESS_KEY will be redacted, but that means it was populated successfully.
Write-Host "Getting the AWS user" Write-Host "AWS_ACCESS_KEY_ID: $($env:AWS_ACCESS_KEY_ID)" Write-Host "AWS_SECRET_ACCESS_KEY: $($env:AWS_SECRET_ACCESS_KEY)" # The details of the Azure Account. This will be populated for an AKS cluster using the Azure authentication scheme. Write-Host "Getting the Azure user" cat azure-cli/azureProfile.json # View the generated config. kubectl will redact any secrets from this output. Write-Host "kubectl config view" kubectl config view # View the environment variable that defines the kube config path Write-Host "KUBECONFIG is $($env:KUBECONFIG)" # Save kube config as artifact (will expose credentials in log). This is useful to take the generated config file # and run it outside of Octopus. # New-OctopusArtifact $env:KUBECONFIG # List any proxies. Failure to connect to the cluster when a proxy is configured may be caused by the proxy. Write-Host "HTTP_PROXY: $($env:HTTP_PROXY)" Write-Host "HTTPS_PROXY: $($env:HTTPS_PROXY)" Write-Host "NO_PROXY: $($env:NO_PROXY)" # Execute the same command that the target health check runs. Write-Host "Simulating a health check" kubectl version --client --output=yaml # Write a custom kube config. This is useful when you have a config that works, and you want to confirm it works in Octopus.
Write-Host "Health check with custom config file" Set-Content -Path "my-config.yml" -Value @" apiVersion: v1 clusters: - cluster: certificate-authority-data: ca-cert-goes-here server: https://myk8scluster name: test contexts: - context: cluster: test user: test_admin name: test_admin - context: cluster: test user: test name: test current-context: test kind: Config preferences: {} users: - name: test_admin user: token: auth-token-goes-here - name: test user: client-certificate-data: certificate-data-goes-here client-key-data: certificate-key-goes-here "@ kubectl version --kubeconfig my-config.yml exit 0 ``` ### API calls failing If you are finding that certain API calls are failing, for example `https://your-octopus-url/api/users/Users-1/apikeys?take=2147483647`, it's possible that your WAF is blocking the traffic. To confirm this, you should investigate your WAF logs to determine why the API call is being blocked and make the necessary adjustments to your WAF rules. ## Learn more - [Kubernetes Deployment](/docs/deployments/kubernetes) - [Kubernetes blog posts](https://octopus.com/blog/tag/kubernetes/1) # Argo CD Live Object Status Source: https://octopus.com/docs/argo-cd/live-object-status.md Argo CD Live Object Status shows the live status of the Kubernetes resources deployed by the Argo CD Applications mapped to the project. This lets you monitor and safely troubleshoot Argo CD-controlled deployments from within Octopus Deploy. :::div{.info} Outwardly, this is an identical capability to that available for [Kubernetes based projects](/docs/kubernetes/live-object-status). However, when used with Argo CD, neither the [Kubernetes agent](/docs/kubernetes/targets/kubernetes-agent) nor the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) is required.
::: ## Where it is available Using Argo CD Live Object Status requires the following: - Octopus Deploy 2025.4+ (Sync Status requires 2026.1+) - A registered [Argo CD Instance](/docs/argo-cd/instances/) - [Annotations](/docs/argo-cd/annotations) on your Argo CD Applications, mapping them onto Octopus Deploy projects - A deployment process containing an Argo CD step (either [Update Argo CD Image Tags](/docs/argo-cd/steps/update-application-image-tags) or [Update Argo CD Application Manifests](/docs/argo-cd/steps/update-application-manifests)) ## How to use Live Status Once the prerequisites have been fulfilled, toggle the switch on the dashboard to show live status in place of deployment status. :::figure ![Octopus Argo CD Live Status Dashboard](/docs/img/argo-cd/argo-cd-live-status-dashboard.png) ::: Octopus populates the Live Status Table with content taken directly from Argo. :::figure ![Octopus Argo CD Live Status Objects](/docs/img/argo-cd/argo-cd-live-status-objects.png) ::: ### Project Health Status The project health status is a roll-up of the health of all objects: | Label | Status Icon | Description | | :---------- | :------------------------------------------------------------------------------: |:-------------------------------------------------------------------------------------------------------------------------| | Progressing | | One or more objects of the mapped application are in a progressing state | | Healthy | | The objects in the cluster match those specified in the applications’ source git repositories, and are executing correctly | | Unknown | | We’re having trouble getting live status updates for this application | | Degraded | | Your objects experienced errors after the deployment completed | | Missing | | One or more desired objects are missing from the cluster | | Unavailable | | Application live status is unavailable because your last deployment failed | | Waiting | | Application live status will be available once the deployment completes
| | Stale | stale icon | Status information is stale. No data has been received in the last 10 minutes | ### Project Sync Status Sync Status tracks whether the changes Octopus pushed to git still match what Argo CD has synced. Octopus recalculates this after each deployment and whenever Argo CD reports a sync event. | Label | Status Icon | Description | | :---------- | :------------------------------------------------: |:-----------------------------------------------------------------------------------------------------------------------------| | In Sync | | Argo CD reports the application is synced and the git configuration still matches what Octopus last applied | | Out of Sync | | Argo CD has detected that the desired state in the cluster differs from the application’s git repository | | Git Drift | | Octopus has detected that the changes it applied to git have been modified since the last deployment (e.g. by a manual edit) | | Unknown | | We’re having trouble getting sync status updates for this application | | Unavailable | | Application sync status is unavailable because your last deployment failed | | Waiting | | Application sync status will be available once the deployment completes | ### Object Health Status | Label | Status Icon | Description | | :---------- | :------------------------------------------------------------------------------: |:------------------------------------------------------------------------------| | Progressing | | Object is attempting to reach the desired state | | Healthy | | Object is in sync and reporting that it is running as expected | | Unknown | | We don't have information about the live status of this object | | Degraded | | Object has run into a problem, check the logs or events to find out more | | Missing | | Object is missing from the cluster | | Suspended | | Job is not currently running | | Stale | stale icon | Status information is stale. 
No data has been received in the last 10 minutes | ### Object Sync Status | Label | Status Icon | Description | | :---------- | :-------------------------------------------: |:-----------------------------------------------------------------------------------------------------------------------------| | In Sync | | Argo CD reports the object is synced and the git configuration still matches what Octopus last applied | | Out of Sync | | Argo CD has detected that the desired state in the cluster differs from the application’s git repository. | | Git Drift | | Octopus has detected that the changes it applied to git have been modified since the last deployment (e.g. by a manual edit) | | Unknown | | We don't have information about the live status of this object | ### Detailed object information Selecting an object or application name in the table will open a drawer containing detailed information. The drawer contains up-to-date information regarding the selected object: - Summary - Events - Logs - Kubernetes YAML manifest For Argo CD, all of these data fields are fetched on demand from your Argo instance. Tailing logs is not currently supported. #### Manifest Diffs Octopus presents manifest diffs in the *opposite order* to that shown in Argo. In Argo, the left panel shows the live manifest in the cluster, and the right-panel shows the manifest that will be deployed when the application/resource is synced. In Octopus, the left panel indicates "what was most recently written to the git repository", while the right shows the live manifest. 
| | Left | Right | |---------|--------------------------------------------------------------|-------------------------------------------------------------------| | Octopus | Manifest written to git repository as part of last release | The live manifest in the cluster | | Argo CD | The live manifest in the cluster | The manifest in the git repository, which will be applied on sync | As an example, in the following images, the date of deployment was updated in a configmap by an Octopus deployment. :::figure ![ArgoCD Diff View](/docs/img/argo-cd/argo-cd-diff.png) ::: The same change in Octopus will appear as follows - note how the changes appear in opposite columns. :::figure ![Octopus Diff View](/docs/img/argo-cd/octopus-argo-cd-diff.png) ::: # Kubernetes Live Object Status Source: https://octopus.com/docs/kubernetes/live-object-status.md Kubernetes Live Object Status shows the live status of your Kubernetes objects after they have been deployed. This allows you to monitor and safely troubleshoot your Kubernetes application directly from within Octopus Deploy. :::figure ![Live status page](/docs/img/kubernetes/live-object-status/live-status-page.png) ::: ## Where it is available Using Kubernetes Live Object Status requires the following: - Octopus Deploy 2025.3+ - A [Kubernetes Agent](/docs/kubernetes/targets/kubernetes-agent) target - A project with a deployment process containing Kubernetes steps ## How to use Live Status Once you have the Kubernetes monitor enabled on your [Kubernetes Agent](/docs/kubernetes/targets/kubernetes-agent), simply toggle the switch on the dashboard to show live status in place of the deployment status. :::figure ![A screenshot of the Space dashboard showing live status](/docs/img/kubernetes/live-object-status/live-status-space-dashboard.png) ::: Octopus displays individual statuses at the object level as well as a summarized status for an application.
:::figure ![Live status page](/docs/img/kubernetes/live-object-status/live-status-page.png) ::: ### Application Health Status The application health status is a roll-up of the health of all objects: | Label | Status Icon | Description | | :---------- | :-------------------------------------------------------------------------------: | :---------------------------------------------------------------------------- | | Progressing | | Objects in your application are currently in a progressing state | | Healthy | | The objects in your cluster match what was specified in the last deployment | | Unknown | | We're having trouble getting live status updates for this application | | Degraded | | Your objects experienced errors after the deployment completed | | Missing | | Objects in your application are currently in a missing state | | Unavailable | | Application live status is unavailable because your last deployment failed | | Waiting | | Application live status will be available once the deployment completes | | Stale | stale icon | Status information is stale. No data has been received in the last 10 minutes | ### Application Sync Status Sync Status tracks whether the changes Octopus deployed still match the resources in the cluster.
| Label | Status Icon | Description | | :---------- | :------------------------------------------------: | :------------------------------------------------------------------------- | | In Sync | | The objects on your cluster match what you last deployed | | Out of Sync | | The objects on your cluster no longer match what you last deployed | | Unknown | | We’re having trouble getting sync status updates for this application | | Unavailable | | Application sync status is unavailable because your last deployment failed | | Waiting | | Application sync status will be available once the deployment completes | ### Object Health Status | Label | Status Icon | Description | | :---------- | :------------------------------------------------------------------------------: | :---------------------------------------------------------------------------- | | Progressing | | Object is attempting to reach the desired state | | Healthy | | Object is in sync and reporting that it is running as expected | | Unknown | | We don't have information about the live status of this object | | Degraded | | Object has run into a problem, check the logs or events to find out more | | Missing | | Object is missing from the cluster | | Suspended | | Job is not currently running | | Stale | stale icon | Status information is stale. 
No data has been received in the last 10 minutes | ### Object Sync Status | Label | Status Icon | Description | | :---------- | :-----------------------------------------: | :------------------------------------------------------------- | | In Sync | | Object manifest matches what was applied | | Out of Sync | | Object manifest is not the same as what was applied | | Unknown | | We don't have information about the live status of this object | Take a look at our [troubleshooting guide](/docs/kubernetes/live-object-status/troubleshooting) for details on why you may see some object statuses. ### Detailed object information Each object reported back by the Kubernetes monitor can be selected to provide detailed information, including events, logs, and the manifest currently on the cluster. :::figure ![Object summary](/docs/img/kubernetes/live-object-status/live-status-drawer-summary.png) ::: #### Events Events are fetched on demand from the running object. Octopus reads and presents events in a similar way to `kubectl`. :::figure ![Object events](/docs/img/kubernetes/live-object-status/live-status-drawer-events.png) ::: #### Logs Logs are fetched on demand from the running object. We do not currently support tailing logs, but it is on our roadmap for the near future. :::figure ![Object logs](/docs/img/kubernetes/live-object-status/live-status-drawer-logs.png) ::: #### Manifests The first manifest shown is the live manifest as reported by the cluster back to Octopus. When viewing an object that has been applied to your cluster, you are able to view the applied manifest and see any differences between them using the controls at the top of the drawer. :::figure ![Object manifest](/docs/img/kubernetes/live-object-status/live-status-drawer-manifest.png) ::: ##### Diffs When the show diff toggle is enabled, we compare the live manifest that we expect to see on the left with what the cluster is reporting on the right.
Read about [applied manifest diffs](/docs/kubernetes/deployment-verification/applied-manifests/diffs) for more details on how to interpret the diff viewer. :::figure ![Object manifest diffs](/docs/img/kubernetes/live-object-status/live-status-drawer-manifest-diffs.png) ::: ## How it works The Kubernetes Agent has a new component called the Kubernetes monitor, which also runs inside the Kubernetes cluster. Read more about the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) here. During a deployment, Octopus will capture any applied Kubernetes manifests and send them to the monitor. The monitor uses these manifests to track the deployed objects in the cluster, keeping track of their synchronization and health. ### Script steps The built-in Kubernetes steps automatically report the applied manifests for deployments; however, Octopus needs a bit of help when you're making changes using kubectl script steps. To notify Octopus which Kubernetes resources you want tracked, we have Bash and PowerShell helper functions available to use. You can choose between passing the manifest as a variable or passing the file path directly. Only the "Run a kubectl script" step will correctly report manifests; regular "Run a script" steps are not supported. :::div{.info} You still need to apply the Kubernetes manifests to your cluster. These functions only notify Octopus that you expect the resources to be created.
::: Available in Octopus Server versions: - 2025.4.10333+ - 2026.1.4557+ #### Bash ```bash read -r -d '' manifest << EOM apiVersion: v1 kind: Namespace metadata: name: "example" labels: name: "example" EOM report_kubernetes_manifest "$manifest" ``` ```bash report_kubernetes_manifest_file "$ManifestFilePath" ``` #### PowerShell ```powershell $manifest = @" apiVersion: v1 kind: Namespace metadata: name: "example" labels: name: "example" "@ Report-KubernetesManifest -manifest $manifest ``` ```powershell Report-KubernetesManifestFile -path $ManifestFilePath ``` ## User permissions Viewing the data returned from the Kubernetes monitor from within Octopus requires `DeploymentView` permissions. This data includes the resource and application status, as well as pod logs and events for objects being monitored. This may be a change in security posture that your team should carefully consider. ## Secrets ### Octopus sensitive variables As always, we treat secret data as carefully as we possibly can. Practically, this means that we redact any detected Octopus sensitive variables from: - Manifests - Logs - Events If we do not have all the required information to adequately redact something coming back from a Kubernetes cluster, we will opt to prevent the user from seeing it altogether. With that said, we highly recommend: 1. Keeping all sensitive information in Kubernetes secrets so it can be redacted at the source 2. Never logging sensitive values in containers The flexibility that Octopus variables provide means that sensitive variables can turn up in unexpected ways, and so our redaction can only be best effort. ### Kubernetes secrets The well-defined structure of Kubernetes secrets allows us to confidently redact secret data. To ensure that we never exfiltrate secret data that Octopus is not privy to, the Kubernetes monitor salts and hashes the secret data using sha256.
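As a rough illustration of this idea (not the monitor's actual implementation), a salted SHA-256 digest changes whenever the underlying value changes, without ever revealing the value itself:

```shell
# Illustrative only: detect that a secret changed without storing its value.
# The salt and secret values below are made-up examples.
salt="a-random-salt"

hash_secret() {
  printf '%s%s' "$salt" "$1" | sha256sum | awk '{print $1}'
}

before=$(hash_secret "my-secret-value")
after=$(hash_secret "my-rotated-value")

if [ "$before" != "$after" ]; then
  echo "secret changed"
fi
```

Comparing digests reveals only *that* a value changed, which is how a hash-based approach can flag drift without learning the secret.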
By hashing secrets, Octopus can tell you when something changed in your secret, but Octopus will never know what the secrets are unless you have populated them using Octopus sensitive variables. Please be aware that outputting Kubernetes secrets into pod logs may result in them being sent unredacted if they are not sourced from Octopus sensitive variables originally. ## Configuration ### Prioritize health status on dashboards There can be [many reasons](/docs/kubernetes/live-object-status/troubleshooting#why-is-an-object-out-of-sync) that a particular object is marked as out of sync; some of these are not critical to the day-to-day operations of your application. In these cases, marking the entire application as out of sync on all dashboards may be more alarming than necessary. To counteract this, there is a project setting that will prioritize health statuses over the sync status of your application. When enabled, the sync status of objects will not be considered when calculating the application status. This setting defaults to on for all projects, but may change in the future. ## Known issues and limitations ### Excluded steps The desired object list is compiled from objects that were applied during the last deployment. If steps are excluded during a deployment, then live status will not be shown for objects that were applied in those steps. Please avoid skipping steps that deploy Kubernetes objects. ### Runbooks are not supported Objects modified by Runbooks are not monitored. Please deploy the objects via a Deployment if you want them to be monitored. # Load Balancers Source: https://octopus.com/docs/installation/load-balancers.md Octopus Deploy can work with any HTTP/HTTPS load balancer technology. There are plenty of options when it comes to choosing a load balancer to direct user traffic between each of the Octopus Server nodes.
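Whatever load balancer you choose, its health probe only needs to map the status codes returned by the node ping endpoint (covered in the next section) to a node state. The sketch below illustrates that mapping; `OCTOPUS_NODE_URL` is a placeholder for your node's address:

```shell
# Sketch of a health probe for an Octopus Server node behind a load balancer.
# OCTOPUS_NODE_URL is a placeholder, e.g. https://octopus-node1.example.com
node_state() {
  case "$1" in
    200) echo "healthy" ;;    # online and accepting traffic
    418) echo "draining" ;;   # online, but in drain mode for maintenance
    *)   echo "offline" ;;    # offline, or something has gone wrong
  esac
}

# Example probe (requires a reachable node):
# status=$(curl -s -o /dev/null -w '%{http_code}' "$OCTOPUS_NODE_URL/api/octopusservernodes/ping")
# node_state "$status"
```

Most load balancers express this same mapping declaratively (expected status codes on a health check) rather than as a script, but the logic is the same.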
## Load Balancer Basics Octopus Server provides a health check endpoint for your load balancer to ping: `/api/octopusservernodes/ping`. :::figure ![Load balancer ping UI](/docs/img/shared-content/administration/images/load-balance-ping.png) ::: Making a standard `HTTP GET` request to this URL on your Octopus Server nodes will return: - HTTP Status Code `200 OK` as long as the Octopus Server node is online and not in drain mode. - HTTP Status Code `418 I'm a teapot` when the Octopus Server node is online, but it is currently in drain mode preparing for maintenance. - Anything else indicates the Octopus Server node is offline, or something has gone wrong with this node. :::div{.hint} The Octopus Server node configuration is also returned as JSON in the HTTP response body. ::: We typically recommend using a round-robin (or similar) approach for sharing traffic between the nodes in your cluster, as the Octopus Web Portal is stateless. All package uploads are sent as a POST to the REST API endpoint `/api/[SPACE-ID]/packages/raw`. Because the REST API will be behind a load balancer, you'll need to configure the following on the load balancer: - Timeout: Octopus is designed to handle 1 GB+ packages, which takes longer than the typical http/https timeout to upload. - Request Size: Octopus does not have a size limit on the request body for packages. Some load balancers only allow 2 or 3 MB files by default. ## Polling Tentacles Polling tentacles deserve special attention due to how they work with Octopus Deploy. Each node that processes tasks must be registered with every polling tentacle. We recommend having a dedicated URL for each node in the primary region and routing all traffic through a load balancer or a traffic manager. When you have to fail over to the secondary region, update the dedicated URLs to point to a corresponding node in the secondary region. :::div{.warning} Important! You must configure the traffic to be "pass through" with no SSL off-loading. 
The Tentacles and Octopus Deploy establish a two-way trust via certificates. If a third, unknown certificate is introduced, the Tentacle and Octopus Deploy will reject the connection. ::: ## gRPC Services Several Octopus features (e.g. Kubernetes Live Object Status and Argo CD integration) rely on communications via gRPC that require specific configuration to account for certificate trust. Octopus generates a self-signed certificate for gRPC communications. When a gRPC client needs to connect to Octopus via a load balancer, there are two common methods to achieve this: 1. Using TLS/SSL bridging, with verification disabled between the load balancer and Octopus. Additional configuration will be required for the gRPC client to trust the load balancer certificate. 2. Using TLS/SSL passthrough. ## Third Party Load Balancers This section contains information on how to set up third-party load balancers for use with Octopus High Availability: - Local Options - [Using NGINX as a reverse proxy with Octopus](/docs/installation/load-balancers/use-nginx-as-reverse-proxy) - [Using IIS as a reverse proxy with Octopus](/docs/installation/load-balancers/use-iis-as-reverse-proxy) - [Configuring Netscaler](/docs/installation/load-balancers/configuring-netscaler) - [AWS Load Balancers](/docs/installation/load-balancers/aws-load-balancers) - [Azure Load Balancers](/docs/installation/load-balancers/azure-load-balancers) - [GCP Load Balancers](/docs/installation/load-balancers/gcp-load-balancers) # Octopus Administration Source: https://octopus.com/docs/best-practices/octopus-administration.md This section covers our recommendations and implementation guides for anyone responsible and accountable for administrating Octopus Deploy for their organization. This section is applicable for Octopus Cloud and self-hosted Octopus Deploy (Octopus Server).
The topics covered are: - [Users, roles, and teams](/docs/best-practices/octopus-administration/users-roles-and-teams) - [Partition Octopus with Spaces](/docs/best-practices/octopus-administration/partition-octopus-with-spaces) - [Offload work onto Workers](/docs/best-practices/octopus-administration/worker-configuration) - [Ongoing maintenance](/docs/best-practices/octopus-administration/ongoing-maintenance) # AI-Powered DevOps with Octopus Deploy Source: https://octopus.com/docs/octopus-ai.md Artificial intelligence, and GenAI technologies in particular, are transforming our technology landscape. For continuous delivery, AI allows us to solve previously unsolvable problems: parsing complex log files, diagnosing root-cause failures, and providing intelligent remediation; natural-language exploration of your software landscape, simplifying auditing, compliance, and standardization; and agentic workflows providing intelligent glue between your essential software services. At Octopus, we are bringing these capabilities to the best continuous delivery tool on the market, lowering risk, improving efficiency, and accelerating your software delivery. Our AI capabilities span three complementary areas, each designed to address different aspects of your deployment lifecycle - from interactive assistance and troubleshooting to fully autonomous DevOps operations. ## Current Capabilities ### Octopus AI Assistant The Octopus AI Assistant is an intelligent companion for your existing Octopus Deploy instance, providing context-aware guidance and automation directly where you work. Currently in Early Access, this feature offers immediate value through four core capabilities that streamline your deployment workflows. The AI Assistant excels at providing instant tier-0 support by drawing from comprehensive Octopus Deploy documentation to answer questions in natural language.
Whether you're exploring the difference between environments and tenants or learning how to set up automated deployments, the Assistant provides immediate, contextual answers without disrupting your workflow. Beyond answering questions, the AI Assistant can generate complete deployment projects from simple text descriptions. Describe what you want to deploy - whether it's an Azure Web App or AWS Lambda function - and the Assistant creates fully configured projects following established best practices. When deployments encounter issues, the Assistant's failure analysis capability examines logs, configurations, and error details to provide specific troubleshooting guidance and actionable resolution steps. The Assistant also helps monitor your Octopus Deploy instance for optimization opportunities, identifying unused variables, suggesting organizational improvements through tenant tags, and helping maintain healthy configurations that scale with your needs. [Learn more about Octopus AI Assistant](/docs/octopus-ai/assistant) ### Octopus MCP Server The Octopus MCP ([Model Context Protocol](https://modelcontextprotocol.io/)) server represents a significant leap forward in AI integration capabilities. Built on Anthropic's open standard for connecting AI assistants to external data sources and tools, the MCP server will enable AI assistants like Claude to interact directly with your Octopus Deploy infrastructure. The Octopus MCP server provides similar capabilities to the Octopus AI Assistant, but provides further benefits: - You can use it with your client and model of choice. - It can work alongside other MCP servers to accomplish more complex orchestrations across Octopus and your other essential software services. The MCP server provides tools designed to solve key use-cases within change management, troubleshooting, and administration audit & compliance. 
The MCP server architecture ensures that your deployment data remains secure while enabling powerful AI-assisted workflows. All interactions are logged and auditable, maintaining the compliance and governance standards your organization requires. [Learn more about Octopus MCP Server](/docs/octopus-ai/mcp) ### Octopus AI Recovery Agent Rapid recovery is crucial for Continuous Delivery. The Recovery Agent helps you recover from failures fast by using AI to analyze causes, suggest fixes, and, in the future, execute remediation steps - keeping you in control while handling the heavy lifting. :::div{.info} Octopus Server communicates with foundation models via `https://aiproxy.octopus.com`. For more details, see [outbound requests](/docs/security/outbound-requests). ::: [Learn more about Octopus Recovery Agent](/docs/octopus-ai/recovery-agent) ## Getting Started Begin your AI-powered DevOps journey today with the Octopus AI Assistant, or the Octopus MCP Server. As an Early Access participant, you'll help shape these features while gaining early access to capabilities that will transform how teams deploy software. - [Getting Started with Octopus AI Assistant](/docs/octopus-ai/assistant/getting-started) - [Getting Started with Octopus MCP Server](https://github.com/OctopusDeploy/mcp-server?tab=readme-ov-file#-installation) ## Security and Privacy All AI capabilities in Octopus Deploy are built with security and privacy as foundational principles. We never use customer data to train AI models, all sensitive values remain protected through our existing security model, and AI features respect your current permission boundaries. The AI Assistant backend is [open source](https://github.com/OctopusSolutionsEngineering/OctopusCopilot) and has been independently audited, with results available through our [trust center](https://trust.octopus.com/). 
# Octopus AI Assistant Source: https://octopus.com/docs/octopus-ai/assistant.md Octopus AI Assistant integrates AI functionality directly into the Octopus Deploy interface to accelerate your DevOps workflows. Whether you're getting started with Octopus Deploy, troubleshooting deployment failures, or optimizing existing configurations, the AI Assistant provides context-aware guidance and automation to help you work more efficiently. ![Octopus AI Assistant Screenshot](/docs/img/octopus-ai-assistant/octopus-ai-assistant.png) To begin, see our [Getting Started](/docs/octopus-ai/assistant/getting-started) guide for setup instructions. ## Core capabilities The Octopus AI Assistant provides four main capabilities designed to support different aspects of your deployment workflow: 1. Tier-0 support 2. Prompt-based project creation 3. Deployment failure analysis 4. Best practices optimization Combined with these features, the AI Assistant makes managing deployments easier and keeps things running smoothly. ### Tier-0 support Get instant answers to common questions about Octopus Deploy without searching through documentation. The AI Assistant draws from the complete Octopus Deploy documentation to provide natural language responses to your questions. Example prompts: - `What is a project in Octopus Deploy?` - `How do I use runbooks?` - `Explain the difference between environments and tenants` - `How do I set up automated deployments?` This capability is particularly valuable for new team members getting started with Octopus Deploy, or when you need quick clarification on specific features without leaving your workflow. ### Prompt-based project creation Create fully configured deployment projects from simple text descriptions. Instead of manually setting up configurations, processes, and environments, describe what you want to deploy and let the AI Assistant generate a complete project using proven best practices. 
Example prompts: - `Create an Azure Web App project called "My Web App"` - `Generate an AWS Lambda project with QA and Production environments` [Learn more about project creation.](/docs/octopus-ai/assistant/project-creation) ### Deployment failure analysis When deployments fail, get immediate analysis of what went wrong and actionable steps to resolve issues. The analyzer examines deployment logs, process configuration, and error details to provide specific troubleshooting guidance. Example prompts: - `Why did the deployment fail?` - `Help me understand this deployment error` [Learn more about deployment failure analysis.](/docs/octopus-ai/assistant/deployment-failure-analyzer) ### Best practices optimization Identify optimization opportunities and maintain healthy configurations across your Octopus Deploy instance. The advisor analyzes your setup and provides recommendations for improving scalability, reducing technical debt, and following established best practices. Example prompts: - `Find unused variables in this project` - `Check for duplicate project variables` - `Suggest tenant tags to make tenants more manageable` [Learn more about the best practices adviser.](/docs/octopus-ai/assistant/best-practices-adviser) ## Custom prompts for organizational needs For teams with specific internal processes, you can enhance any AI Assistant capability with [custom prompts](/docs/octopus-ai/assistant/custom-prompts) that embed your organization's knowledge, procedures, and support channels directly into the AI responses. ## FAQ **Q: What data is collected?** A: We collect prompts entered into Octopus AI Assistant. All logs are sanitized to remove personally identifiable information. We do not log: - Prompt responses - Sensitive values - Octopus configurations **Q: Is my data used to train AI models?** A: No, we do not train AI models on customer data. 
We use the Azure OpenAI platform, and [Azure does not use customer data to train models either](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy?tabs=azure-portal). **Q: How do I turn off Octopus AI Assistant?** A: Disabling or uninstalling the Chrome extension will disable Octopus AI Assistant. **Q: How much does the service cost?** A: The service is free, although pricing may change in the future. **Q: How secure is the service?** A: Octopus AI Assistant is implemented as an external service that accesses Octopus via the API. This means Octopus AI Assistant does not have access to sensitive values, because the API never exposes them. It also means access to the Octopus instance is limited by the existing permissions of the current user. Additionally, Octopus AI Assistant shares the same backend as the Octopus Copilot Extension, which has been audited by an independent external security team. The report is available via the [trust center](https://trust.octopus.com/). **Q: Can I see the source code?** A: Yes. The Octopus AI Assistant backend source code is available from [GitHub](https://github.com/OctopusSolutionsEngineering/OctopusCopilot). **Q: Do I need to sign up for an account?** A: No, Octopus AI Assistant is self-contained and only requires access to an Octopus instance. **Q: Is Octopus AI Assistant a supported service?** A: Yes. Reach out to the [Octopus Support team](https://octopus.com/support) for any issues or questions. ## Troubleshooting ### Prompts fail immediately without appearing to send The extension may be set to restrict site access. In Chrome, click the kebab menu, select `Extensions` -> `Manage Extensions`. Enable `Developer mode` using the toggle in the top right. Find `Octopus AI Assistant` and click `Details`. Under `Site access`, ensure `On all sites` is selected.
# Octopus Cloud Source: https://octopus.com/docs/octopus-cloud.md > We host Octopus for you Octopus Cloud is the easiest way to run Octopus Deploy. It has the same functionality as Octopus Server, delivered as a highly available, scalable, secure SaaS application hosted for you. You get the best Octopus experience from the experts in hosting, maintaining, scaling, and securing Octopus Deploy. We recommend Octopus Cloud over Octopus Server for the following reasons: ## Minimize downtime and increase resilience - We take care of any issues that arise so you don't have to worry about downtime, data loss, or disruptions. - Our 24x7 monitoring, alerting, and proactive issue detection mean problems get addressed before they impact your operations. - Automatic recovery with comprehensive [disaster recovery](https://octopus.com/docs/octopus-cloud/disaster-recovery#disaster-recovery-procedure) procedures means your data is safe and quickly recoverable in case of a failure. ## Secure and compliant out-of-the-box - Peace of mind with internationally recognized security standards, ensuring business compliance and protecting your reputation. - ISO 27001 and SOC 2 certifications with regular audits, ensuring your deployments and data are safe and secure. - Data and application layer isolation per customer, so there are no noisy neighbors or cross-talk. - OpenID Connect (OIDC) support so you can integrate with other services securely and without needing to manage credentials. - Network security and static IP addresses. - Flexibility of hosting region choices. - With IP allow listing, you can control who can reach Octopus Cloud, securing your entry points and enhancing compliance. With Azure Private Links, communication is routed via a secure, private network path, ensuring the data doesn’t traverse the public internet. When combined, these controls eliminate external exposure, reduce attack surface, and align with stringent compliance and security policies.
## Increase team efficiency - Stay ahead of the competition with less time maintaining software and more time shipping new features. - Automatic upgrades to the latest version of Octopus, including improvements, bug fixes, security enhancements, and [new features](https://octopus.com/whatsnew) before they’re released on Octopus Server. - Expert support by experienced Octopus Support and Cloud Operations teams who swiftly resolve issues and provide regular maintenance. ## Effortlessly scale your deployments - Your teams can scale their use without the hassle of resource management and additional infrastructure costs. - We provide appropriate resources and automatic scaling for seamless performance. - Ample bandwidth and storage resources to handle your data and traffic without additional costs. - Windows and Linux dynamic workers provide flexible and scalable deployment options. :::figure ![Octopus Cloud architecture diagram](/docs/img/octopus-cloud/images/octopus-cloud-architecture-diagram.png) ::: When you’re ready, [start a free account](https://octopus.com/free-signup) to explore Octopus. ## Are there any differences between Octopus Cloud and Server? In providing Octopus as a service, some configuration and diagnostic functions in Octopus Cloud differ from the Octopus Server. These include disabling specific items related to the cloud server's provisioning and management. You can learn more on the [migration page](/docs/octopus-cloud/migrations#differences-between-octopus-cloud-and-octopus-server). ## Where is Octopus Cloud hosted? \{#octopus-cloud-hosting-locations} Octopus Cloud runs in the Microsoft Azure cloud. When you create an Octopus Cloud instance, we provision a Linux container to run the Octopus Server in, along with all the other resources we need to provide Octopus as a service. We deploy this to one of our Kubernetes clusters. 
We host Octopus Cloud in the following Azure regions: - West US 2 - West Europe - Australia East If you’d like to move an existing Octopus Cloud instance to one of the other regions or request a new region, please [contact us](https://octopus.com/company/contact). ## Octopus Cloud storage thresholds \{#octopus-cloud-storage-limits} Octopus Cloud instances are subject to storage thresholds. - Maximum file storage for artifacts, task logs, packages, package cache, and event exports is limited to 1 TB. - Maximum database size for configuration data (for example, projects, deployment processes, and inline scripts) is limited to 100 GB. - Maximum size for any single package is 5 GB. - [Retention policies](/docs/administration/retention-policies) default to 30 days, but you can change this figure as needed. If you think you will exceed these thresholds, please [contact our Sales team](https://octopus.com/company/contact). Please see our [Cloud pricing FAQ](https://octopus.com/pricing/faq#are-there-any-storage-limits) for further details. ## Learn more - [Octopus Cloud FAQs](/docs/octopus-cloud/frequently-asked-questions) - [Octopus Cloud pricing FAQs](https://octopus.com/pricing/faq#are-there-any-differences-between-cloud-and-server) - [Octopus pricing page](https://octopus.com/pricing/overview) - [Getting started with Octopus Cloud](/docs/octopus-cloud/getting-started-with-cloud) - [Migrating to Octopus Cloud](/docs/octopus-cloud/migrations) - [Octopus Cloud blog posts](https://octopus.com/blog/tag/octopus-cloud/1) - [Trust center](https://octopus.com/company/trust) # Octopus MCP Source: https://octopus.com/docs/octopus-ai/mcp.md ### Octopus MCP Server The Octopus MCP ([Model Context Protocol](https://modelcontextprotocol.io/)) server represents a significant leap forward in AI integration capabilities. 
Built on Anthropic's open standard for connecting AI assistants to external data sources and tools, the MCP server enables AI assistants like Claude to interact directly with your Octopus Deploy infrastructure. The Octopus MCP server provides similar capabilities to the Octopus AI Assistant, but offers further benefits: - You can use it with your client and model of choice. - It can work alongside other MCP servers to accomplish more complex orchestrations across Octopus and your other essential software services. The MCP server provides tools designed to solve key use-cases within change management, troubleshooting, administration, audit & compliance, and standardization at scale. The MCP server architecture ensures that your deployment data remains secure while enabling powerful AI-assisted workflows. All interactions are logged and auditable, maintaining the compliance and governance standards your organization requires. The Octopus MCP Server is open source, and anyone can contribute to it. It's available for free on GitHub at [https://github.com/OctopusDeploy/mcp-server](https://github.com/OctopusDeploy/mcp-server) ## Security The Octopus MCP Server works by communicating with your Octopus instance's REST API via a secure HTTPS connection. It leverages Octopus Server's existing API Key security mechanism, ensuring all interactions are authenticated, authorized for the permissions associated with the API Key, and audited. To learn more, read our [Octopus REST API](/docs/octopus-rest-api) documentation.
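The API Key mechanism described above can be sketched in a few lines. This is an illustrative Python sketch, not part of the MCP server's codebase: the `build_api_request` helper, server URL, and API key are placeholders, while `X-Octopus-ApiKey` is the standard Octopus authentication header.

```python
# Illustrative sketch of the authentication mechanism described above:
# every request to the Octopus REST API carries the API key in the
# X-Octopus-ApiKey header. Server URL and key below are placeholders.

def build_api_request(server_url: str, api_key: str, path: str) -> tuple[str, dict]:
    """Return the URL and headers for an authenticated Octopus API call."""
    url = f"{server_url.rstrip('/')}/api{path}"
    headers = {"X-Octopus-ApiKey": api_key}
    return url, headers

url, headers = build_api_request("https://your-octopus.com", "API-PLACEHOLDER", "/spaces")
print(url)      # https://your-octopus.com/api/spaces
print(headers)  # {'X-Octopus-ApiKey': 'API-PLACEHOLDER'}
```

Because the key travels as a plain request header, HTTPS is essential: it keeps the key confidential in transit, which is why the MCP server requires an HTTPS-accessible Octopus instance.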
## 🚀 Installation ### Requirements - Node.js >= v20.0.0 - Octopus Deploy instance that can be accessed by the MCP server via HTTPS - Octopus Deploy API Key ### Configuration Full example configuration (for Claude Desktop, Claude Code, and Cursor): ```json { "mcpServers": { "octopusdeploy": { "command": "npx", "args": ["-y", "@octopusdeploy/mcp-server", "--api-key", "YOUR_API_KEY", "--server-url", "https://your-octopus.com"] } } } ``` The Octopus MCP Server is typically configured within your AI client of choice. It is packaged as an npm package and executed via Node's `npx` command. Your configuration specifies the `npx` command plus a set of arguments that name the Octopus MCP Server package and, unless they are available as environment variables, supply the Octopus Server URL and API key. The command line invocation you will be configuring is one of the two following variants. Either the bare command: ```bash npx -y @octopusdeploy/mcp-server ``` with configuration provided via environment variables: ```bash OCTOPUS_API_KEY=API-KEY OCTOPUS_SERVER_URL=https://your-octopus.com ``` Or with configuration supplied via the command line: ```bash npx -y @octopusdeploy/mcp-server --server-url https://your-octopus.com --api-key YOUR_API_KEY ``` For detailed documentation visit [the official GitHub repo](https://github.com/OctopusDeploy/mcp-server). # Octopus REST API Source: https://octopus.com/docs/octopus-rest-api.md > Octopus is built API-first The opinions and functionality in Octopus are designed to make you productive, but they might not work for everyone. So we've built plenty of escape hatches. You can change the default behaviors, run custom scripts in your deployment process, or use a comprehensive API that does everything the UI can do. We built Octopus Deploy API-first. This means that Octopus is built in layers. All data and operations are available over its REST API.
We built the Octopus Web Portal on top of this API, so all the data and operations you can see and perform in the Octopus Web Portal, you can perform over the REST API. Octopus connects to build servers, scripts, external applications, and anything else with its REST API. We designed the Octopus REST API to: 1. Be friendly and easy to figure out. 2. Be [hypermedia driven](http://en.wikipedia.org/wiki/HATEOAS), using links and the occasional [URI template](http://tools.ietf.org/html/rfc6570). 3. Be comprehensive - 100% of the actions you perform via the Octopus UI, you can perform via the API. 4. Provide a great developer experience through [API clients](#api-clients) and [detailed examples](/docs/octopus-rest-api/examples). ## Octopus Command Line (CLI) The Octopus CLI is a command line tool that builds on top of the [Octopus Deploy REST API](https://octopus.com/docs/octopus-rest-api). With the Octopus CLI, you can push your application packages for deployment as either zip or NuGet packages, and manage your environments, deployments, projects, and workers. The Octopus CLI can be used on Windows, Mac, Linux, and Docker. For installation options and direct downloads, visit the [CLI Readme](https://github.com/OctopusDeploy/cli/blob/main/README.md). For more information see [Octopus Command Line (CLI)](https://octopus.com/docs/octopus-rest-api/cli). ## Next steps Follow our [getting started with the Octopus REST API](/docs/octopus-rest-api/getting-started) guide or learn [how to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key). 
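The hypermedia-driven design described above means clients discover endpoints from link documents rather than hard-coding routes. As an illustration only, the payload below is an invented, abbreviated stand-in for a root API document, and `resolve_link` is a hypothetical helper that naively drops URI-template expressions instead of expanding them per RFC 6570.

```python
# Sketch of following a named hypermedia link. The sample document mimics
# the shape of a "Links" dictionary whose values may be RFC 6570 URI
# templates; all values here are invented for illustration.
import re

sample_root = {
    "Application": "Octopus Deploy",
    "Links": {
        "Projects": "/api/Spaces-1/projects{/id}{?skip,take}",
        "Environments": "/api/Spaces-1/environments{/id}{?skip,take}",
    },
}

def resolve_link(document: dict, name: str) -> str:
    """Look up a named link and strip unused URI-template expressions."""
    template = document["Links"][name]
    return re.sub(r"\{[^}]*\}", "", template)

print(resolve_link(sample_root, "Projects"))  # /api/Spaces-1/projects
```

Resolving links by name like this is what lets the server move or version an endpoint without breaking clients, which is the practical payoff of the hypermedia design goal.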
# OpenID Connect authentication Source: https://octopus.com/docs/security/authentication/oidc-authentication.md Octopus Deploy supports authentication using OpenID Connect (OIDC) from a third-party identity provider, such as: - [Microsoft Entra ID](/docs/security/authentication/oidc-authentication/configuring-microsoft-entra) - [Google Workspaces](/docs/security/authentication/oidc-authentication/configuring-google-apps) - [Okta](/docs/security/authentication/oidc-authentication/configuring-okta) - [Active Directory Federation Services (AD FS)](/docs/security/authentication/oidc-authentication/configuring-adfs) - [Keycloak](/docs/security/authentication/oidc-authentication/configuring-keycloak) - [Authentik](/docs/security/authentication/oidc-authentication/configuring-authentik) - [Ping Identity](/docs/security/authentication/oidc-authentication/configuring-ping) - etc To use OIDC authentication with Octopus you will need to: 1. Configure your Identity Provider (IdP) to trust your Octopus Deploy instance, by setting up a client ID & secret. 2. Configure your Octopus Deploy instance to trust and use OIDC as an authentication provider. 3. Optionally assign users to **Teams** based on the role information provided by your identity provider. ## Configure your identity provider 1. Create a new client application in your identity provider, noting down the following details: - Issuer URL (usually the root URL of the identity provider) - Client ID - Client Secret :::div{.hint} Note that Keycloak supports multiple realms on the same server, so the `Issuer URL` needs to include the realm path, e.g. `https://keycloak-server/realms/octopus` rather than just `https://keycloak-server`. ::: 1. Configure at least one **Redirect URL** to be `https://your-octopus-url/api/users/authenticatedToken/GenericOidc` (replacing `https://your-octopus-url` with the URL of your Octopus Server) 1.
If you want Octopus users to be automatically assigned to teams, configure your IdP to send a list of roles or groups in a custom claim. The default role claim is `groups`, but you can configure this if required. ## Configure OIDC in Octopus Deploy 1. Navigate to **Configuration ➜ Settings ➜ OpenID Connect** and populate the following fields: - **Enabled** should be set to `Yes`. - **Role Claim Type** is optional, but should specify the claim that your identity provider returns containing role or group assignment information. - **Username Claim Type** should be set to the claim that your identity provider uses as a username. - **Resource** is optional, but allows you to specify a resource identifier for the identity provider. For example, AD FS requires a resource identifier to map custom claims. - **Scopes** has a default value, but can be configured if your identity provider requires a custom scope. The scope of `groups` is optional and can be omitted, but the scopes of `openid profile email` are required for Octopus Deploy to retrieve user information from your identity provider. - **Display Name** can be used to customize the appearance of the button on the Octopus Deploy login screen. Use a name that your users will recognize for this identity provider. - **Issuer** should be set to the URL of your identity provider. - **Client ID** and **Client Secret** should be set based on the newly created client application in your identity provider. :::div{.hint} Note that the value of **Client Secret** cannot be retrieved once set - it can only be changed or deleted. ::: - **Allow Auto User Creation** determines if Octopus Deploy should automatically create user accounts, or only allow authentication for users that already exist in Octopus Deploy. 2. Click **Save** to apply the changes. 3. If you sign out of Octopus Deploy, you should now see a new button on the login screen to authenticate with the OIDC provider.
## Assign external groups or roles to Octopus teams (optional) If you followed the optional steps to include role information as a claim, you can assign those roles or groups to **Teams** in the Octopus Portal. 1. Open the Octopus Portal and select **Configuration ➜ Teams**. 1. Either create a new **Team** or choose an existing one. 1. Under the **Members** section, select the option **Add External Group/Role**. ![Adding Octopus Teams from external providers](/docs/img/security/authentication/images/add-octopus-teams-external.png) [Can't see the "Add External Group/Role" button?](#add-external-grouprole-button-is-missing) 1. Create a mapping from the groups or roles in your identity provider to a team in Octopus Deploy. In this example, we need to supply `octopusTesters` as the **Group/Role ID** and `OctopusTesters` as the **Display Name**. ![Add Octopus Teams Dialog](/docs/img/security/authentication/images/add-octopus-teams-external-dialog.png) The **Group/Role ID** value must match the value of the claim in the security token. The matching is performed without case sensitivity. 1. Save your changes by clicking the **Save** button. ## Troubleshooting We do our best to log warnings to your Octopus Server log whenever possible. If you are having difficulty configuring Octopus to authenticate with your identity provider, be sure to check your [server logs](/docs/support/log-files) for warnings. Your identity provider likely also has logging functionality, so check to see what the logs report on that side too. ### Double- and triple-check your configuration Unfortunately, security-related configuration is sensitive to everything. Make sure: - You don't have any typos or copy-paste errors. - Remember things are case-sensitive. - Remember to remove or add slash characters - they matter too! ### Check OpenID Connect metadata is working You can see the OpenID Connect metadata by going to the Issuer address in your browser and adding `/.well-known/openid-configuration` to the end.
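You can also sanity-check the discovery metadata programmatically. The sketch below is illustrative only: the document is an abbreviated, made-up example of what `<issuer>/.well-known/openid-configuration` might return; the field names follow the OpenID Connect Discovery specification.

```python
# Sketch of validating an OIDC discovery document. The sample below uses a
# made-up issuer; a real response comes from fetching
# <issuer>/.well-known/openid-configuration over HTTPS.
import json

sample_metadata = json.loads("""
{
  "issuer": "https://idp.example.com",
  "authorization_endpoint": "https://idp.example.com/authorize",
  "token_endpoint": "https://idp.example.com/token",
  "jwks_uri": "https://idp.example.com/.well-known/jwks.json",
  "scopes_supported": ["openid", "profile", "email", "groups"]
}
""")

# Fields a relying party such as Octopus typically needs to drive the login
# flow. The issuer value should match the Issuer configured in Octopus
# exactly -- scheme, host, and any trailing slash all matter.
required = ["issuer", "authorization_endpoint", "jwks_uri"]
missing = [field for field in required if field not in sample_metadata]
print("missing fields:", missing)  # missing fields: []
```

If any of these fields are absent, or the `issuer` value differs from what you entered in Octopus, that mismatch is a likely cause of login failures.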
### Inspect the contents of the security token :::div{.warning} **Inspection of a JWT is impossible with OAuth code flow with PKCE** Please note: It's impossible to inspect the JWT within the Network tab of your browser's developer tools if you use OAuth code flow with PKCE (with a Client Secret specified in your OIDC configuration in Octopus). If you'd like to use it for troubleshooting, you would need to remove the Client Secret, which would revert to Implicit flow authentication. We have plans to improve this in an upcoming version of Octopus, allowing more debug information to be visible while using PKCE. ::: Perhaps the contents of the security token sent back by your identity provider aren't exactly the way Octopus expected, especially certain claims which may be missing or named differently. This will usually result in the external user incorrectly mapping to a different Octopus User than expected. The best way to diagnose this is to inspect the JSON Web Token (JWT) which is sent from your IdP to Octopus via your browser. To inspect the contents of your security token: 1. Open the Developer Tools of your browser and enable Network logging, making sure network logging is preserved across requests. 2. In Chrome Dev Tools this setting is called "Preserve Log". In Firefox it is called "Persist Logs". ![The preserve log checkbox in the browser's developer tools](/docs/img/security/authentication/images/5866122.png) 3. Attempt to sign into Octopus using OIDC and find the HTTP POST coming back to your Octopus instance from your identity provider on a route like `/api/users/authenticatedToken/GenericOidc`. You should see an `id_token` field in the HTTP POST body. 4. Grab the contents of the `id_token` field and paste that into [https://jwt.io/](https://jwt.io/) which will decode the token for you. ![A screenshot of the JWT debugger](/docs/img/security/authentication/images/5866123.png) 5.
Octopus uses most of the data to validate the token, but primarily uses the `sub`, `email` and `name` claims. If these claims are not present you will likely see unexpected behavior. 6. If you are not able to figure out what is going wrong, please send a copy of the decoded payload to our [support team](https://octopus.com/support) and let them know what behavior you are experiencing. ### Octopus user accounts are still required Octopus still requires a [user account](/docs/security/users-and-teams/) so you can assign those people to Octopus teams and subsequently grant permissions to Octopus resources. If you have enabled **Allow Auto User Creation**, Octopus will automatically create a [user account](/docs/security/users-and-teams) based on the profile information returned in the security token, which includes an **Identifier**, **Name**, and **Email Address**. :::div{.hint} **How Octopus matches external identities to user accounts** When the security token is returned from the external identity provider, Octopus looks for a user account with a matching **Identifier**. If there is no match, Octopus looks for a user account with a matching **Email Address**. If a user account is found, the External Identifier will be added to the user account for next time. If a user account is not found, Octopus will create one using the profile information in the security token. ::: :::div{.success} **Already have Octopus user accounts?** If you already have Octopus user accounts and you want to enable external authentication, simply make sure the Email Address matches in both Octopus and the external identity provider. This means your existing users will be able to sign in using an external identity provider and still belong to the same teams in Octopus. ::: ### Add External Group/Role button is missing At least one authentication provider must be enabled and have valid configuration for this button to show. 
If the **Add External Group/Role** button is missing, check your OIDC configuration. Ensure that it is **Enabled**, there are no errors and that the required fields are populated. ### Getting permissions If you are installing a clean instance of Octopus Deploy you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command: ```powershell Octopus.Server.exe admin --username USERNAME --email EMAIL ``` The most important part of this command is the email, as usernames are not necessarily included in the claims from the external providers. When the user logs in, the matching logic must be able to align their user record based on the email from the external provider or they will not be granted permissions. # Packaging applications Source: https://octopus.com/docs/packaging-applications.md > How to package your applications for deployment with Octopus Deploying software with Octopus often involves deploying packages. This section explains how to package your applications for deployment with Octopus. Before you can deploy a package you need to: 1. Give your package a [package ID](#package-id). 1. Choose and apply a [versioning scheme](#version-numbers). 1. Create the package in a [supported format](#supported-formats). 1. Host the package in a [package repository](/docs/packaging-applications/package-repositories). ## Package ID {#package-id} Package IDs must conform to the following specifications: - Package IDs must be unique within your Octopus Deploy instance. - Package IDs consist of one or more segments separated by one of the following separator characters: `-` `.` `_`. - Segments contain only alphanumeric characters. For instance, the package ID in this sample package is `hello-world`.
> [hello-world.1.0.0.zip](https://octopus.com/images/docs/hello-world.1.0.0.zip) Avoid using numbers in your package ID as it could result in the version number being incorrectly parsed. ## Version numbers {#version-numbers} Octopus supports [Semantic Versioning](/docs/packaging-applications/create-packages/versioning/#semver), unless you are deploying artifacts to a [Maven repository](/docs/packaging-applications/package-repositories/maven-feeds) in which case you will need to use [Maven Versions](/docs/packaging-applications/create-packages/versioning/#maven). The version number needs to be applied to your package after the package ID and before the file extension. For instance, the version number in our sample package is **1.0.0**. > [hello-world.1.0.0.zip](https://octopus.com/images/docs/hello-world.1.0.0.zip) Learn more about [versioning schemes](/docs/packaging-applications/create-packages/versioning). ## Package dependencies and structure When you package your applications, you need to include all the binaries that are required to run the application, and structure the package the way you want it to appear after it has been extracted. ## Supported formats {#supported-formats} It is important that your packages have the correct **file extension** because Octopus uses the **file extension** to determine the correct extraction algorithm to use with your packages. Only NuGet packages will have extra metadata like release notes and description extracted from the package metadata. | Package type | File Extensions | Built-in repository | External feed | Notes on external feeds | | --------------- | ------------------------ | ------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | NuGet | .nupkg | Yes | Yes | Learn about NuGet on the [official NuGet website](http://docs.nuget.org/docs/start-here/overview).
| | Zip | .zip | Yes | Yes | | | JAR WAR EAR RAR | .jar, .war, .ear, .rar | Yes | Yes | Learn about [Maven Feeds](/docs/packaging-applications/package-repositories/maven-feeds) and Octopus. RAR files are Java Resource Adapter files, not compressed archive formats. | | Tar | .tar | Yes | Yes | | | Tar + Gzip | .tgz, .tar.gz, .tar.Z | Yes | Yes | | | Tar + Bzip2 | .tar.bz, .tar.bz2, .tbz* | Yes | | | | NPM | .tgz, .tar.gz | | Yes | | | Docker Image | | | Yes | [Docker Registries](/docs/packaging-applications/package-repositories/docker-registries/). Learn about [Docker](/docs/deployments/docker) and Octopus. | | Helm Chart | .tgz | Yes | Yes | [Helm Chart Repositories](https://helm.sh/docs/topics/chart_repository/). Learn about [Helm](/docs/deployments/kubernetes/helm-update) and Octopus. | ## Learn more - [Create packages](/docs/packaging-applications/create-packages) - [Build servers](/docs/packaging-applications/build-servers) - [Package repositories](/docs/packaging-applications/package-repositories) # Platform Engineering Source: https://octopus.com/docs/best-practices/platform-engineering.md This section describes how to implement platform engineering practices with Octopus to manage the configuration of one or more instances and spaces at scale. ## Further reading The book [DevEx as a Service with Platform Engineering](https://github.com/OctopusSolutionsEngineering/PlatformEngineeringBook/) provides a high level discussion of platform engineering and how it can be used to positively impact the developer experience of DevOps teams. The [DevOps engineer’s handbook](https://octopus.com/devops/) also discusses platform engineering in the context of DevOps teams.
# Platform Hub Source: https://octopus.com/docs/platform-hub.md > An overview of Platform Hub [Platform Hub](https://octopus.com/blog/introducing-platform-hub) is a new capability in Octopus that helps platform teams standardize how software is delivered across teams using connected templates and enforceable policies. Together, these features create a governance layer for software delivery, making it easier for platform teams to scale best practices, reduce drift, and deliver with confidence. You can create and manage your templates and policies from Platform Hub. - [Process templates](/docs/platform-hub/templates/process-templates) are reusable sets of deployment steps that can be shared across multiple spaces in Octopus Deploy. - [Project templates](/docs/platform-hub/templates/project-templates) are reusable project blueprints that teams can use as a starting point for new projects. Teams supply the required parameter values but can't modify the deployment process. - [Policies](/docs/platform-hub/policies) in Octopus are designed to ensure compliance and governance by default, making it easier to enforce pre- and post-deployment controls at scale. :::div{.hint} To access Platform Hub, users must have **PlatformHubEdit** and **PlatformHubView** permissions enabled. These permissions can only be assigned to system teams. [System administrators](/docs/security/users-and-teams/default-permissions#DefaultPermissions-SystemAdministrator) and [system managers](/docs/security/users-and-teams/default-permissions#DefaultPermissions-SystemManager) have **PlatformHubEdit** and **PlatformHubView** permissions enabled by default. ::: To get started, configure your version control. :::figure ![The overview page for Platform Hub](/docs/img/platform-hub/platform-hub-overview.png) ::: :::div{.hint} This Git repository will be used for all features in Platform Hub. ::: ## Accounts in Platform Hub You can create and manage accounts in Platform Hub that can be used inside templates.
You can create the following account types by visiting the **Accounts** area in Platform Hub. - AWS Accounts - Azure Accounts - Google Cloud Account - Username/Password - Generic OIDC To use these accounts inside a process template, you must create a parameter that references these accounts first. :::figure ![Accounts in Platform Hub](/docs/img/platform-hub/platform-hub-accounts.png) ::: ## Git Credentials in Platform Hub You can create and manage Git credentials in Platform Hub by visiting the Git credentials area in the Platform Hub navigation menu. You can use Git credentials inside your templates by selecting them from a dropdown in the step field that requires them. :::figure ![Platform Hub Git credentials area](/docs/img/platform-hub/platform-hub-git-credential.png) ::: ## GitHub App Connections in Platform Hub You can connect your GitHub accounts to Platform Hub using the Octopus GitHub App. This lets you use a GitHub App Connection when configuring Platform Hub's version control settings, without needing a personal access token. :::div{.hint} GitHub App Connections in Platform Hub can only be used to configure Platform Hub's version control settings. These GitHub Connections are scoped only to Platform Hub, and cannot be used in spaces. They also cannot be used in steps in process templates or project templates currently. ::: ### Set up a GitHub App Connection To configure a GitHub App Connection in Platform Hub, navigate to **GitHub Connections** and follow the same steps as [connecting a GitHub account in a space](/docs/projects/version-control/github#connecting-a-github-account). :::figure ![GitHub Connections page in Platform Hub](/docs/img/platform-hub/platform-hub-github-connections.png) ::: ### Use a GitHub App Connection for version control Once you've configured a connection, you can select it when setting up Platform Hub's version control. 1. Navigate to **Version Control** in Platform Hub. 2. Select "GitHub". 3. 
Under GitHub Repository, choose your GitHub Connection and the repository where your Platform Hub configurations will be stored. 4. Save your settings. :::figure ![Version control configuration in Platform Hub using a GitHub App Connection](/docs/img/platform-hub/platform-hub-version-control-github-connection.png) ::: # Compliance Reports Source: https://octopus.com/docs/platform-hub/compliance-reports.md > An overview of Compliance Reports The Compliance Reports area is a new addition to Platform Hub, designed to transform raw audit data into actionable insights. While you can already access granular logs via the [Audit](/docs/security/users-and-teams/auditing) tab or integrate with SIEM tools using the [audit stream](/docs/security/users-and-teams/auditing/audit-stream), Compliance Reports streamline reporting for Governance, Risk, and Compliance (GRC) teams. Compliance Reports provide centralized, audit-ready visibility into your software delivery lifecycle. They are designed to help GRC teams quickly verify security controls and maintain a clear trail of deployment activity across your entire instance. :::div{.hint} Compliance reports are currently in Alpha. If you encounter any issues, please contact our [support team](https://octopus.com/support). ::: Our reports focus on answering two critical questions: 1. **Deployment Permissions**: Which users are authorized to deploy specific projects, and to which environments? 2. **Deployment History**: Who initiated deployments, and when did those events occur? ## Deployment Permissions The Deployment Permissions report provides a comprehensive map of access across your instance, allowing you to audit which users possess the authority to trigger deployments. By cross-referencing user roles with specific projects and environments, this report helps GRC teams validate that the principle of least privilege is being enforced and ensure that only authorized personnel can deploy.
### Running the report To run the Deployment Permissions report, navigate to **Platform Hub -> Reports**, and choose the **Deployment Permissions** card: :::figure ![The Compliance reports page where users select a report to run](/docs/img/platform-hub/compliance/compliance-reports-tiles.png) ::: Select one or more environments from your Octopus Spaces to view deployment permissions for: :::figure ![The environment selector for the deployments permission report](/docs/img/platform-hub/compliance/deployment-permissions-report-choose-envs.png) ::: Finally, click the **Create Report** button to view the projects for the selected environments, and the users who can deploy to each project in that environment: :::figure ![The deployments permission report once it has been run](/docs/img/platform-hub/compliance/deployment-permissions-report-executed.png) ::: You can expand each project to see a list of the users who have permission to deploy to that environment. You can also click the **Download CSV** button to generate a CSV file of the executed report. ## Deployment History The Deployment History report captures deployment activity across your instance, giving GRC teams a streamlined view of who deployed what, where, and when. It supports point-in-time compliance reviews and post-incident investigations by letting you filter deployments by date, space, environment, project, tenant, or deployer. ### Running the Deployment History report To run the Deployment History report, navigate to **Platform Hub -> Reports**, and choose the **Deployment History** card: :::figure ![The Compliance reports page where users select a report to run](/docs/img/platform-hub/compliance/compliance-reports-tiles.png) ::: Select a date range from the dropdown — choose from **Today**, **Last 7 Days**, **Last 30 Days**, **Last 90 Days**, **Last 365 Days**, or **Custom** to specify exact dates. 
To narrow the results further, click **Show advanced filters** and select any combination of Spaces, Environments, Projects, Tenants, and Deployers. Click **Reset** to clear the advanced filters. The report lists each deployment with its status, project, version, space, environment, tenant, deployer, and the time it was deployed: :::figure ![The deployment history report showing deployments for the selected date range and filters](/docs/img/platform-hub/compliance/deployment-history-report-executed.png) ::: Click **Download** to generate a CSV file of the executed report, or use the **Share** icon to share the current view, including any applied filters. ## Feedback We'd love to get [feedback on what reports you need](https://roadmap.octopus.com/submit-idea) to make compliance easier! # Policies Source: https://octopus.com/docs/platform-hub/policies.md > An overview of Policies Policies in Octopus are designed to ensure compliance and governance by default, making it easier to enforce deployment controls at scale. This approach allows you to shift compliance left, alleviating the burden of manual audits and enabling you to maintain high standards across your organization. With policies, you can enforce organization-wide compliance across teams and regions, moving governance out of Confluence docs and Slack threads and into the heart of your delivery pipeline. Using Rego, you can write custom policy checks that align with your requirements, block non-compliant deployments, and access detailed audit logs of policy evaluation events. This method ensures compliance is not an afterthought; it is embedded within every deployment pipeline, providing a seamless and efficient way to uphold governance standards across all activities. ## When to use Policies Policies streamline the enforcement of standards across all deployments by automating compliance checks and governance measures. 
Consider implementing policies if: - You want to ensure that every deployment conforms to predefined standards without manual effort. - You wish to manage these standards centrally, allowing for consistent application across your organization and easy updating of standards. While policies may not be necessary in every deployment scenario, they are invaluable if maintaining compliance and security is a priority. By embedding policies into your deployments, you can minimize risks and ensure that all teams are aligned with your organizational standards. ## What can you enforce with policies? Policies give you the flexibility to enforce virtually any standard across your deployments and runbook runs. When an execution starts, Octopus provides detailed information about the deployment or runbook run to the policy engine, allowing you to evaluate it against your requirements. Common use cases include: - Requiring specific steps (like manual interventions or approvals) in production deployments - Ensuring all packages come from approved branches - Validating that certain steps aren't skipped or disabled - Enforcing step ordering requirements - Checking that deployments meet environment-specific criteria - Verifying projects and tenants have required tags By default, policies scope to both deployment processes and runbook runs unless you specify otherwise. ## Getting started All policies are written in Rego and saved as an OCL file under a policies folder in your Platform Hub repository. If you need to set up your Platform Hub repository, see [Platform Hub](/docs/platform-hub/). For a comprehensive guide to Rego, please visit the official [documentation](https://www.openpolicyagent.org/docs/policy-language). If you would like to jump straight to examples that are more representative of the deployment scenario you want to enforce, please visit our [examples page](/docs/platform-hub/policies/examples). 
In our example below, we are writing a policy that checks for the existence of a manual intervention step whenever deployments go to production. ## Building your first policy ### 1. Create your policies file To get started, navigate to the Platform Hub inside your Octopus instance and click on the Policies section. To create your first policy, click the `Create Policy` button. :::figure ![An empty policies list in the Platform Hub](/docs/img/platform-hub/policies/policies-getting-started.png) ::: ### 2. Select a starter policy You will be presented with the Create Policy modal. The first step is to select a starter policy to base your new policy on. To continue, click the `Next` button. :::figure ![A modal to select a starter policy](/docs/img/platform-hub/policies/policies-create-starter-modal.png) ::: :::div{.hint} If you want to start with the most basic policy, choose Create Blank Policy. ::: ### 3. Give your policy a name You can then set the name of your policy. Octopus will generate a valid slug for your policy based on the name you provide. You can edit this slug before clicking the `Done` button. :::figure ![A modal to create a new policy](/docs/img/platform-hub/policies/policies-create-modal.png) ::: :::div{.hint} The slug cannot be changed once a policy is created. ::: ### 4. Update your policy details This will create the Policy file in your Platform Hub repository and then take you to the edit Policy page, where you can update the following details for your policy. - **Name** - a short, memorable, unique name for this policy. - **Description** - an optional description. - **Violation Reason** - a custom message provided to users when they fail to meet the conditions of a policy. - **Violation Action** - determines what happens when a deployment or runbook run doesn't comply with the policy. - **Scope Rego** - Rego to scope whether a policy should be evaluated for a particular deployment or runbook run. 
- **Conditions Rego** - Rego to determine the rules that a deployment or runbook run will be evaluated against. :::figure ![The form used to edit a policy](/docs/img/platform-hub/policies/policies-edit-getting-started.png) ::: :::div{.hint} - ```violation_reason``` can be overridden by the value of the ```reason``` property defined in the output result of the conditions Rego code. - ```violation_action``` can be overridden by the value of the ```action``` property defined in the output result of the conditions Rego code. ::: ### 5. Define the policy scope You’ll now need to define the policy's scope in Rego. Octopus will provide data about your deployments to the policy engine to use during evaluation. When you are writing your Rego code for scoping or conditions, this input data is available under the value ```input.VALUE```. This scope section of the policy defines the package name, which must match the underlying .ocl file name the policy is stored in. By default, the policy evaluates to false. The scope will evaluate to true if the deployment is going to the Production environment, for the ACME project, and in the Default space - all three conditions must be true at the same time.
For example, Octopus provides the environment details that you are deploying to. ```json { "Environment": { "Id": "Environments-1", "Name": "Development", "Slug": "development", "Tags": ["country/australia", "animal/octopus"] } } ``` To use the environment name in your Rego, you would add the following: ```ruby input.Environment.Name == "Development" ``` Our example applies only to deployments and runbook runs to the production environment for the ACME project, in the default space. **All Rego code has to have a package defined, which is the policy slug.** :::div{.warning} - You cannot rename **evaluate**, it must be called **evaluate**. - The package name must be the same as your policy file name. ::: ```ruby package manual_intervention_required default evaluate := false evaluate := true if { input.Environment.Name == "Production" input.Project.Name == "ACME" input.Space.Name == "Default" } ``` ### 6. Define the policy conditions After defining your scope, you must specify the policy rules. These rules are written in Rego. Octopus will check the results of your Rego code to determine if a deployment complies with the policy. The result should contain a composite value with the properties **allowed** and an optional **reason** and **action**. In this example, we will set the default rule result to be non-compliant. Any deployment that does not meet the policy rules will be prevented from executing. This conditions section of the policy defines the package name, which must match the slug for your policy. By default, the policy evaluates to false. The condition will evaluate to true if the deployment contains the required steps. :::div{.warning} - You cannot rename **result**, it must be called **result**. - The package name must be the same as your policy file name. 
::: ```ruby package manual_intervention_required default result := {"allowed": false} ``` :::div{.info} Full details on the data available for policy scoping and conditions can be found under the [schema page](/docs/platform-hub/policies/schema). :::
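As noted in the hint above, the optional `reason` and `action` properties returned by the conditions Rego override the policy's configured violation reason and violation action. A minimal sketch (the message text is illustrative, and `action` must be `"block"` or `"warn"` per the output result schema):

```ruby
package manual_intervention_required

default result := {"allowed": false}

# Non-compliant result that supplies its own violation reason
# and downgrades the violation action to a warning.
result := {"allowed": false, "reason": "Production deployments must include a manual intervention step", "action": "warn"} if {
    not has_manual_step
}

result := {"allowed": true} if {
    has_manual_step
}

has_manual_step if {
    some step in input.Steps
    step.ActionType == "Octopus.Manual"
}
```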
### 7. Check for a deployment step After you’ve set the default state, you’ll need to define the policy rules that will update the **result** state to be true so the deployment can execute. In this example, the deployment must contain at least one manual intervention step. We can do this by checking that the step's `ActionType` is `"Octopus.Manual"`.
```ruby package manual_intervention_required default result := {"allowed": false} result := {"allowed": true} if { some step in input.Steps step.ActionType == "Octopus.Manual" } ```
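Because users can elect to skip steps when scheduling a deployment, you may also want the rule to fail when the manual intervention step exists but has been skipped. A sketch of that stricter variant, using the `SkippedSteps` input:

```ruby
package manual_intervention_required

default result := {"allowed": false}

# Allow only when a manual intervention step exists
# and has not been skipped for this deployment.
result := {"allowed": true} if {
    some step in input.Steps
    step.ActionType == "Octopus.Manual"
    not step.Id in input.SkippedSteps
}
```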
After your policy details have been finalized, you will need to commit, publish, and activate your policy for it to be available for evaluation. ### 8. Saving a Policy Once you've finished making changes to your policy, you can commit them to save the changes to your Git repository. You can either **Commit** with a description or quick commit without one. :::figure ![The commit experience for a policy](/docs/img/platform-hub/policies/policies-commit-experience.png) ::: ### 9. Publishing a Policy Once committed, you must publish the policy for your changes to take effect. You will have three options to choose from when publishing changes: - Major changes (breaking) - Minor changes (non-breaking) - Patch (bug fixes) :::div{.hint} The first time you publish a policy, you can only publish a major version. ::: :::figure ![Publish experience for a policy](/docs/img/platform-hub/policies/policies-publishing.png) ::: ### 10. Activating a policy You must activate the policy before it can be evaluated. An activated policy can later be deactivated to stop it from being evaluated. :::div{.hint} Activation settings can be updated at any time from the Versions tab on the edit policy page. ::: :::figure ![Activation status for a policy](/docs/img/platform-hub/policies/policies-activation.png) ::: ### 11. Finalize and test your policy You’ve now defined a basic policy to ensure a manual intervention step is present when deploying to an environment. You can test this policy by customizing the values in the scope block, and then deploying to an environment. If you choose not to include the manual intervention step in your process, you will see errors in the task log and project dashboards when you try to run the deployment. All policy evaluations will appear in the Audit log (**Configuration** → **Audit**) with the "Compliance Policy Evaluated" event group filter applied. 
Audit logs and Server Tasks will only appear for deployments within the policy's scope.
:::div{.hint} - If you wish to see more comprehensive examples for other deployment scenarios, please visit the [examples page](/docs/platform-hub/policies/examples). - If you wish to see the schema of inputs available for policies, please visit the [schemas page](/docs/platform-hub/policies/schema). ::: ## Policy evaluation information If you want to see what information was provided to the policy engine when it evaluates a deployment, you can do so in the task log. Every deployment, whether it succeeded or failed due to a policy evaluation, will show information in the following places: 1. Task logs :::figure ![The task logs showing policy audit records](/docs/img/platform-hub/policies-task-log.png) :::
2. Project dashboards :::figure ![Dashboards showing policy errors](/docs/img/platform-hub/policies-dashboard-notification.png) :::
3. Audit records :::figure ![Audit log containing policy evaluation records](/docs/img/platform-hub/policies-audit-log.png) :::
You can see what information was evaluated at the time of policy evaluation by using the verbose option in the task logs. This is useful if you want to troubleshoot a policy and see if it is evaluating deployments correctly. :::figure ![Verbose options shown in task logs](/docs/img/platform-hub/policies-verbose-task-log.png) ::: # Policies examples Source: https://octopus.com/docs/platform-hub/policies/examples.md > Examples of policies for different deployment scenarios There are many different deployment scenarios that you might need to evaluate against policy conditions. You can use this page as a reference document to help you quickly get started with enforcing policies. ## How to use these examples You can create policies using the editor available when editing a policy in the Platform Hub or by writing OCL files directly in your Git repository. The examples below show the Rego code for both the scope and conditions sections that you'll need. ### Using the policy editor When creating a policy using the policy editor in Platform Hub: 1. Enter the policy name, description, violation action and violation reason in the UI fields 2. Add the package name at the top of both the Scope and Conditions editors - this must match your policy's slug 3. Copy the scope Rego code into the Scope editor (including the package declaration) 4. Copy the conditions Rego code into the Conditions editor (including the package declaration) For example, if your policy slug is `manual_intervention_required`, you need to include `package manual_intervention_required` at the top of both editors. ### Using OCL files If you prefer to write policies as OCL files in your Git repository, see the [Writing policies as OCL files](#writing-policies-as-ocl-files) section at the end of this page for the complete format. 
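To illustrate the editor steps above, suppose your policy slug is the hypothetical `block_disabled_steps`; both editors must then begin with the same `package block_disabled_steps` declaration. A sketch of the Scope editor contents:

```ruby
package block_disabled_steps

# Scope editor: only evaluate deployments to the production environment.
default evaluate := false

evaluate if {
    input.Environment.Slug == "production"
}
```

And the matching Conditions editor contents:

```ruby
package block_disabled_steps

# Conditions editor: fail the policy if any step is disabled.
default result := {"allowed": true}

result := {"allowed": false} if {
    some step in input.Steps
    step.Enabled == false
}
```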
## Scoping examples The following examples will cover various ways that you can scope your policies: ### Scope policy to a space or many spaces ```ruby package scope_example default evaluate := false evaluate if { # input.Space.Name == "" - If you want to use Space Name # input.Space.Id == "" - If you want to use Space Id # input.Space.Slug in ["", ""] - If you want to check multiple Spaces input.Space.Slug == "" } ``` ### Scope policy to an environment or many environments ```ruby package scope_example default evaluate := false evaluate if { # input.Environment.Name == "" - If you want to use Environment Name # input.Environment.Id == "" - If you want to use Environment Id # input.Environment.Slug in ["", ""] - If you want to check multiple Environments input.Environment.Slug == "" } ``` ### Scope policy to a project or many projects ```ruby package scope_example default evaluate := false evaluate if { # input.Project.Name == "" - If you want to use Project Name # input.Project.Id == "" - If you want to use Project Id # input.Project.Slug in ["", ""] - If you want to check multiple Projects input.Project.Slug == "" } ``` ### Scope policy to all except a particular project ```ruby package scope_example default evaluate := true evaluate := false if { # input.Project.Slug == "" - If you want to exclude one project # input.Project.Slug in ["", ""] - If you want to exclude multiple projects input.Project.Slug == "" } ``` ### Scope policy to runbook runs only ```ruby package scope_example default evaluate := false evaluate if { input.Runbook } ``` ### Scope policy to a runbook and its runs ```ruby package scope_example default evaluate := false evaluate if { # input.Runbook.Name == "" - If you want to use Runbook Name # input.Runbook.Snapshot == "" - If you want to use Runbook Snapshot # input.Runbook.Id in ["", ""] - If you want to check multiple Runbooks input.Runbook.Id == "" } ``` ### Scope policy to deployments only ```ruby package scope_example default evaluate := 
false evaluate if { not input.Runbook } ``` ## Conditions examples The following examples will cover different deployment scenarios that can be enforced with policies: ### Check that a step isn't skipped in a deployment ```ruby package all_steps_are_not_skipped default result := {"allowed": false} # Check all steps are not skipped result := {"allowed": true} if { count(input.SkippedSteps) == 0 } ``` ### Check that all deployment steps are enabled ```ruby package all_steps_must_be_enabled default result := {"allowed": true} # Check if any steps are disabled result := {"allowed": false} if { some step in input.Steps step.Enabled == false } ``` ### Check that a step exists at the beginning or at the end during execution ```ruby package check_step_location default result := {"allowed": false} # Step is at the start result := {"allowed": true} if { input.Steps[0].Source.SlugOrId == "" } # Step is at the end result := {"allowed": true} if { input.Steps[count(input.Steps)-1].Source.SlugOrId == "" } ``` ### Check that a Step Template isn't skipped or disabled during a deployment ```ruby package step_template_is_executed default result := {"allowed": false} result := {"allowed": true} if { some step in input.Steps step.Source.Type == "Step Template" step.Source.SlugOrId == "" not step.Id in input.SkippedSteps step.Enabled == true } ``` ### Check that a Step Template is of a certain version when deployments occur ```ruby package step_template_with_version_is_executed default result := {"allowed": false} result := {"allowed": true} if { some step in input.Steps step.Source.Type == "Step Template" step.Source.SlugOrId == "" step.Source.Version == "" not step.Id in input.SkippedSteps step.Enabled == true } ``` ### Check that a Process Template is present, and not skipped Process template can include multiple steps ```ruby package process_template_is_executed default result := {"allowed": false} result := {"allowed": true} if { count(process_template_steps) > 0 every step in 
process_template_steps { not step.Id in input.SkippedSteps } } process_template_steps := [step | some step in input.Steps step.Source.Type == "Process Template" step.Source.SlugOrId == "" ] ``` ### Check that a Process Template is enabled Process template can include multiple steps ```ruby package process_template_is_enabled default result := {"allowed": false} result := {"allowed": true} if { count(process_template_steps) > 0 every step in process_template_steps { step.Enabled } } process_template_steps := [step | some step in input.Steps step.Source.Type == "Process Template" step.Source.SlugOrId == "" ] ``` ### Check that a Process Template is at the beginning or end of a process ```ruby package process_template_location_check default result := {"allowed": false} # Process Template is at the start result := {"allowed": true} if { input.Steps[0].Source.Type == "Process Template" input.Steps[0].Source.SlugOrId == "" } # Process Template is at the end result := {"allowed": true} if { input.Steps[count(input.Steps)-1].Source.Type == "Process Template" input.Steps[count(input.Steps)-1].Source.SlugOrId == "" } ``` ### Check that a Process Template is of a certain version when deployments occur Process template can include multiple steps ```ruby package process_template_with_version_is_executed default result := {"allowed": false} result := {"allowed": true} if { count(process_template_steps) > 0 every step in process_template_steps { semver.compare(step.Source.Version, "") == 0 step.Enabled not step.Id in input.SkippedSteps } } process_template_steps := [step | some step in input.Steps step.Source.Type == "Process Template" step.Source.SlugOrId == "" ] ``` ### Check that a Process Template exists before or after certain steps ```ruby package process_template_step_ordering default result := {"allowed": false} # Process Template exists before a specific step result := {"allowed": true} if { some i, step in input.Steps step.Source.Type == "Process Template" 
step.Source.SlugOrId == "" some j, target_step in input.Steps target_step.Source.SlugOrId == "" i < j } # Process Template exists after a specific step result := {"allowed": true} if { some i, step in input.Steps step.Source.Type == "Process Template" step.Source.SlugOrId == "" some j, target_step in input.Steps target_step.Source.SlugOrId == "" i > j } ``` ### Check if a built-in step happens before another built-in step ```ruby package builtin_step_before_builtin default result := {"allowed": false} result := {"allowed": true} if { some i, first_step in input.Steps first_step.ActionType == "" some j, second_step in input.Steps second_step.ActionType == "" i < j } ``` ### Check if a built-in step happens after another built-in step ```ruby package builtin_step_after_builtin default result := {"allowed": false} result := {"allowed": true} if { some i, first_step in input.Steps first_step.ActionType == "" some j, second_step in input.Steps second_step.ActionType == "" i > j } ``` ### Check if a custom step template happens before a built-in step ```ruby package step_template_before_builtin default result := {"allowed": false} result := {"allowed": true} if { some i, template_step in input.Steps template_step.Source.Type == "Step Template" template_step.Source.SlugOrId == "" some j, builtin_step in input.Steps builtin_step.ActionType == "" i < j } ``` ### Check if a custom step template happens after a built-in step ```ruby package step_template_after_builtin default result := {"allowed": false} result := {"allowed": true} if { some i, template_step in input.Steps template_step.Source.Type == "Step Template" template_step.Source.SlugOrId == "" some j, builtin_step in input.Steps builtin_step.ActionType == "" i > j } ``` ### Check that a deployment contains a manual intervention step ```ruby package manualintervention default result := {"allowed": false} result := {"allowed": true} if { some step in input.Steps step.ActionType == "Octopus.Manual" not 
manual_intervention_skipped } result := {"allowed": false, "reason": "Manual intervention step cannot be skipped in production environment"} if { manual_intervention_skipped } manual_intervention_skipped if { some step in input.Steps step.Id in input.SkippedSteps step.ActionType == "Octopus.Manual" } ``` ### Check that a deployment has packages from the main branch only ```ruby package packages_from_main_branch default result := {"allowed": true} all_packages := [pkg | some step in input.Steps; some pkg in step.Packages] result := {"allowed": false} if { count(all_packages) > 0 some pkg in all_packages pkg.GitRef != "refs/heads/main" } ``` ### Check that no steps run in parallel ```ruby package no_parallel_steps default result := {"allowed": false} result := {"allowed": true} if { # All steps should have StartAfterPrevious, not StartWithPrevious every execution in input.Execution { execution.StartTrigger != "StartWithPrevious" } } ``` ### Check that a release version is greater than required minimum This policy will block deployments in production environments, but allow deployments with warnings in other environments. The violation action for this policy has been set to `warn` as a default. 
```ruby package specific_release_version default result := {"allowed": false} result := {"allowed": false, "action": "block"} if { production version_less_than_required } result := {"allowed": false} if { not production version_less_than_required } result := {"allowed": true} if { not version_less_than_required } production if { startswith(input.Environment.Slug, "prod") } version_less_than_required if { semver.compare(input.Release.Version, "1.0.0") < 0 } ``` ### Check that release is based on the main branch ```ruby package main_branch_release default result := {"allowed": false} result := {"allowed": true} if { input.Release.GitRef == "refs/heads/main" } ``` ### Check that runbook is from the main branch ```ruby package main_branch_runbook default result := {"allowed": false} result := {"allowed": true} if { input.Runbook.GitRef == "refs/heads/main" } ``` ### Check that the project and tenant have a tag from the specified tag set Example of a policy that checks tags for Environments, Tenants and Projects. ```ruby package tags default result := {"allowed": false} result := {"allowed": true} if { has_size_tags has_lang_tags } has_size_tags if { some tag in input.Tenant.Tags startswith(tag, "size/") } has_lang_tags if { some tag in input.Project.Tags startswith(tag, "lang/") } ``` ## Writing policies as OCL files If you prefer to write policies directly as OCL files in your Git repository instead of using the UI editor, you can create `.ocl` files in the `policies` folder of your Platform Hub Git repository. ### OCL file format The OCL file format wraps the Rego code with metadata about the policy. 
Here's the structure: ```ruby name = "Policy Name" description = "Policy description" violation_reason = "Custom message shown when policy fails" violation_action = "warn" or "block" scope { rego = <<-EOT package policy_file_name default evaluate := false evaluate if { # Your scope conditions here } EOT } conditions { rego = <<-EOT package policy_file_name default result := {"allowed": false} result := {"allowed": true} if { # Your policy conditions here } EOT } ``` ### Important notes for OCL files - The file name must match the package name in your Rego code (e.g., `checkformanualintervention.ocl` requires `package checkformanualintervention`) - You cannot use dashes in your policy file name - The package name must be identical in both the scope and conditions sections - You must include the `package` declaration in the Rego code when using OCL files # Schema for Policies Source: https://octopus.com/docs/platform-hub/policies/schema.md > A list of the inputs that are provided to the policy engine ## Input Schema Octopus has a set number of inputs that are provided to evaluate policies against deployments. 
The following is the full schema that is passed into the engine to evaluate deployments: ```json { "$schema": "http://json-schema.org/draft-07/schema#", "title": "Octopus Policy input schema", "type": "object", "properties": { "Environment": { "type": "object", "properties": { "Id": { "type": "string" }, "Name": { "type": "string" }, "Slug": { "type": "string" }, "Tags": { "type": "array", "items": { "type": [ "string" ] } } }, "required": [ "Id", "Name", "Slug", "Tags" ] }, "Project": { "type": "object", "properties": { "Id": { "type": "string" }, "Name": { "type": "string" }, "Slug": { "type": "string" }, "Tags": { "type": "array", "items": { "type": [ "string" ] } } }, "required": [ "Id", "Name", "Slug", "Tags" ] }, "Space": { "type": "object", "properties": { "Id": { "type": "string" }, "Name": { "type": "string" }, "Slug": { "type": "string" } }, "required": [ "Id", "Name", "Slug" ] }, "Tenant": { "type": "object", "properties": { "Id": { "type": "string" }, "Name": { "type": "string" }, "Slug": { "type": "string" }, "Tags": { "type": "array", "items": { "type": [ "string" ] } } }, "required": [ "Id", "Name", "Slug", "Tags" ] }, "ProjectGroup": { "type": "object", "properties": { "Id": { "type": "string" }, "Name": { "type": "string" }, "Slug": { "type": "string" } }, "required": [ "Id", "Name", "Slug" ] }, "SkippedSteps": { "type": "array", "items": { "type": [ "string" ] } }, "Steps": { "type": "array", "items": { "type": "object", "properties": { "Id": { "type": "string" }, "Slug": { "type": "string" }, "ActionType": { "type": "string" }, "Enabled": { "type": "boolean" }, "IsRequired": { "type": "boolean" }, "Source": { "type": "object", "properties": { "Type": { "type": "string" }, "SlugOrId": { "type": "string" }, "Version": { "type": "string" } }, "required": [ "Type", "SlugOrId" ] }, "Packages": { "type": "array", "items": { "type": "object", "properties": { "Id": { "type": "string" }, "Name": { "type": "string" }, "Version": { "type": "string" }, 
"GitRef": { "type": "string" } }, "required": [ "Id", "Name" ] } } }, "required": [ "Id", "Slug", "ActionType", "Enabled", "IsRequired", "Source" ] } }, "Release": { "type": "object", "properties": { "Id": { "type": "string" }, "Name": { "type": "string" }, "Version": { "type": "string" }, "GitRef": { "type": "string" } }, "required": [ "Id", "Name", "Version" ] }, "Runbook": { "type": "object", "properties": { "Id": { "type": "string" }, "Name": { "type": "string" }, "Snapshot": { "type": "string" }, "GitRef": { "type": "string" } }, "required": [ "Id", "Name", "Snapshot" ] }, "Execution": { "type": "array", "items": { "type": [ "object" ], "properties": { "StartTrigger": { "type": "string" }, "Steps": { "type": "array", "items": { "type": [ "string" ] } } }, "required": [ "StartTrigger", "Steps" ] } } }, "required": [ "Environment", "Project", "Space", "SkippedSteps", "Steps", "ProjectGroup", "Execution" ] } ``` ## Output Result Schema Octopus expects the conditions Rego code to define a result object that conforms to the following schema: ```json { "$schema": "http://json-schema.org/draft-07/schema#", "title": "Policy Result Schema", "type": "object", "properties": { "allowed": { "type": "boolean" }, "reason": { "type": "string" }, "action": { "type": "string", "enum": ["block", "warn"] } }, "required": ["allowed"] } ``` # Policies best practices Source: https://octopus.com/docs/platform-hub/policies/best-practices.md > Best practices for creating policies within Platform Hub ## Policies administration ### Establish a naming standard Use a [ Prefix ] - [ Policy Name ] format that makes the policy's purpose easy for everyone to understand. The [ Prefix ] should reflect when the policy will run. For example: - Deployments - [ Policy Name ] for policies designed to run during deployments only. - Runbook Runs - [ Policy Name ] for policies designed to run during runbook runs only. 
- Deployments and Runbook Runs - [ Policy Name ] for policies designed to run during deployments or runbook runs.

### Turn on SIEM audit log streaming

All policy evaluations are logged to the audit log. Ensure [audit log streaming](/docs/security/users-and-teams/auditing/audit-stream) is enabled to stream those evaluations to Splunk, SumoLogic, or an OpenTelemetry collector. SIEM tools can provide alerting and visualizations that you can customize to your requirements.

## Creating and Updating Policies

### Start restrictive, then make generic

Consider a policy that will block the execution of deployments and runbook runs. By default, that policy applies to all deployments and runbook runs. When creating a new policy, be as restrictive as possible by limiting it to:

- A specific hook - such as a deployment or a runbook run (not both)
- A specific project

That will limit a policy's "blast radius." Once you are confident the policy is working as intended, extend the policy to cover more projects or tenants. When acceptable, switch the policy to project groups or spaces.

### Provide a verbose failure reason

A policy violation, for example when a policy blocks a deployment or runbook run, will be the first experience most users have with policies in Octopus Deploy. Provide a verbose failure reason to help the user self-service the solution.

:::figure
![An example of a verbose policy violation error message to help users self-service](/docs/img/platform-hub/policies/policy-violation-user-message.png)
:::

### Check for both the existence of steps and if they’ve been skipped

Policies can be written to check for the existence of specific steps within a deployment or runbook process. It's important to remember that in many cases those deployments and runbook processes have existed for years. Octopus Deploy has the capability to require a step and prevent it from being skipped.
But it is unlikely that _all_ of those required steps in _all_ of your deployment and runbook processes have been configured to prevent them from being skipped. It is not enough for a policy to simply check for the existence of a specific step. The policy must also ensure users don't elect to skip the required step (for whatever reason).

:::figure
![An example of a step that can be skipped before scheduling a deployment or runbook run](/docs/img/platform-hub/policies/a-step-that-can-be-skipped-violating-a-policy.png)
:::

The resulting policy will have two conditions.

:::figure
![An example of a policy that checks both that the step exists and that it isn't skipped](/docs/img/platform-hub/policies/example-of-policy-with-two-conditions.png)
:::

### Check for parallel execution

Steps can be configured to run in parallel or sequentially. If your organization requires sequential execution for compliance or troubleshooting purposes, create a policy to check the `Execution` array in the input schema. Each execution phase has a `StartTrigger` property that indicates when it should run:

- `StartAfterPrevious` - Steps run sequentially
- `StartWithPrevious` - Steps run in parallel

To enforce sequential execution, check that no execution phases have `StartTrigger` set to `StartWithPrevious`. See the [examples page](/docs/platform-hub/policies/examples) for a sample policy.

# Troubleshooting

Source: https://octopus.com/docs/platform-hub/policies/troubleshooting.md

> Known issues that you may run into

## Troubleshooting common issues

You may run into known issues when using policies. We've put together this page to help you diagnose and fix common issues.

### Windows Server missing dependency

If you try to load or create a policy, you might see the following error: "The Compliance Policy engine failed to load. There may be missing dependencies on the machine hosting Octopus Server."
:::figure
![An error callout when trying to load the policies page](/docs/img/platform-hub/policies/policies-missing-dependency.png)
:::

If your host machine is running Windows Server, you are missing the Visual C++ Redistributable. To resolve this error, install the latest redistributable version for your machine; see [Visual C++ dependency](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170#latest-supported-redistributable-version) for more information.

# Process Templates

Source: https://octopus.com/docs/platform-hub/templates/process-templates.md

> An overview of Process Templates

## Overview

Process templates are reusable sets of deployment steps that can be shared across multiple spaces in Octopus Deploy. Instead of copying and pasting deployment processes across teams and applications, which often leads to configuration drift, unnecessary duplication, and operational debt, you create a single source of truth that any project can consume. By abstracting your best practices for deployments into Process Templates, you make it easy for teams to follow standards and accelerate delivery.

To create or manage your process templates, navigate to Platform Hub. If you haven't set up your Git repository, you must do so before creating a process template. If you've already created templates or are joining an existing team, you'll see the existing templates on the template overview.

:::figure
![The Process Templates Overview page where users create process templates](/docs/img/platform-hub/process-template-overview.png)
:::

Before you can define the deployment process for your template, you must create the template:

1. Navigate to Process Templates in Platform Hub.
2. Give the process template a **Name** and an optional **Description**.
3. Create your process template.
:::figure
![The experience after creating the template with a name and description](/docs/img/platform-hub/process-template-first-creation.png)
:::

You've created your process template; now define its deployment process. A deployment process is a set of steps the Octopus Server orchestrates to deploy your software. Each process template has a single deployment process. You can use Octopus's built-in steps to define this process for your process template.

:::figure
![The add step experience for a process template](/docs/img/platform-hub/process-template-add-step.png)
:::

Some steps look different inside a process template: instead of letting you define a value directly, they ask for a parameter. This happens when a step needs a resource that Platform Hub cannot define, such as a Worker Pool, which must be defined inside a project. These fields accept parameters so the consuming project can supply the values the process template needs.

:::figure
![The run a script step asks for a worker pool parameter instead of a worker pool](/docs/img/platform-hub/process-template-step-example.png)
:::

:::div{.warning}
Our initial release of Process Templates does not include support for a few built-in steps.
:::

Once you have set up a deployment process, you can use it in any space for a deployment or runbook.

## Parameters

Parameters help you easily manage and apply the correct values during a deployment or runbook run that uses a process template. Using parameters, you can use the same process template across your projects and tailor the inputs based on the project's needs.

For a full reference of supported parameter types and default values, see [Template parameters](/docs/platform-hub/templates/parameters).

To create a parameter, navigate to the **Parameters** tab on a process template and add a new parameter.
:::figure
![The parameters section in a process template](/docs/img/platform-hub/process-template-parameters.png)
:::

### Sensitive parameter defaults

:::div{.hint}
The ability to add default values for Sensitive/password box parameters is available from **Octopus 2026.1**.
:::

Unlike the other parameters, sensitive default values are stored securely in the database with a unique GUID identifier. This identifier is used in the process template to reference the default sensitive value in the database. Because of this approach, sensitive default values are supported in CaC workflows. Scoping for Sensitive/password box parameters is not currently supported.

You can set a default value for a sensitive parameter by navigating to the parameters tab of your process template, entering the value, and committing your changes. When the template is saved, sensitive default values are stored encrypted in the database with a unique identifier. In the OCL, the parameter block will look something like this:

```hcl
parameter "Example Sensitive Parameter" {
    display_settings = {
        Octopus.ControlType = "Sensitive"
    }
    help_text = "An Example Sensitive Parameter"
    label = "An Example Sensitive Parameter"

    value "10d00c16-c905-43fa-90cd-088e22b31751" {}
}
```

The GUID value in the OCL is a reference to the database-stored sensitive value. When the process template is used in a project or runbook, it will retrieve the sensitive value from the database.

### Parameter scoping

Only Account parameters can be scoped by environment. You can scope them to any environment across your Octopus instance.

:::div{.hint}
When a process template is used inside a project, the project-supplied values will take precedence over the process template-provided ones for overlapping scopes. This includes unscoped project-supplied values. For more information on how the precedence works, please visit the [troubleshooting page](/docs/platform-hub/templates/process-templates/troubleshooting).
:::

:::figure
![The account parameter allowing scoping to environments present across Octopus instance](/docs/img/platform-hub/process-templates-account-scoping.png)
:::

## Saving, publishing, and sharing

Once you've configured your process template, see [Publishing and sharing templates](/docs/platform-hub/templates/publishing-and-sharing) for how to commit, publish, and share it.

## A Hello world deployment process in a process template

To define a simple deployment process in Octopus that executes a hello world script on the Octopus Server, complete the following steps:

1. Navigate to **Platform Hub**.
2. Add a process template.
3. Name the template, for instance, "Hello World", and add an optional description.
4. Add a deployment step.
5. Choose the type of step you'd like to add to filter the available steps.
6. Find the **Run a Script** step and add it to your deployment process.
7. In the Process Editor, give the step a name, for instance "Run a Hello World script".
8. In the Execution Location section, use the **Run on the worker pool parameter** option.
9. Create a Worker Pool parameter.
10. Add the Worker Pool parameter to the **Worker Pool** field.
11. Paste the following PowerShell script into the **Inline Source Code** editor:

    ```powershell
    Write-Host "Hello, World!"
    ```

12. Commit your template.
13. Publish and Share your template.
14. Visit a project and its deployment process.
15. Choose the process template you just published.
16. Choose the Worker Pool in the parameters tab.
17. Add any steps before or after the process template.

You can now deploy this process to say "Hello, World!".

# Troubleshooting

Source: https://octopus.com/docs/platform-hub/templates/process-templates/troubleshooting.md

> Known issues that you may run into

## Troubleshooting common issues

You may run into a few issues when setting up your process templates. We've put together this page to help you diagnose and fix common issues.
### Step support

Process templates currently support most Octopus steps. The following steps aren't supported yet:

1. Deploy a Bicep Template
2. AWS S3 Create Bucket
3. AWS ECS

This document will be updated as additional step support is added.

### Parameters and Variables

If you are migrating an existing process to be used as a process template, you may run into a few issues when using parameters and variables in scripts. When copying a script from a step in a project into a process template step, you must convert project variables to use process template parameters. System variables will still work as normal.

For example, consider this script that directly references project and system variables via `OctopusParameters`.

```powershell
$packagePath = $OctopusParameters["Octopus.Action.Package[Trident.Database].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Connection.String"]
$environmentName = $OctopusParameters["Octopus.Environment.Name"]
$reportPath = $OctopusParameters["Project.Database.Report.Path"]

cd $packagePath
$appToRun = ".\Octopus.Trident.Database.DbUp"
$generatedReport = "$reportPath\UpgradeReport.html"

& $appToRun --ConnectionString="$connectionString" --PreviewReportPath="$reportPath"

New-OctopusArtifact -Path "$generatedReport" -Name "$environmentName.UpgradeReport.html"
```

The following variables should be updated to reference process template parameters instead of project variables:

1. `$packagePath`
2. `$connectionString`
3. `$reportPath`

The `$environmentName` variable is fine, as system variables will continue to work as normal.
The updated script will be:

```powershell
$packagePath = $OctopusParameters["Octopus.Action.Package[Template.Database.Package].ExtractedPath"]
$connectionString = $OctopusParameters["Template.Database.ConnectionString"]
$environmentName = $OctopusParameters["Octopus.Environment.Name"]
$reportPath = $OctopusParameters["Template.Database.ChangeReportDirectory"]

cd $packagePath
$appToRun = ".\Octopus.Trident.Database.DbUp"
$generatedReport = "$reportPath\UpgradeReport.html"

& $appToRun --ConnectionString="$connectionString" --PreviewReportPath="$reportPath"

New-OctopusArtifact -Path "$generatedReport" -Name "$environmentName.UpgradeReport.html"
```

### Parameter scoping

Project-supplied values for parameters will always take precedence over process-template-supplied ones. A couple of scenarios demonstrate the scoping precedence:
**1. Scoped value provided by the project and the process template.**

| Origin | Name | Value | Scope |
|------------------|--------------|-------------|-------------|
| Process Template | AzureAccount | Account-123 | Development |
| Project | AzureAccount | Account-124 | Development |

When deploying to the **Development** environment, **Account-124** would be used.
**2. Scoped value provided by the process template and an unscoped value provided by the project.**

| Origin | Name | Value | Scope |
|------------------|--------------|-------------|-------------|
| Process Template | AzureAccount | Account-123 | Development |
| Project | AzureAccount | Account-124 | |

When deploying to the **Development** environment, **Account-124** would be used.
**3. Scoped process template value and scoped project value for different environments.**

| Origin | Name | Value | Scope |
|------------------|--------------|-------------|-------------|
| Process Template | AzureAccount | Account-123 | Development |
| Project | AzureAccount | Account-124 | Staging |

- When deploying to the **Development** environment, **Account-123** would be used.
- When deploying to the **Staging** environment, **Account-124** would be used.
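The precedence rules these scenarios illustrate can be sketched in a few lines of code. The following Python helper is illustrative only (it is not part of Octopus or its API); it assumes each value is a record with a name, a value, and an optional environment scope:

```python
def resolve_parameter(name, environment, project_values, template_values):
    """Illustrative sketch of the scoping precedence (not Octopus code).

    Values supplied by the project always win over values supplied by the
    process template. Within each origin, a value scoped to the target
    environment wins over an unscoped one.
    """
    for values in (project_values, template_values):
        # Prefer a value scoped to the environment being deployed to.
        for v in values:
            if v["name"] == name and v.get("scope") == environment:
                return v["value"]
        # Fall back to an unscoped value from the same origin.
        for v in values:
            if v["name"] == name and v.get("scope") is None:
                return v["value"]
    return None


# Scenario 3: template value scoped to Development, project value scoped to Staging.
template_values = [{"name": "AzureAccount", "value": "Account-123", "scope": "Development"}]
project_values = [{"name": "AzureAccount", "value": "Account-124", "scope": "Staging"}]

print(resolve_parameter("AzureAccount", "Development", project_values, template_values))  # Account-123
print(resolve_parameter("AzureAccount", "Staging", project_values, template_values))      # Account-124
```

Scenarios 1 and 2 follow from the same ordering: any matching project-supplied value, scoped or unscoped, is considered before the process template's values are.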
### Step specific issues

- You cannot configure **Edit YAML** on the **Configure and apply Kubernetes resource** step.
- You cannot configure cloud target discovery on steps. You must use project variables when consuming a process template in a project instead.
- When referencing a file from a Git repo (for example, a script, manifest, or Kustomize file), you cannot pick the project Git repository as the source. You must supply the Git repository URL. The URL can be passed in via a parameter or hardcoded in the template itself. Hardcoding is not recommended.

### Cloning process templates

You cannot clone a process template in Platform Hub through the Octopus UI. The process for cloning a process template is:

1. Clone the process template OCL file in the Platform Hub Git repository.
2. Change the name of the cloned OCL file to the desired name.
3. Change the name of the process template in the OCL file.
4. Commit and push the changes.
5. Refresh the process template list in the Octopus Deploy UI and find the newly created template.
6. Publish the template and configure the Spaces that have access to it.

### Platform Hub account limitations

The following account types are not supported:

1. Token
2. SSH

Platform Hub accounts cannot be used:

- By deployment targets.
- For cloud target discovery.

### Public API

Process templates can be created and managed through our [public API](/docs/octopus-rest-api).

- Process templates are stored as code in the configured Git repository. The OCL files store all relevant information about the template - including the parameters, the steps, name, description, and other settings.
- The published versions and configured Spaces are stored and managed in the database.

:::div{.warning}
We do not currently support creating or managing process templates through the CLI or the Terraform provider.
:::

See [CreateProcessTemplateUsageStep](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/Octopus.Client/Csharp/DeploymentProcesses/CreateProcessTemplateUsageStep.cs) for an example of how to configure a process template on a deployment process using [Octopus.Client](/docs/octopus-rest-api/octopus.client).

### GitHub Connections

GitHub Connections is supported in Platform Hub, but it can only be used to configure Platform Hub version control. It can't be used on steps in templates.

### Losing access to an Octopus Enterprise license

Process templates and all Platform Hub features are restricted to customers who have an Enterprise Tier license. When you no longer have an Enterprise license, process templates will work differently.

#### What will continue to work

- Existing deployments and runbook runs can be redeployed or rerun.
- New releases that have a process containing process templates can be created.
- New runbook runs that have a process containing process templates can be created.
- Auto-scheduled deployments or runbook runs will continue to work.

#### What will not work anymore

- Users will lose access to Platform Hub, including the ability to create and manage all Platform Hub features.
- Process templates cannot be modified inside a project.
- Process templates will no longer receive updates and automatically roll forward to a later version.
- Projects that contain process templates cannot be cloned until the process template is removed.

### Output Variables

To reference output variables from process template steps, add `.ProcessTemplate` to the standard output variable syntax. When referencing an output variable in a step **inside a process template**, use the format:

```text
Octopus.ProcessTemplate.Action[StepName].Output.PropertyName
```
When referencing an output variable in a step **outside a process template**, include the name of the process template usage step as it appears in the project.

```text
Octopus.ProcessTemplate[ProcessTemplateUsageStepName].Action[StepName].Output.PropertyName
```

#### Example

Consider a process template named **Build and Create Web App** containing a step that runs a script and publishes an output variable `FilePath`:

```hcl
name = "Build and Create Web App"
description = ""

step "run-a-script" {
    name = "Collect Details"

    action {
        action_type = "Octopus.Script"
        ...
    }
    ...
}
```

Reference the variable from another step **inside** the process template using:

```text
Octopus.ProcessTemplate.Action[Collect Details].Output.FilePath
```
When this process template is used in a project with a process template usage step named **Create Web App**:

```hcl
process_template "run-a-process-template" {
    name = "Create Web App"
    process_template_slug = "build-and-create-web-app"
    version_mask = "1.X"

    parameter "linux worker" {
        value = "WorkerPools-1"
    }
    ...
}
```

Reference the variable from any other step in the process, which is **outside** the process template, using:

```text
Octopus.ProcessTemplate[Create Web App].Action[Collect Details].Output.FilePath
```

:::div{.hint}
Use the name of the process template usage step from the project, not the name of the process template itself, when referencing output variables outside a process template.
:::

# Process Templates best practices

Source: https://octopus.com/docs/platform-hub/templates/process-templates/best-practices.md

> Best practices for creating process templates within Platform Hub

This document uses **Producer** and **Consumer** frequently. To avoid confusion, use these definitions:

- **Producer** - the user who creates and manages process templates in Platform Hub.
- **Consumer** - the user who uses the process templates in their deployment or runbook processes.

## Process templates administration

### Establish a naming standard

Use a [ Prefix ] - [ Template Name ] format that makes it easy for everyone to understand the template's purpose. The [ Template Name ] should be succinct and informative. The [ Prefix ] should inform everyone where the template should be used. For example:

- Deploy Process - [ Template Name ] for templates designed for deployments only.
- Runbook - [ Template Name ] for templates designed for runbooks only.
- Deploy and Runbook - [ Template Name ] for templates that can be used in deployments or runbooks.
:::figure
![A list of process templates with the appropriate prefix and template name](/docs/img/platform-hub/process-templates/process-templates-overview-screen.png)
:::

### Establish what the major/minor/patch/pre-release means to your company

Process templates' versioning provides hints:

- Major (breaking changes)
- Minor (non-breaking changes)
- Patch (bug fixes)
- Pre-Release

There will be confusion unless you define what a breaking vs. non-breaking change means to you and your company. A starting point for a policy could be:

- **Major** — The change requires the consumer to test it. The template changes how it fundamentally works, and it might delete existing parameters or add new ones. A consumer updating the template requires a PR in their project.
- **Minor** — The change generally doesn’t require the consumer to test it extensively, but it might have added or removed a parameter, which will require a PR by the consumer because it changes the deployment process.
- **Patch** — No parameters were added or removed, and testing isn’t required (except by the consumer who reported the bug). The consumer doesn’t require a PR as the deployment process OCL doesn’t change.
- **Pre-Release** — Use for changes that aren’t ready for general use.

### Leverage branch protection policies on the Platform Hub Git repository

Template changes should occur in a branch and be reviewed via a PR. To test the changes, use the Pre-Release feature within the versioning functionality.

## Building Templates

### Opt for several smaller templates over "all-in-one" templates

Creating a single process template containing all the steps required to deploy an application can be tempting. In practice, the "all-in-one" template falls apart at scale.
#### Not all applications use the same components

Consider this example:

- Application #1 - Deploys a container to Kubernetes with a SQL Server Backend
- Application #2 - Deploys a container to Kubernetes that monitors a queue

No matter what, you will create two templates.

- Option #1 - Create a template for each application combination
    - Kubernetes + SQL Server
    - Kubernetes + Queue
- Option #2 - Create a template for each component
    - Kubernetes
    - SQL Server

With option #2, you’ll have templates that can be mixed and matched with other application types. For example, the SQL Server template can be used for .NET apps running on Azure Web Apps, Linux, or Windows.

#### Some applications require steps before or after templates

Consider this deployment process:

:::figure
![A deployment process using process templates with a step between two templates](/docs/img/platform-hub/process-templates/process-templates-requiring-steps-between-template-steps.png)
:::

It uses three process templates, but they don’t all run back to back. Between the first process template, `Verify Build Artifacts`, and the second process template, `Deploy Databases`, a step to verify the infrastructure runs. Not all applications need that specific step to run between those templates. Having multiple process templates allows application teams to insert steps before or after specific actions are performed.

#### A template for everybody is a template for nobody

A large all-in-one template requires significant complexity to account for multiple use cases. We’ve seen all-in-one templates follow the same pattern:

1. The template starts out simple.
2. More use cases are encountered and additional steps are added. Steps solely focused on business logic and creating output variables become the norm.
3. Conditional run conditions for multiple steps become the default. The template becomes very brittle as people need to “hold it just right” for everything to work.
4.
Conditional steps start to fail randomly, or steps are skipped randomly because of a configuration change.
5. Consumers are forced to update the templates repeatedly to fix the ever-growing list of bugs.
6. Consumers start asking for the ability to cherry-pick steps when running the template.

Eventually, the template becomes unusable, and users want a complete rewrite or ask how they can get out of using the templates.

### Follow the single responsibility principle

A template should have a single purpose. That doesn’t mean a single step, but a singular purpose that is easy for consumers to understand. Some examples include:

- Template to create a database on a server
- Template to destroy that database
- Template to verify build artifacts
- Template to deploy and verify an application on a Kubernetes cluster
- Template to deploy a database change

The template should include all the necessary steps to accomplish that task. Consider a template to deploy a database change. The company policy might be to build a delta report and verify it before deploying. But DBAs don’t need to be bothered with every change, so only alert them when specific commands, such as `Drop Database`, appear. To accomplish that, the template would be:

:::figure
![A process template that deploys databases with all the necessary steps to accomplish the task](/docs/img/platform-hub/process-templates/process-template-to-deploy-databases.png)
:::

### Parameters

Parameters should be the only way for information to be sent from consumers' deployment and runbook processes to process templates.

- Templates should not explicitly rely on output variables from other process templates. Template B shouldn’t have a hard-coded output variable from template A.
- Templates should not hardcode the names of project variables or variable set variables. They must be sent in via parameters.
- Any external references for items outside of Platform Hub - secrets, feeds, worker pools, tags - must be sent in via parameters.

### Templates must be self-contained

A template should not expect other templates to be included in the deployment or runbook process.

- Use parameters for all inputs.
- The consumer, not the producer, should determine the order of the templates in the process. The consumer has context regarding the order of component deployments.
- All steps needed to accomplish the template's task must be included.

### Keep consumer decision-making to a minimum

The consumer shouldn’t need to worry about:

- The number of steps required to accomplish a task
- The steps that can be skipped based on a decision within the template (alerting a DBA if the Drop Database command is found)
- The specific logic of how a task is accomplished

A consumer should be able to say:

> I want to deploy to Kubernetes and verify the deployment using this information:
>
> - The container URL
> - The Git repository of the manifest files
> - The path to manifest files in that repository
> - The target tag of the cluster to deploy to
> - The verification script to run

It is the producer's job to figure out how to take those parameters and deploy the container to Kubernetes.

### Include notes for each step in the process template

Notes help the consumer understand the intent behind each step. If a deployment fails, it is easier for them to self-diagnose why the failure occurred if they have that context. Sometimes, a step name is all that is required to understand the intent. However, assuming everyone will understand the context based on the name alone is dangerous. It is better to include notes by default.
:::figure
![Deploy process using process templates that leverage notes](/docs/img/platform-hub/process-templates/deploy-process-using-process-templates-with-notes.png)
:::

## Producer-managed script steps in templates

A process template has two options for the `Run a script` step.

1. Consumer-managed script steps, such as running a verification step after a deployment. The producer will not know the necessary tests to run, so they will ask the consumer to provide the script for the tests.
2. Producer-managed script steps, such as creating a delta report from the provided database package and looking for dangerous commands. The script is created and managed by the producer to accomplish the goal of the template.

This section refers to the latter, producer-managed script steps.

### Store the script inline with the template

Since the template is already in version control, referencing another Git repo for the script is redundant. If multiple templates need to reference the same script, that indicates they are doing too much. They likely aren’t following the above best practices (single responsibility principle, self-contained, etc.).

### Output variables intended for business decisions should only be used by the template

It is common for a script step to make a set of decisions and create an output variable. For example, the first step in this template looks for dangerous commands in the migration scripts and sets an output variable when those commands are found. Those kinds of output variables should only be used by the template itself.

:::figure
![A process template that makes a business decision in the first step and uses output variables in the next two steps](/docs/img/platform-hub/process-templates/process-template-to-deploy-databases.png)
:::

### Output variables must be surfaced via logs when intended for outside steps to use

When a template explicitly creates output variables to be used by other steps, they must be logged for the consumer to know.
For example, a template that retrieves values from a key vault and sets them as output variables.

- When possible, allow the consumer to provide a list of output variables - for example, telling the key vault which secrets to retrieve.
- When that is not possible, log all the output variables created.
- If another process template will use those output variables, they should be sent in as parameters. There should never be a hard link between templates.

### Log everything

Octopus supports different levels of logs:

- Write-Verbose - writes a verbose log (hidden by default)
- Write-Info - writes an info log (visible on task log screen by default)
- Write-Highlight - writes the information to the task summary screen
- Write-Error - writes an error message

Use those log levels and write messages frequently. This aids in debugging when a deployment or runbook run fails. Logs are like an umbrella - better to have one and not need it than need it and not have it.

# Project templates

Source: https://octopus.com/docs/platform-hub/templates/project-templates.md

> An overview of project templates

:::div{.warning}
Project templates are in Alpha. The feature is incomplete and standard SLAs do not apply. Don't use it for production workloads. It is available to Enterprise customers on Cloud. Self-hosted customers can access it as an early preview via Octopus 2026.2. We're actively developing this feature and would love your feedback.
:::

## Overview

Project templates are reusable project blueprints that can be shared across multiple spaces in Octopus Deploy. Instead of manually configuring each new project from scratch, defining deployment steps and variables every time, you create a single template that any space can use as a starting point. This ensures teams follow the same standards and removes the risk of configuration drift.

To create or manage your project templates, navigate to Platform Hub.
If you haven't set up your Git repository, you must do so before creating a project template. 1. Navigate to **Project Templates** in Platform Hub. 2. Give the project template a **Name** and an optional **Description**. 3. Create your project template. :::figure ![Creating a project template with a name and description](/docs/img/platform-hub/project-templates/project-templates-onboarding.png) ::: After creating your template, Octopus adds the template's [folder and OCL files](#git-repository-structure) to your Git repository. If you've already created templates or are joining an existing team, you'll see the existing templates on the overview page. :::figure ![The Project Templates overview page](/docs/img/platform-hub/project-templates/project-templates-list.png) ::: You can now define the deployment process, parameters, and variables for the template. ## Deployment process The deployment process defines the steps Octopus orchestrates when deploying a project created from this template. Each project template has a single deployment process, and you can use Octopus's built-in steps, step templates, community step templates, and process templates to define it. Projects created from the template can't modify the deployment process. They can't add, remove, reorder, or disable steps. The only thing a project can configure is the parameter values explicitly exposed in the template, ensuring every project based on the template follows the same deployment process. Some steps behave differently inside the project template editor. Instead of letting you set a value directly, they ask for parameters or variables. Parameters are required when a step requires a resource that Platform Hub can't define, such as a Worker Pool, and that resource must be supplied by the project. These fields accept parameters so projects can provide the right values for their context. 
:::figure ![A step in a project template asking for a Worker Pool parameter](/docs/img/platform-hub/project-templates/project-templates-process-editor.png) ::: :::div{.hint} Unlike standard projects, project templates validate the deployment process when you publish, not when you commit. You can save an incomplete process and continue configuring parameters and variables before publishing. This will change once we add inline parameter and variable configuration to the deployment process editor. ::: :::div{.hint} If your deployment process includes a process template configured to auto-update on patch or minor versions, those updates flow through to templated projects automatically, even without you publishing a new version of the project template. This means two releases created on different days could use different versions of the process template, without anyone making any change to the project template or the project itself. We're interested in your [feedback](#feedback) on whether this behavior meets your expectations. ::: ## Parameters Parameters let you define the inputs a user must supply when they create a project from the template. They're the mechanism for making a template flexible. Rather than hardcoding values that differ between teams or spaces, you expose them as parameters. :::div{.warning} In the Alpha release, project templates don't support parameter scoping or sensitive parameter values. We're still working out how parameters, variables, and scoping should work in project templates and expect this to evolve throughout Alpha. We'd love your [feedback](#feedback). ::: For a full reference of supported parameter types and default values, see [Template parameters](/docs/platform-hub/templates/parameters). To create a parameter, navigate to the **Parameters** tab on your project template and add a new parameter. 
:::figure ![The Parameters tab in a project template](/docs/img/platform-hub/project-templates/project-templates-parameters.png) ::: ## Variables Variables in a project template work the same way as project variables in a standard Octopus project. Any variable you define is available to the deployment and can be selected in steps. Unlike parameters, users can't change the variables defined in a template when creating a project from it. Use variables for values that must be consistent across every project, like accounts. If you need users to provide their own value, expose it as a parameter instead. Variable values can reference parameters, letting you combine fixed template-level values with project-supplied inputs where needed. :::div{.warning} In the Alpha release, the variable types you can use are limited to text, sensitive, and resources currently available in Platform Hub, such as Accounts. Variable scoping is also not supported. We're adding support for additional resource types throughout Alpha. We'd love your [feedback](#feedback) on what you need. ::: :::figure ![The Variables tab in a project template](/docs/img/platform-hub/project-templates/project-templates-variables.png) ::: ## Git repository structure Octopus stores each project template as a folder in the Platform Hub Git repository. The folder name is a slug derived from the template name. Each folder contains four [OCL](/docs/projects/version-control) files: ```text project-templates/<template-slug>/ deployment_process.ocl parameters.ocl template.ocl variables.ocl ``` - **`template.ocl`** contains the template settings - **`deployment_process.ocl`** contains the deployment process steps - **`parameters.ocl`** contains the parameters defined for the template - **`variables.ocl`** contains the variables defined for the template Octopus stores published versions, sensitive variables, and space sharing configurations in the database, not in the Git repository.
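The layout above can be sketched in code. Below is a minimal illustration in Python; the slug derivation shown (lowercase, with runs of non-alphanumeric characters collapsed into single hyphens) is an assumption for illustration, and Octopus's exact slug rules may differ:

```python
import re

def template_slug(name: str) -> str:
    """Approximate the slug Octopus derives from a template name.
    Assumption for illustration: lowercase the name and collapse
    runs of non-alphanumeric characters into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def template_files(name: str) -> list[str]:
    """Paths of the four OCL files stored for a project template."""
    slug = template_slug(name)
    return [
        f"project-templates/{slug}/{ocl}"
        for ocl in ("deployment_process.ocl", "parameters.ocl",
                    "template.ocl", "variables.ocl")
    ]

for path in template_files("Service - Kubernetes API"):
    print(path)
```

For a template named `Service - Kubernetes API`, this prints paths such as `project-templates/service-kubernetes-api/template.ocl`.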
## Committing, publishing, and sharing After you've configured your project template, see [Publishing and sharing templates](/docs/platform-hub/templates/publishing-and-sharing) for how to commit, publish, and share it. ## Using a project template After you publish and share a template, users in a space can create a new project from it. For details on creating and managing templated projects, see [Templated projects](/docs/platform-hub/templates/project-templates/using-project-templates). ## Feedback Project templates are in Alpha and we're actively shaping how the feature works. If you run into something unexpected or have thoughts on how parameters, variables, scoping, or anything else should work, we'd love to hear from you. [Share your feedback](https://oc.to/feedback) to help us build this the right way. # Templated projects Source: https://octopus.com/docs/platform-hub/templates/project-templates/using-project-templates.md > How to create and manage projects from a project template :::div{.warning} Project templates are in Alpha. The feature is incomplete and standard SLAs do not apply. Don't use it for production workloads. It is available to Enterprise customers on Cloud. Self-hosted customers can access it as an early preview via Octopus 2026.2. We're actively developing this feature and would love your feedback. ::: A **templated project** is a project created from a project template. It inherits the template's deployment process and variables, which you can't modify. You customize the project by supplying values for the parameters the producer has defined. ## Create a project from a template To use a project template, you create a new project based on it. 1. Select **Projects** from the main navigation and click **Add Project**. 2. If there are any available project templates, you'll see them listed here. Select the template you want to use. 
:::figure ![Selecting a project template when creating a new project](/docs/img/platform-hub/project-templates/project-template-selection.png) ::: 3. Give the project a **Name** and choose where its settings, non-sensitive variables, and template values will be stored. :::figure ![Naming a templated project and choosing storage settings](/docs/img/platform-hub/project-templates/templated-project-creation.png) ::: 4. Click **Next** to configure your versioning preferences. You can change these later in project settings. If the template has pre-release versions, you'll also be asked to choose a version type: - **Stable**: intended for production use. - **Pre-release**: intended for testing purposes only, typically by the template producers. Not recommended for production use. :::figure ![Choosing between stable and pre-release template versions](/docs/img/platform-hub/project-templates/templated-project-version-selection.png) ::: 5. Select how you want the project to handle template updates: - **Accept minor changes**: automatically updates when a patch or minor version is published. Major versions require a manual update. - **Accept patches**: only automatically updates when a patch is published. Minor or major versions require a manual update. :::figure ![Configuring template version update preferences](/docs/img/platform-hub/project-templates/templated-project-version-settings.png) ::: 6. Click **Create Project**. You'll be taken to the **Template values** page. ## Template values Template values are the parameters the producer has defined to let you customize the template for your project. They work like project variables: you can provide a value directly or use variable substitutions. Parameters with defaults are marked as optional. :::figure ![The Template values page for a templated project](/docs/img/platform-hub/project-templates/templated-project-values.png) ::: After you've provided the required values, you can create a release as usual.
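Template values support Octopus's variable substitution syntax (`#{VariableName}`). Octopus performs the real substitution at deployment time; the Python sketch below only illustrates the idea, and the variable names in it are hypothetical:

```python
import re

def substitute(text: str, variables: dict[str, str]) -> str:
    """Replace #{Name} tokens with values from `variables`,
    leaving unknown names untouched. Illustrative only; Octopus
    resolves template values itself during deployment."""
    return re.sub(
        r"#\{([\w.:-]+)\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        text,
    )

# Hypothetical template value combining fixed text with variables:
print(substitute(
    "registry.example.com/#{Project.ImageName}:#{Project.ImageTag}",
    {"Project.ImageName": "guestbook", "Project.ImageTag": "1.4.2"},
))
```

A token whose name isn't defined is left as-is, which mirrors how an unresolved substitution surfaces during a deployment rather than silently disappearing.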
:::div{.hint} You can't modify the deployment process in a templated project. You can't add, remove, reorder, or disable steps. If you need to change the process, contact the template producer. ::: ## Template updates When the producer publishes a new version of the template, you'll receive the update. How and when it's applied depends on the versioning preferences you set when creating the project: - **Patch and minor updates**: Octopus applies these automatically if you chose to accept them. - **Major updates**: you must manually apply these, regardless of your preferences. When a major update is available, you'll need to review and apply it before you can create new releases. ## Future direction We're still shaping what project templates can do, and there are a few areas we're actively thinking about. If any of these sound useful to you, we'd love to hear about it. [Share your feedback](https://oc.to/feedback). ## Limitations Some features are not yet supported for templated projects. For a full list, see [Troubleshooting](/docs/platform-hub/templates/project-templates/troubleshooting). # Installing the Alpha preview of project templates Source: https://octopus.com/docs/platform-hub/templates/project-templates/installation-guide.md > Guide for installing a preview version of Octopus Server with project templates :::div{.warning} Project templates are in Alpha. The feature is incomplete and standard SLAs do not apply. Don't use it for production workloads. It is available to Enterprise customers on Cloud. Self-hosted customers can access it as an early preview via Octopus 2026.2. We're actively developing this feature and would love your feedback. ::: ## License requirements Project templates require an Octopus Enterprise license with the project templates entitlement. Contact us to confirm your license includes access before proceeding. ## How to install Octopus Server 2026.2 Project templates are available to Cloud customers without any additional setup.
Self-hosted customers can access project templates as an early preview by installing the latest Octopus Server 2026.2. :::div{.warning} You should only install a preview version of Octopus Server if you are comfortable adopting a feature before it's fully complete. Any issues or bugs you encounter with preview features may take longer to fix than normal. For other features, we provide the same level of support as for LTS versions. Contact us with any questions about whether this approach is right for you. ::: 1. Download Octopus Server 2026.2. - If you are running Octopus on Windows, you can [download the installer](https://download.octopusdeploy.com/octopus/Octopus.2026.2.2311-x64.msi) directly. - If you are running Octopus on Linux, you can pull the Docker image. The preview version is `2026.2.2311-PublicPreview` — see the [Docker Hub page](https://hub.docker.com/repository/docker/octopusdeploy/octopusdeploy/tags/2026.2.2311-PublicPreview/sha256-b81bd5d752b22f25137306086a4ea76168e9a7e17f7d573116c509c6bfa23469) for the full image details. 2. After downloading, upgrade your Octopus instance using the [upgrading guide](/docs/administration/upgrading). 3. Navigate to **Platform Hub** and select **Project Templates** to get started. :::div{.hint} Users must have **PlatformHubEdit** and **PlatformHubView** permissions to access Platform Hub. These permissions can only be assigned to system teams. By default, system administrators and system managers have both permissions enabled. ::: # Troubleshooting Source: https://octopus.com/docs/platform-hub/templates/project-templates/troubleshooting.md > Known issues and limitations for project templates :::div{.warning} Project templates are in Alpha. The feature is incomplete and standard SLAs do not apply. Don't use it for production workloads. It is available to Enterprise customers on Cloud. Self-hosted customers can access it as an early preview via Octopus 2026.2.
We're actively developing this feature and would love your feedback. ::: ## Alpha limitations Project templates are in Alpha. The following features are not yet supported and are planned for future releases: - Channels - Lifecycles - Environments - Ephemeral environments - Cloud target discovery on steps. Use project variables instead - Cloning a project template through the Octopus UI - Creating and managing project templates through the REST API, CLI, or Terraform provider - Feeds - Project settings - Runbooks - Triggers - Import and export of templated projects - Git Credentials - Inline variable and parameter configuration within the deployment process editor We'll update this page as the feature evolves. ## Step support Project templates support most Octopus steps. The following step package framework steps are not supported: - Deploy a Bicep Template - AWS S3 Create Bucket - AWS ECS These steps are being migrated away from the step package framework and will be supported in the future. ## Cloning project templates You can't clone a project template through the Octopus UI. To clone a template: 1. Copy the template's folder in the Platform Hub Git repository. 2. Rename the folder to the desired slug. 3. Update the template name inside `template.ocl`. 4. Commit and push the changes. 5. Refresh the project template list in the Octopus UI to find the newly created template. 6. Publish the template and configure the spaces that have access to it. ## Public API The Alpha release doesn't support creating and managing project templates through the REST API. We're planning REST API support for a future release. ## Losing access to an Octopus Enterprise license Project templates and all Platform Hub features require an Enterprise license. When you no longer have an Enterprise license, project templates behave differently. ### What will continue to work - Existing deployments can be redeployed. - Auto-scheduled deployments will continue to run.
### What will no longer work - Users will lose access to Platform Hub, including the ability to create and manage project templates. - You can no longer update or publish project templates. - You can no longer create new projects from project templates. # Project template best practices Source: https://octopus.com/docs/platform-hub/templates/project-templates/best-practices.md > Best practices for creating project templates in Platform Hub :::div{.warning} Project templates are in Alpha. The feature is incomplete and standard SLAs do not apply. Don't use it for production workloads. It is available to Enterprise customers on Cloud. Self-hosted customers can access it as an early preview via Octopus 2026.2. We're actively developing this feature and would love your feedback. ::: This document uses **Producer** and **Consumer** frequently. To avoid confusion, use these definitions: - **Producer**: the user who creates and manages project templates in Platform Hub. - **Consumer**: the user who creates projects from those templates. ## Project template administration ### Establish a naming standard Use a **[ Prefix ] - [ Template Name ]** convention that's easy for everyone to understand at a glance. The template name should be succinct and informative. The prefix should convey the intended use. For example: - **Project - [ Template Name ]** for general deployment project templates - **Service - [ Template Name ]** for templates designed for specific service types (APIs, background workers, etc.) ### Define what major, minor, patch, and pre-release mean for your organization Project template versioning provides hints: - **Major** (breaking changes) - **Minor** (non-breaking changes) - **Patch** (bug fixes) - **Pre-release** Without a shared definition of breaking vs. non-breaking, teams will interpret these differently. A starting point for a versioning policy: - **Major**: The change fundamentally alters how the template works. Parameters have been added or removed. 
Consumers need to review and test the update before accepting it. - **Minor**: Parameters may have been added or adjusted, which could require a change in the consuming project, but the core behavior is preserved. - **Patch**: No parameters were added or removed. Bug fixes only. Consuming projects can accept the update without a deployment process change. - **Pre-release**: Use for changes that aren't ready for general use. Share with a specific space to test before promoting. ### Use branch protection on the Platform Hub Git repository Template changes should happen in a branch and be reviewed via a pull request. Use the pre-release feature to test changes before promoting them to a stable version. ## Building templates ### Keep templates focused A project template should represent one clear type of project, not accommodate every variation your organization uses. If you find yourself adding conditional logic to handle different use cases, that's a sign you need more than one template. For example, a template for a Kubernetes microservice hosting an application and one that hosts a message queue may share similar infrastructure but have meaningfully different deployment processes. Separate templates are clearer for consumers and easier to maintain. ### Use parameters for everything consumer-specific Parameters are the only way information should flow from a project into the template. This means: - Don't hardcode space-specific values, account names, or resource identifiers in the template - Don't expect consumers to know internal variable names. Expose them as parameters - Any external reference such as secrets, feeds, worker pools, and target tags must come in via a parameter - Only expose parameters the consumer genuinely needs to provide. If the producer should control a value, use a variable instead ### Keep consumer decision-making to a minimum A consumer should be able to create a project from the template by supplying a small set of well-defined inputs. 
They shouldn't need to understand the internal mechanics of how the deployment works. A consumer should be able to say: > I want to deploy this service using these values: > > - The container image > - The target tag of the cluster to deploy to > - The connection string for the database It's the producer's job to figure out how to take those inputs and run a reliable deployment. ### Lock values that must be consistent across projects Template variables are fixed. Consumers can't override them. Use this for values that must be the same across every project created from the template, such as accounts. If you want consumers to supply their own value for something, expose it as a parameter instead. Variable values can reference parameters, letting you combine fixed template-level values with project-supplied inputs where needed. ### Include notes for each step in the deployment process Step notes help consumers understand what each step does and why. If a deployment fails, clear notes make it much easier for them to self-diagnose. Don't assume that a step name alone provides enough context. ## Publishing and versioning ### Test your template before publishing a stable version Before publishing a stable version, create a test project using the template in a development or sandbox space. Verify the deployment runs end-to-end with realistic parameter values using pre-release versions. ### Write release notes when publishing a new version Each published version can include release notes. Describe what changed, whether any parameters were added or removed, and what consumers need to do when updating. A concise, clear release note (for example, *Added a required parameter for the image pull secret. Update your project before creating a new release.*) saves consumers time and reduces support requests. # Projects Source: https://octopus.com/docs/projects.md > Projects gather together all the processes, releases, and runbooks for an application or service. 
In Octopus, you set up a project for each component you deploy – for example, each application, service, API, or database. In the image below, “Database”, “Product API”, and “Shopping Cart API” are the projects. Each project contains all the information to deploy that application, service, or database. :::figure ![Octopus Dashboard](/docs/img/getting-started/dashboard.png) ::: For each project, you can define: - A [deployment process](/docs/projects/deployment-process) - [Runbooks](/docs/runbooks) to manage your infrastructure - [Variables](/docs/projects/variables/) - The [environments](/docs/infrastructure/environments) where you'll deploy the software Each project has a single deployment process. The process used to deploy to your development environment is the same process used to deploy to your production environment. You can dive into the details of your projects from the bird's-eye view of your dashboard. For advice on how to work with Octopus projects, you can read our [project recommendations](/docs/projects/recommendations). :::figure ![Octopus Dashboard](/docs/img/projects/octopus-projects-list.png) ::: ## Project groups Project groups let you organize your projects and keep related components together. For example, you might have a project group for ‘Online Shop’ that has projects for the website, API, and database. :::figure ![Octopus Dashboard](/docs/img/projects/octopus-project-group.png) ::: Project groups are useful for organizing dashboards and finding related components. You can see the status of your deployments for each component at a glance in one place. You can also configure permissions at the project group level. ## Next steps Get started with the basics of [setting up a project](/docs/projects/setting-up-projects) and read our [project recommendations](/docs/projects/recommendations). Then, you can use the links below to add more functionality. 
## Deployments and managing projects - [Deployment processes](/docs/projects/deployment-process) - [Exporting and importing projects](/docs/projects/export-import) - [Variables](/docs/projects/variables) - [Tenants](/docs/projects/tenants) - [Project triggers](/docs/projects/project-triggers) - [Coordinating multiple projects](/docs/projects/coordinating-multiple-projects) - [Configuration as Code](/docs/projects/version-control) ## Steps - [Steps](/docs/projects/steps) - [Built-in step templates](/docs/projects/built-in-step-templates) - [Community step templates](/docs/projects/community-step-templates) - [Custom step templates](/docs/projects/custom-step-templates) - [Update step templates](/docs/projects/updating-step-templates) # Releases Source: https://octopus.com/docs/releases.md > A snapshot of the deployment process and associated assets A release is a snapshot of the deployment process and the associated assets that existed when you created the release. These assets include scripts, references to package versions, and variables. :::figure ![Octopus releases overview](/docs/img/releases/octopus-releases-overview.png) ::: After you define your deployment steps and create a new release to snapshot the process, you then use that same process consistently across your environments. The process, scripts, package references, and variables remain consistent each time the release gets deployed. If you make changes to these assets, it won't affect releases you've already created. This is a crucial part of repeatable deployments. Each release gets assigned a version number. You can deploy releases as often as necessary, even if the deployment process has since changed for newer releases. Tenant variables aren't included in the release snapshot. This makes it easier to onboard new tenants without creating a new release. Changes to tenant variables are effective straight away. 
:::div{.success} To include common variables for a tenant, you must add the variable set in the tenant connected project. ::: ## Next steps Learn how to [create a release](/docs/releases/creating-a-release) and the role of [lifecycles](/docs/releases/lifecycles) and [channels](/docs/releases/channels) when releasing your software. # Supported Use Cases Source: https://octopus.com/docs/argo-cd/resources.md The Octopus and Argo CD integration supports a variety of application configurations. This page covers how each step behaves for different application shapes, and any constraints to be aware of. ## Constraints - Octopus updates content in the repositories referenced by your application. It does not update pinned `TargetRevisions` in your `Application.yaml`. - If your application specifies a constant `TargetRevision`, Octopus treats it as a branch and will fail to push back to your repository. - Helm sources that directly reference a chart from a Helm repository or OCI feed are read-only and can't be updated by Octopus. - If your application is represented as a Helm chart *in a directory*, Octopus can update the directory content in the application's repository. - Pull requests can be created for GitHub, GitLab, and Azure DevOps hosted repositories (e.g. \*.github.com, \*.gitlab.com). - Please [let us know](https://oc.to/roadmap-argo-cd) which other providers you would like to see supported. - Multiple source applications require Argo CD 2.14.0 or above (corresponding to the introduction of named sources in Argo CD). - Single source applications are supported in all versions of Argo CD. For details on how each step behaves, see: - [Update Argo CD Application Image Tags](/docs/argo-cd/steps/update-application-image-tags) - [Update Argo CD Application Manifests](/docs/argo-cd/steps/update-application-manifests) # Glob Pattern Cheat Sheet Source: https://octopus.com/docs/kubernetes/resources.md Patterns are always relative so start them with a file or folder name. 
e.g. `my/folder/*.yml` and `**/dep.yml`. :::div{.warning} Directory separators should be forward slashes `/` for all platforms. Backslashes `\` only work when the server and worker are running on Windows. ::: :::div{.hint} Glob patterns cannot contain folders stemming from a root directory, e.g. `/` and `C:\`. Glob patterns cannot start with a relative path indicator, e.g. `./` and `.\`. The directory traversal path `../` is not supported. ::: `?` matches any single character in a file or directory name: ``` deployments/resource-?.yaml => deployments/resource-1.yaml, deployments/resource-g.yaml ``` `*` matches zero or more characters in a file or directory name: ``` deployments/*.yaml => deployments/anything-here.yaml, deployments/123-another-file.yaml */resource.yaml => deployments/resource.yaml, services/resource.yaml ``` `**` matches zero or more recursive directories: ``` **/resource.yaml => deployments/resource.yaml, services/resource.yaml, deployments/child-folder/resource.yaml ``` # Runbooks Source: https://octopus.com/docs/runbooks.md > Automate routine maintenance and emergency operations tasks Deployments are just one piece of the deployment puzzle. You also have to manage day-1 and day-2 operations. Octopus Runbooks lets you automate these routine and emergency operations tasks, giving you one platform for DevOps automation. A runbook is a set of instructions that help you consistently carry out a task, whether it's routine maintenance or responding to an incident. Octopus provides the platform for your runbooks just as it does for your deployments.
Runbooks automate routine maintenance and emergency operations tasks, like: - Infrastructure provisioning - Database management - Website failover and restoration :::figure ![Octopus deployment process](/docs/img/runbooks/runbooks-screen.png) ::: Runbooks help you: - Make operations more repeatable and reliable - Let people self-serve without granting access to the underlying infrastructure - Automate tasks so you don't need human intervention - Free your teams for more crucial work ## How runbooks work You can set permissions so anyone on a team can start a runbook, or you can limit access to specific environments. Octopus handles access control and provides a complete audit trail. This makes runbooks ideal for creating safe and secure, self-service, push-button operations. This also frees up your Ops team from time-consuming, repetitive tasks. You can also use prompted variables with runbooks if you need human interaction, like a review. Because you don't need to grant access to the underlying infrastructure, reviews and approvals can happen in Octopus, too. This keeps the whole process in one place. The audit log stores changes to runbooks, requests to run them, and approvals, for complete transparency. ## Types of runbooks There are 3 common types of runbooks: - **Routine operations** - where you replace manual operations and ClickOps with runbooks. The goal is to move all toil into runbooks so you don’t need to remote into servers or click through cloud management portals. You can use Octopus to bring these tasks into one place and make them self-service or automatic. - **Emergency operations** - runbooks can reduce stress during an incident. You can have runbooks that restart a server, or perform a graceful failover to your standby region when there's a major incident. - **Infrastructure provisioning** - for elastic or transient environments, you can use runbooks to deploy templates for cloud infrastructure on Azure, AWS, or Google Cloud. 
Or, you might apply a Terraform template, destroy a Terraform resource, or create Terraform plans. Learn more about the ways you can use runbooks in our [runbooks examples](/docs/runbooks/runbook-examples). ## Getting started Runbooks belong to projects. To create or manage your runbooks, navigate to **Deploy ➜ Runbooks ➜ Add Runbook**. :::figure ![Add Runbook](/docs/img/runbooks/create-a-runbook.png) ::: ## Runbook tags \{#runbook-tags} :::div{.warning} Tagging runbooks is supported from Octopus version **2026.1.6552**. ::: You can apply tags to runbooks to attach custom metadata. This allows you to: - Organize runbooks by custom attributes to suit your team's needs. - Filter runbooks by tags to quickly find relevant automation tasks. - Group related runbooks together for easier management. :::div{.hint} Only tags from tag sets that have been configured with the **Runbook** scope can be used to tag runbooks. ::: Learn more about [tag sets](/docs/tenants/tag-sets), including tag set types, scopes, and how to create and manage them.
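The tag-based filtering described above can be pictured with a short sketch. This is a Python illustration only; the runbook names and tags are hypothetical, and the real filtering happens in the Octopus UI and API:

```python
# Hypothetical runbooks carrying tags from a Runbook-scoped tag set,
# written in the "TagSet/Tag" form Octopus uses.
runbooks = [
    {"name": "Restart web server", "tags": {"Operations/Emergency"}},
    {"name": "Rotate database credentials",
     "tags": {"Operations/Routine", "Databases/PostgreSQL"}},
    {"name": "Provision test environment", "tags": {"Operations/Routine"}},
]

def filter_by_tag(runbooks: list[dict], tag: str) -> list[str]:
    """Return the names of runbooks that carry the given tag."""
    return [r["name"] for r in runbooks if tag in r["tags"]]

print(filter_by_tag(runbooks, "Operations/Routine"))
```

Filtering on `Operations/Routine` returns the two routine runbooks and skips the emergency one.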
## Learn more - [Runbooks versus deployments](/docs/runbooks/runbooks-vs-deployments) - Learn how runbooks differ from deployments - [Runbooks permissions](/docs/runbooks/runbook-permissions) - Understand how to manage permissions - [Runbooks variables](/docs/runbooks/runbook-variables) - Learn how to manage variables - [Runbooks publishing](https://octopus.com/docs/runbooks/runbook-publishing) - Learn about snapshots for runbooks - [Running a runbook](/docs/runbooks/running-a-runbook) - Learn how to execute runbooks - [Scheduled runbook triggers](https://octopus.com/docs/runbooks/scheduled-runbook-trigger) - Define an unattended behavior for your runbook - [Runbook examples](https://octopus.com/docs/runbooks/runbook-examples) - Learn about the ways you can use runbooks # System for Cross-domain Identity Management (SCIM) Source: https://octopus.com/docs/security/authentication/scim.md [System for Cross-domain Identity Management](https://scim.cloud) is a standards-based approach used to allow identity providers to create, update and delete users and groups in other applications via an API. This makes it easier to provision and revoke user access to applications directly from the identity provider, as well as reduce the work involved in updating user details shared across systems, like names and email addresses. Octopus Deploy currently supports SCIM 2 as an Early Access feature of the Azure AD authentication provider. ## Benefits of SCIM - **Automated provisioning**: Users and groups are automatically created in Octopus Deploy when they're added to your identity provider. - **Automated deprovisioning**: Users are automatically deactivated in Octopus Deploy when they're removed from your identity provider. - **Synchronized updates**: User details like names and email addresses are automatically updated in Octopus Deploy when changed in your identity provider. 
- **Single source of truth**: Your identity provider becomes the authoritative source for user and group management. ## Supported identity providers - [Microsoft Entra ID](/docs/security/authentication/scim/configuring-microsoft-entra) ## Requirements - An Octopus Deploy license that includes the SCIM feature, such as an Enterprise license. - A configured authentication provider that supports SCIM. - Network access to allow inbound HTTPS API requests from your identity provider to Octopus Deploy. # Security Source: https://octopus.com/docs/security.md We pride ourselves on making Octopus Deploy a secure product. The security and integrity of your Octopus Deploy installation is the result of a partnership between us as the software vendor, and you as the host and administrators of your installation. This section provides information about the responsibility we take to provide a secure software product, and considerations for you as the host and administrator of your Octopus Deploy installation. ## Our Certifications Octopus Deploy is compliant with cybersecurity standards such as ISO 27001 and SOC 2. Every year Octopus undergoes a security review conducted by a third-party company. We also run several public bug bounty programs to encourage the security community to help us keep our customers safer. We are an active member of MITRE through its CVE program as a CVE Numbering Authority (CNA), meaning that we're responsible for disclosing any vulnerabilities in our product to allow customers' security teams to make informed decisions. A comprehensive overview of our security controls is available in our [Trust Centre](https://trust.octopus.com), where it is possible to request access to our certifications and penetration test reports, as well as other supporting documents and policies. We often hear from customers who want to know more about our security posture. We've performed a [self-assessment against various industry-standard controls](/docs/security/caiq).
Feel free to use this in any vendor assessments you need to perform. ## Responsibility Octopus Deploy has the responsibility of providing a secure and stable platform for managing your deployments. You have the responsibility for how that platform is implemented and exposed to your infrastructure and users. :::figure ![A diagram depicting the shared responsibility model for Octopus Deploy](/docs/img/security/shared-responsibility.png) ::: ### Octopus Cloud If you are using [Octopus Cloud](https://octopus.com/cloud), where we host your Octopus Server on your behalf, we take additional responsibility for the security and integrity of the Octopus Server. In this case, you are responsible for: - How you connect Octopus to your infrastructure. - How you identify your users and control their activities within Octopus. - How you handle sensitive information within Octopus. ### Self-hosted If you are hosting the Octopus Server yourself, you take responsibility for the security and integrity of the Octopus Server. In this case, you also take responsibility for: - How you harden the underlying server operating system. - How you protect the Octopus Server files on the operating system. - How you store files generated by Octopus Server. - How you secure your SQL Database and protect the data generated by Octopus Server. - How you expose your Octopus Server to your infrastructure. - How you identify your users and control their activities within Octopus. - How you handle sensitive information within Octopus. ## Built into Octopus Deploy ### Data encryption Octopus Deploy encrypts any data which we deem to be sensitive. You can also instruct Octopus Deploy to encrypt sensitive variables which can be used as part of your deployments. Learn about [data encryption](/docs/security/data-encryption/) and [sensitive variables](/docs/projects/variables/sensitive-variables).
### Secure communication Your Octopus Server communicates with the machines you configure as targets for your deployments using transport encryption and tamper-proofing techniques. Learn about [secure communication](/docs/security/octopus-tentacle-communication). ### Auditing Arguably one of the most appreciated features in Octopus Deploy is our support for detailed auditing of important activity. Learn about [auditing](/docs/security/users-and-teams/auditing). ### Prevention of common vulnerabilities and exploits To make Octopus Deploy useful to your organization, it needs a high level of access to your servers and infrastructure. We take great care to understand common vulnerabilities and exploits which could affect your Octopus Deploy installation, and ensure our software prevents anyone from leveraging these. ### FIPS compliance We make every reasonable effort to make Octopus Server, Tentacle, Calamari, and any other tools we provide FIPS 140 compliant. If something is not FIPS 140 compliant we will make every reasonable effort to fix the problem, or otherwise degrade the feature gracefully. Learn about [FIPS and Octopus Deploy](/docs/security/fips-and-octopus-deploy). ## Provided by the host The following sections describe the responsibilities taken by whoever is hosting your Octopus Server. If you are using Octopus Cloud, that's us. If you are self-hosting, this is you. ### Safely exposing your Octopus Deploy installation In many scenarios you will want to expose parts of your Octopus Deploy installation to external networks. You should take care to understand the security implications of exposing your Octopus Deploy installation, and how to configure it correctly to prevent unwanted guests from accessing or interfering with your deployments. Learn about [safely exposing Octopus Deploy](/docs/security/exposing-octopus).
### Safely executing scripts on the Octopus Server To make Octopus as useful as possible after installation, you can perform many kinds of deployments without setting up other infrastructure. We achieve this using a concept called a worker, and in a default installation, this is called the built-in worker. Depending on your scenario, this can have a big impact on the security and integrity of your Octopus Server. Learn about [configuring workers](/docs/infrastructure/workers). ## Provided by your Octopus administrators The following sections describe the security controls you can put in place when managing your Octopus Server regardless of where it is hosted. ### Identity and access control Before a person can access your Octopus Deploy installation, they must validate their identity. We provide built-in support for the most commonly used authentication providers, including Active Directory (NTLM and Kerberos), Google Apps, and Microsoft Azure Active Directory. Octopus Deploy works natively with OpenID Connect (OIDC) so you can connect to other identity providers. If you don't want to use an external identity provider, you can let Octopus Deploy securely manage your usernames and passwords for you. Learn about [authentication providers](/docs/security/authentication). Once a person has verified their identity, you can control which activities these users can perform. Learn about [managing users and teams](/docs/security/users-and-teams). ### HTTP security headers You can configure the Octopus Server to send certain standard HTTP security headers with each HTTP response. The Octopus Server will be secure by default; however, you can enable certain advanced HTTP security headers, like HSTS, if you desire. Learn about [HTTP security headers](/docs/security/http-security-headers). ## PCI DSS compliance We have a lot of customers running Octopus Deploy in their PCI-compliant environments.
We don't claim to be experts in PCI compliance, especially since every situation is slightly different. What we can do is offer some recommendations primarily focused on your use of Octopus Deploy and different models you can achieve with it. Learn about [PCI DSS compliance and Octopus Deploy](/docs/security/pci-compliance-and-octopus-deploy). ## Outbound requests Some components in Octopus Deploy will make outbound requests from time to time. Generally, these requests are required to perform your deployments, some of them are for things like certificate revocation checks, and some are designed to help us build a better product for you. Learn about the [outbound requests](/docs/security/outbound-requests) made by Octopus Deploy. ## Privacy Learn about our [privacy policy](https://octopus.com/privacy). We are currently preparing for the General Data Protection Regulation (GDPR) to be ready ahead of the 25 May 2018 enforcement date. ## Security disclosure policy {#disclosure-policy} No software is ever bug free, and as such, there will occasionally be security issues. Once we have fixed a verified security vulnerability we follow a practice of [responsible disclosure](https://en.wikipedia.org/wiki/Responsible_disclosure). You can view the entire list of disclosed security vulnerabilities in the [MITRE CVE database](https://www.cvedetails.com/vulnerability-list/vendor_id-16785/product_id-39115/Octopus-Octopus-Deploy.html). Learn about our [security disclosure policy](https://octopus.com/security/disclosure). ## Contact us If you have a concern regarding security with Octopus Deploy, or would like to report a security vulnerability, please send an email to [security@octopus.com](mailto:security@octopus.com). For security vulnerabilities, please include as much information as possible, with full details about how to reproduce and validate the vulnerability, preferably with a proof of concept.
If you wish to encrypt your report, please use our [PGP key](https://octopus.com/pgp-key.pub). Please give us a reasonable amount of time to correct the issue, before making it public. We will respond to your report within one business day. # Self-Hosted Octopus Source: https://octopus.com/docs/best-practices/self-hosted-octopus.md This section covers our recommendations and implementation guides for our customers who wish to self-host Octopus Deploy on their infrastructure. The topics covered are: - [Installation Guidelines](/docs/best-practices/self-hosted-octopus/installation-guidelines) - [High Availability](/docs/best-practices/self-hosted-octopus/high-availability) # SQL Database Source: https://octopus.com/docs/installation/sql-database.md ## SQL Database requirements Octopus Deploy requires a Microsoft SQL Server database to store configuration and history. Octopus works with a wide range of versions and editions of SQL Server, from a local SQL Server Express instance, all the way to an Enterprise Edition [SQL Server Failover Cluster](https://docs.microsoft.com/en-us/sql/sql-server/failover-clusters/high-availability-solutions-sql-server) or [SQL Server AlwaysOn Availability Group](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server), or even one of the hosted database-as-a-service offerings. Octopus supports versions of SQL Server that have at least 2 years of active support remaining from Microsoft. Versions approaching or past end-of-support are not supported. ### SQL Server hosting options SQL Server can be hosted on [Linux](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-overview) (including in a [container](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-docker-container-deployment)), [Windows](https://learn.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server), or in one of many managed offerings from Cloud Providers. 
The requirements are: - Must be running SQL Server 2016+ or Azure SQL - Must be located in the same data center as the servers/container hosts that host Octopus Deploy. Below are some configuration guidelines for various options: - [Self-managed on Linux or Windows](/docs/installation/sql-database/self-managed-sql-server) - [AWS RDS](/docs/installation/sql-database/aws-rds) - [Azure SQL](/docs/installation/sql-database/azure-sql) - [GCP SQL](/docs/installation/sql-database/gcp-cloud-sql) Supported editions: - Express (free) - Web - Datacenter - Standard - Enterprise - Microsoft Azure SQL Database - AWS RDS SQL Database :::div{.warning} **Warning:** Octopus does not support database mirroring or SQL Server replication. Having these features turned on may cause errors during configuration. [More information](/docs/administration/data#high-availability). ::: ### Legacy Octopus version requirements The following table outlines the minimum SQL Server version required by older Octopus Server releases. | Octopus Server | Minimum SQL Server version | Azure SQL | | ----------------- | -------------------------- | --------- | | 2020.2.x ➜ latest | SQL Server 2016+ | Supported | | 3.0 ➜ 2019.13 | SQL Server 2008+ | Supported | ## Using SQL Server Express \{#SQLServerDatabaseRequirements-UsingSQLServerExpress} The easiest and cheapest way to get started is with [SQL Server Express](https://oc.to/downloadsqlserverexpress): install the Octopus Server and SQL Server Express side-by-side on your server. This is a great way to test Octopus for a proof of concept. Depending on your needs, you might decide to keep using SQL Server Express or upgrade to another supported edition. ## Database administration and maintenance For more information about maintaining your Octopus database, please read our [database administrators guide](/docs/administration/data/octopus-database).
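Once your database server is available, you point a self-hosted Octopus Server instance at it with the `Octopus.Server.exe database` command. As a minimal sketch (the instance name, server address, and credentials below are placeholders, and the exact arguments may vary between Octopus Server versions, so check the command-line reference for your release):

```shell
# Hypothetical example: create the Octopus database on an existing SQL Server.
# Replace the instance name, server address, and credentials with your own values.
Octopus.Server.exe database --instance "OctopusServer" --connectionString "Server=sql.internal.example.com;Database=Octopus;User Id=octopus;Password=<your-password>" --create
```

The account in the connection string needs permission to create the database (or the database must already exist and the account must be its owner).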
## Learn more - [Octopus installation](/docs/installation) # Kubernetes Steps Source: https://octopus.com/docs/kubernetes/steps.md You can use the built-in Kubernetes steps to configure Kubernetes deployments. There are also [community-contributed Kubernetes step templates](https://octopus.com/integrations/kubernetes) available. | Step | Description | |-----------------------------------------------------------------------------------------|-----------------------------------------------------------------| | [Deploy Kubernetes YAML](/docs/kubernetes/steps/yaml) | Deploy to Kubernetes using YAML | | [Deploy a Helm chart](/docs/kubernetes/steps/helm) | Deploy to Kubernetes using a Helm chart | | [Deploy with Kustomize](/docs/kubernetes/steps/kustomize) | Deploy to Kubernetes with Kustomize | | [Configure and apply Kubernetes resources](/docs/kubernetes/steps/kubernetes-resources) | Creates a Kubernetes deployment, service, and ingress resources | | [Configure and apply a Kubernetes Service](/docs/kubernetes/steps/kubernetes-service) | Creates a Kubernetes service resource | | [Configure and apply a Kubernetes Ingress](/docs/kubernetes/steps/kubernetes-ingress) | Creates a Kubernetes ingress resource | # Support Source: https://octopus.com/docs/support.md To enquire about purchasing, renewing, or upgrading your Octopus license, please [contact our sales team](https://octopus.com/company/contact). If you need product help and have a paid license, or you're trialing Octopus, please [email our support team](mailto:support@octopus.com). You can also visit our [Support page](https://octopus.com/support). Sometimes when you contact support, we might ask you to perform tasks in Octopus. This section explains how to perform some of those tasks. :::div{.hint} Premium Support is available as an addition to your Octopus Enterprise license. For more information, please see our [Premium Support page](https://octopus.com/support/priority). 
::: # Kubernetes Targets Source: https://octopus.com/docs/kubernetes/targets.md To deploy your application to a Kubernetes cluster, you need Octopus Deploy to know that the cluster exists and how to access it. The cluster is your deployment destination. To represent deployment destinations, Octopus uses [deployment targets](/docs/infrastructure/deployment-targets) (a virtual entity). There are two different deployment targets for deploying to Kubernetes: the [Kubernetes Agent](/docs/kubernetes/targets/kubernetes-agent) and the [Kubernetes API](/docs/kubernetes/targets/kubernetes-api) targets. The Kubernetes API target allows the Octopus Server to connect to a cluster via the API. In this scenario, your deployment tasks run outside of a cluster, typically on a worker. The Kubernetes agent target requires the installation of a small executable in a cluster (agent). Octopus Server connects to the agent for deployments. In this scenario, your deployment tasks run inside the cluster. :::figure ![Kubernetes agent and Kubernetes API diagram](/docs/img/infrastructure/deployment-targets/kubernetes/diagram-kubernetes-targets.png) ::: The following table summarizes the key differences between the two targets.

| | [Kubernetes Agent](/docs/kubernetes/targets/kubernetes-agent) | [Kubernetes API](/docs/kubernetes/targets/kubernetes-api) |
| :--- | :--- | :--- |
| Connection method | [Polling agent](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication#polling-tentacles) in cluster | Direct API communication |
| Setup complexity | Generally simpler | Requires more setup |
| Security | No need to configure firewall<br>No need to provide external access to cluster | Depends on the cluster configuration |
| Requires workers | No | Yes |
| Requires public IP | No | Yes |
| Requires service account in Octopus | No | Yes |
| Limit deployments to a namespace | Yes | No |
| Planned support for upcoming observability features | Yes | No |
| Recommended usage scenario | • For deployments and maintenance tasks (runbooks) on Kubernetes<br>• If you want to run a worker on Kubernetes (to deploy to other targets) | If you cannot install an agent on a cluster |
| Step configuration | Simple (you need to specify target tag) | More complex (requires target tags, workers, execution container images) |
| Maintenance | • Upgradeable via Octopus Server<br>• No need to add and manage credentials | • You need to update/rotate credentials<br>• Requires worker maintenance updates |

# Tasks Source: https://octopus.com/docs/tasks.md > The task view shows waiting, running, and completed tasks Many of the main operations Octopus performs are represented by Tasks. This includes all deployments and runbook runs, and system operations such as applying retention policies. Since Tasks consume resources on the Octopus Server while they are executing, the number of Tasks which can execute at the same time is limited by a task cap. See [increasing the task cap](/docs/support/increase-the-octopus-server-task-cap) for more information. # Templates Source: https://octopus.com/docs/platform-hub/templates.md > Reusable templates for processes and projects in Platform Hub ## Overview Platform Hub provides two types of templates that help you standardize and share your deployment configuration across spaces. **[Process templates](/docs/platform-hub/templates/process-templates)** are reusable sets of deployment steps. Instead of copying and pasting deployment processes across projects, you define the steps once and share them. Teams can opt into using a process template within their deployment process, and can remove it if it no longer fits their needs. **[Project templates](/docs/platform-hub/templates/project-templates)** give platform engineering teams a way to define a golden path for how projects are structured. The deployment process is defined by the template and can't be modified in projects based on it. Through parameters exposed by the platform team, project teams can deploy using their own packages, accounts, worker pools, and target tags, while the underlying process stays consistent across every project based on the template. Project templates are currently in Alpha; see the [installation guide](/docs/platform-hub/templates/project-templates/installation-guide) to get started.
Both template types use [parameters](/docs/platform-hub/templates/parameters) to expose configurable inputs, letting you define what a project must supply while keeping the core template consistent across teams and spaces. ## Prerequisites Before creating any template, you must configure a Git repository in [Platform Hub](/docs/platform-hub). This repository stores your templates as code (OCL files) and is the single source of truth for all template changes. # Template parameters Source: https://octopus.com/docs/platform-hub/templates/parameters.md > A reference for parameters in Platform Hub templates ## Overview Parameters make it easy to reuse the same template across projects while tailoring the inputs to each project's needs. Rather than hardcoding values that differ between teams or spaces, you expose them as parameters that each project supplies. For process templates, parameters are supplied at the process template usage step level and applied during deployment or runbook runs. For project templates, parameters are set when configuring the project and define the values the template requires, such as accounts, worker pools, and target tags. Templates can manage the following as parameters: - AWS Account - Azure Account - Certificate - Channels - Checkbox - Container Feed - Dropdown - Environments - Generic OIDC Account - Google Cloud Account - Multi-line text box - Sensitive/password box - Single-line text box - Target Tags - Teams - Tenant Tags - Username Password Account - Worker Pool - Package - Project - A previous step name To create a parameter, navigate to the **Parameters** tab on a template and add a new parameter. 
## Parameter values You can set an optional default value for these parameters: - Single-line text - Multi-line text - Dropdown - Checkbox - Sensitive/password box (process templates only) - AWS Account - Azure Account - Generic OIDC Account - Google Cloud Account - Username Password Account You cannot set a default value for these parameters; they must be set inside a project: - Certificate - Worker Pool - Package - A previous step name - Target Tags - Teams - Tenant Tags - Environments - Container Feed - Channels - Project ## Template-specific behavior Some parameter behavior differs between template types. :::div{.hint} **Process templates** support sensitive parameter defaults and account parameter scoping. For more information, see [Process template parameters](/docs/platform-hub/templates/process-templates#parameters). ::: :::div{.hint} **Project templates** do not support parameter scoping or sensitive parameter values in the Alpha release. The following parameter types are not available for project templates: Multi-line text, Dropdown, Checkbox, and Sensitive/password box. For more information, see [Project template parameters](/docs/platform-hub/templates/project-templates#parameters). ::: # Publishing and sharing templates Source: https://octopus.com/docs/platform-hub/templates/publishing-and-sharing.md > How to save, publish, and share templates in Platform Hub ## Saving a template Once you've finished making changes, commit them to your Git repository. You can either **Commit** with a description or quick commit without one. :::figure ![The commit experience for a template](/docs/img/platform-hub/process-templates-commit-experience.png) ::: ## Publishing a template After committing your changes, publish the template to make the changes available.
You have three options when publishing: - **Major** changes (breaking) - **Minor** changes (non-breaking) - **Patch** (bug fixes) You can also publish a pre-release version to test the template before promoting it. :::div{.hint} The first time you publish a template, you can only publish a major or pre-release version. ::: Selecting any option increments the version number following [Semantic Versioning](https://semver.org). For minor or patch updates, projects that accept these changes will automatically upgrade to the newly published version. :::figure ![The publish experience for a template](/docs/img/platform-hub/process-templates-publishing.png) ::: ### Pre-releases If you want to test your changes before publishing a major, minor, or patch version, you can mark a template as a pre-release version. :::figure ![Marking a template as pre-release](/docs/img/platform-hub/process-template-prerelease.png) ::: ## Sharing a template You must share a template before any space can use it. Templates can be shared with all current and future spaces, or with a select few. :::div{.hint} Sharing settings can be updated at any time. ::: :::figure ![The sharing experience for a template](/docs/img/platform-hub/process-template-sharing.png) ::: # Tenants Source: https://octopus.com/docs/tenants.md > Deploy to many production instances, targets, or customers without duplication Tenants in Octopus help you deploy software to many production instances, targets, or customers without duplicating effort. This includes: - Delivering Software as a Service (SaaS) applications where each customer has its own resources - Deploying to physical locations like stores, hospitals, or data centers - Dealing with multiple cloud regions Although you can model these scenarios using multiple projects, or multiple environments, this can quickly become overwhelming. These models also don’t scale well as there is a lot of duplication. 
[Tenants](https://octopus.com/features/tenants) let you easily create customer or location-specific deployment pipelines without duplicating project configuration. You can manage separate instances of your application in multiple environments – all from a single Octopus project. This allows you to define one process and easily deploy to any number of tenants. :::figure ![](/docs/img/tenants/images/octopus-tenants-deployments.png) ::: Tenants let you: - Deploy multiple instances of your project to the same [environment](/docs/infrastructure/environments). - Manage configuration settings unique to each tenant. - Promote releases using safe, [tenant-aware lifecycles](/docs/tenants/tenant-lifecycles). - Use [tenant tags](/docs/tenants/tenant-tags) to tailor the deployment process and manage large groups of tenants. - Deploy to shared or dedicated [infrastructure](/docs/tenants/tenant-infrastructure) per tenant. - Limit access to tenants by [scoping team roles](/docs/tenants/tenant-roles-and-security) to tenants. - Create release rings to easily deploy to alpha and beta tenants. - Build simple [tenanted deployment](https://octopus.com/use-case/tenanted-deployments) processes that can scale as you add more tenants. ## When to use tenants {#when-to-use-tenants} Tenants simplify complex deployments if you're deploying your application more than once in an environment. Consider using tenants if: - You need to deploy different versions of your application to the same environment. - You're creating multiple environments of the same type. This could be multiple test environments for different testers, or multiple production environments for different customers. You don't need tenants in every deployment scenario. If you don't deploy multiple instances of your software, and don't have unique needs like features, branding, or compliance, you may not need tenanted deployments. 
Check out our [multi-tenancy guides](/docs/tenants/guides) for more detail on how to use tenanted deployments in Octopus for common scenarios. ## Types of tenants {#types-of-tenants} While it’s common to use tenants to represent the customers of your application, there are many more ways you can use tenants. Tenants can also represent: - Geographical regions or data centers - Developers, testers, or teams - Feature branches Learn more about [tenant types](/docs/tenants/tenant-types). ## Create your first tenant {#create-your-first-tenant} It’s simple to configure a new or existing Octopus project to use the Tenants feature: 1. [Create a tenant](/docs/tenants/tenant-creation) 2. [Enable tenanted deployments](/docs/tenants/tenant-creation/tenanted-deployments) 3. [Connect a tenant to a project](/docs/tenants/tenant-creation/connecting-projects) ## Tenant variables {#tenant-variables} You often want to define different variable values for each tenant, like database connection settings or a tenant-specific URL. If you use an untenanted project, you’ll have previously defined these values in the project itself. But with a tenanted project, you can set these values directly on the tenant for any connected projects. ### Tenant-provided variables are not snapshotted When you [create a release](/docs/octopus-rest-api/octopus-cli/create-release/) in Octopus Deploy, we take a snapshot of the deployment process and the current state of the [project variables](/docs/projects/variables). However, we do not take a snapshot of tenant variables. This lets you add new tenants at any time and deploy to them without creating a new release. This means any changes you make to tenant variables take immediate effect. Learn more about [tenant variables](/docs/tenants/tenant-variables) in our documentation.
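The snapshotting behavior described above can be sketched in a few lines of illustrative Python (this models the documented semantics only; it is not the Octopus API, and all names and values are hypothetical):

```python
# Illustrative model: project variables are snapshotted at release creation,
# tenant variables are resolved at deployment time.

project_variables = {"LogLevel": "Info"}
tenant_variables = {"acme": {"DatabaseName": "acme-db"}}

def create_release(version):
    # Project variables are copied into the release snapshot when it is created.
    return {"version": version, "snapshot": dict(project_variables)}

def deploy(release, tenant):
    # Tenant variables are looked up live at deployment time, not from the snapshot.
    return {**release["snapshot"], **tenant_variables[tenant]}

release = create_release("1.0.0")
project_variables["LogLevel"] = "Debug"                # changed after the release was created
tenant_variables["acme"]["DatabaseName"] = "acme-prod-db"

resolved = deploy(release, "acme")
print(resolved["LogLevel"])      # still "Info": the snapshot is fixed
print(resolved["DatabaseName"])  # "acme-prod-db": tenant changes take immediate effect
```

This is why you can add or edit tenants at any time without cutting a new release, while project variable changes only apply to releases created afterwards.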
## Tenant tags {#tenant-tags} Tenant tags help you to classify your tenants using custom tags that meet your needs, and tailor tenanted deployments for your projects and environments. Learn more about [tenant tags](/docs/tenants/tenant-tags) in our documentation. ## Troubleshooting tenanted deployments If you’re having any issues with tenants, we have useful answers to common questions about tenanted deployments in Octopus: - [Multi-tenant deployments FAQ](/docs/tenants/tenant-deployment-faq) - [Troubleshooting multi-tenant deployments](/docs/tenants/troubleshooting-multi-tenant-deployments) If you still have questions, [we’re always here to help](https://octopus.com/support). # Tenants Source: https://octopus.com/docs/projects/tenants.md [Tenants](/docs/tenants) in Octopus allow you to easily create customer specific deployment pipelines without duplicating project configuration. :::figure ![](/docs/img/projects/tenants/project-tenants-page.png) ::: From a project's Tenants page, you can see and manage the tenants that are connected to your project. You can edit the environments a tenant is connected to, disconnect a tenant, or use the [bulk tenant connection wizard](/docs/projects/tenants/bulk-connection) to connect hundreds or thousands of tenants to your project in one operation. ## Older versions The Project Tenants page and bulk tenant connection wizard are available from Octopus Deploy **2023.3** onwards. # Troubleshooting Source: https://octopus.com/docs/infrastructure/workers/kubernetes-worker/troubleshooting.md For troubleshooting common issues, please refer to the Kubernetes Agent [troubleshooting page](/docs/kubernetes/targets/kubernetes-agent/troubleshooting), as the Agent and Worker are based on the same underlying technology. # Troubleshooting Source: https://octopus.com/docs/kubernetes/live-object-status/troubleshooting.md This page will help you diagnose and solve issues with Kubernetes Live Object Status. 
## Installation \{#installation} ### The Kubernetes monitor can't connect to gRPC port 8443 Some firewalls may prevent applications from making outbound connections over non-standard ports. If this is preventing the Kubernetes monitor from connecting to your Octopus Server, configure your environment to allow outbound connections. For customers running a self-hosted instance, ensure that Octopus Server's `grpcListenPort` parameter is configured to be 8443. If using a port other than 8443, ensure the Kubernetes monitor's `server-grpc-url` parameter has been updated to match. If you haven't enabled Octopus Server's gRPC port before, the port Octopus Server uses can be [changed from the command line](/docs/octopus-rest-api/octopus.server.exe-command-line/configure/) using the `--grpcListenPort` option. :::div{.info} Support for running the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) with high availability Octopus clusters was added in v2025.4. ::: ### gRPC connections via a load balancer Octopus generates a self-signed certificate for gRPC communications, such as those between Octopus Server and the Kubernetes monitor, which requires specific load balancer configuration. Refer to the [load balancer documentation](/docs/installation/load-balancers#grpc-services) for further information. ### Certificate errors when trying to create gRPC connections The self-signed certificate is only useful for simple scenarios where the Kubernetes monitor can talk directly to Octopus Server (or is proxied with TLS passthrough). Refer to the [agent installation docs](/docs/kubernetes/targets/kubernetes-agent#grpc-certificates) for more options when using custom certificates. ## Runtime ### Failed to establish connection with Kubernetes Monitor \{#failed-to-establish–connection-with-kubernetes-monitor} Some actions, such as logs and events, require per-request communication with the Kubernetes monitor running in your cluster.
If the Kubernetes monitor cannot be accessed, follow these steps to determine why:

1. Confirm that the Kubernetes monitor is connected by reviewing the `Kubernetes monitor Status` on the Connectivity page of your Kubernetes agent
2. Confirm that the Kubernetes monitor pod is running on your cluster. This pod is located in the same namespace that the Kubernetes agent is installed in, and its name normally starts with `octopus-agent-`
3. Confirm that the Kubernetes monitor pod logs report no errors. If the logs indicate failure, please confirm that connectivity to your Octopus Server instance has not changed, and reach out to support for assistance.

In almost all cases, we have found that restarting the Kubernetes monitor pod will re-establish the connection if there are no external factors at play. Please reach out to support if you find cases of repeated, unexpected failure.

### We couldn't find a Kubernetes monitor associated with the deployment target \{#kubernetes-monitor-not-found}

This is similar to the [error above](/docs/kubernetes/live-object-status/troubleshooting#failed-to-establish–connection-with-kubernetes-monitor), but more severe. This error is shown when Octopus fails to find the registration of a Kubernetes monitor at all.

If the Kubernetes agent and monitor are both still running in your Kubernetes cluster, this means the Kubernetes monitor will need to be re-registered with Octopus. The cleanest way to do this is to delete and re-install your Kubernetes agent entirely. If there are no deployments currently running on the agent, this is a safe operation that will not affect future deployments.

If deleting your Kubernetes agent is not an option for your use case, you can also delete the Kubernetes monitor's authentication secret and restart the Kubernetes monitor pod to trigger re-registration. The authentication secret lives in the same namespace that your Kubernetes agent was installed in and has a name similar to `<agent-name>-kubernetesmonitor-authentication`.
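The checks and the re-registration workaround above can be sketched with `kubectl`. All resource names and the namespace below are assumptions; substitute the values from your own agent installation:

```bash
# Check that the monitor pod is running in the agent's namespace (names assumed)
kubectl get pods -n octopus-agent-my-agent

# Review the monitor pod's logs for errors (pod name assumed)
kubectl logs octopus-agent-my-agent-monitor-0 -n octopus-agent-my-agent

# Trigger re-registration: delete the authentication secret,
# then restart the monitor pod so it re-registers on startup
kubectl delete secret my-agent-kubernetesmonitor-authentication -n octopus-agent-my-agent
kubectl delete pod octopus-agent-my-agent-monitor-0 -n octopus-agent-my-agent
```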
## Unexpected object statuses

### Out of date or slow to update object statuses

Kubernetes Live Object Status deals with potentially large and unbounded quantities of data, and, in the case of some deployments and workloads, very frequent updates as well. As a safeguard to ensure that your Octopus Server instance remains free from interference from this new feature, we have conservative rate limits in place to reduce load spikes during larger workloads. As we progress through the early access period, we will open up limitations and increase the ceiling of how many clusters and resources can be monitored.

The rate limit is not a hard stop to messages being sent between Octopus Server and the Kubernetes monitor. Instead, we slow messages down to better handle bursty traffic.

### Why is an object out of sync? \{#why-is-an-object-out-of-sync}

Objects are reported out of sync when the manifest the Kubernetes cluster sends back to us does not match the one that Octopus applied in your deployment. This can happen for a number of reasons, including:

- Someone has made an update to the object outside of Octopus deployments
- A controller is automatically making changes to the object on your cluster
- There are additional fields in the applied manifest that Kubernetes does not recognize, which Kubernetes automatically removes from the reported live manifest

If possible, we recommend ensuring that:

- Octopus is the only entity to modify your deployments
- You craft your Kubernetes manifests to ensure that there are no invalid fields

# Troubleshooting

Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/troubleshooting.md

This page will help you diagnose and solve issues with the Kubernetes agent.
## Installation Issues

### Helm command fails with `context deadline exceeded`

The generated Helm commands use the [`--atomic`](https://helm.sh/docs/helm/helm_upgrade/#options) flag, which automatically rolls back the changes if the command fails to execute within a specified timeout (default 5 min).

If the Helm command fails, it may print an error message containing `context deadline exceeded`. This indicates that the timeout was exceeded and the Kubernetes resources did not correctly start. To help diagnose these issues, the `kubectl` commands [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/) and [`logs`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/) can be used *while the helm command is executing* to help debug any issues.

#### NFS CSI driver install command

```bash
kubectl describe pods -l app.kubernetes.io/name=csi-driver-nfs -n kube-system
```

#### Agent install command

```bash
# To get pod information
kubectl describe pods -l app.kubernetes.io/name=octopus-agent -n [NAMESPACE]

# To get pod logs
kubectl logs -l app.kubernetes.io/name=octopus-agent -n [NAMESPACE]
```

Replace `[NAMESPACE]` with the namespace in the agent installation command.

If the agent install command fails with a timeout error, it could be that:

- There is an error in the connection information provided
- The bearer token or API Key has expired or has been revoked
- The agent is unable to connect to Octopus Server due to a networking issue
- (if using the NFS storage solution) The NFS CSI driver has not been installed
- (if using a custom Storage Class) the Storage Class name doesn't match

#### Setting scriptPod Service Account annotations

To add an annotation to the Service Account for the `scriptPods`, use the following syntax:

```bash
--set scriptPods.serviceAccount.annotations."<annotationName>"="<annotationValue>"
```

**Note:** If the annotation name contains a `.`, you will need to JSON escape it (`\.`).
Below is an example of setting the role-arn annotation for an EKS cluster, where the annotation name is `eks.amazonaws.com/role-arn`.

```bash
--set scriptPods.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>"
```

## Script Execution Issues

### `Unexpected Script Pod log line number, expected: expected-line-no, actual: actual-line-no`

This error indicates that the logs from the script pods are incomplete or malformed.

When scripts are executed, any outputs or logs are stored in the script pod's container logs. The Tentacle pod then reads from the container logs to feed back to Octopus Server. There's a limit to the size of logs kept before they are [rotated](https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation) out. If a particular log line is rotated before Octopus Server reads it, then log lines are missing - hence we fail the deployment to prevent unexpected changes from being hidden.

### `The Script Pod 'octopus-script-xyz' could not be found`

This error indicates that the script pods were deleted unexpectedly - typically by being evicted or terminated by Kubernetes. Some possible causes are:

- being evicted due to exceeding its storage quota
- being moved or restarted as part of routine cluster operation

If you are using the default NFS storage, the script pod will also be deleted if the NFS server pod is restarted.

### `Pod log line is not correctly pipe-delimited: 'sh: 1: /octopus/Work/_/bootstrapRunner: Exec format error`

This error indicates that the script pod has been scheduled onto a node with a different architecture (ARM/AMD) to the Tentacle pod. There is currently a limitation that the script pods must run on nodes with the same architecture as the Tentacle pod. This is due to a bootstrap runner utility that is built and packaged into the Tentacle container image, but is run inside the script pod.
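When diagnosing this mismatch, it can help to list the architecture of each node, so you can see where the Tentacle and script pods may be scheduled:

```bash
# Show each node's OS and CPU architecture labels
kubectl get nodes -L kubernetes.io/os -L kubernetes.io/arch
```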
To mitigate this issue, you can set the pod affinity for both the Tentacle and script pods as part of a Helm update command. The steps to do this are:

1. Determine which node architecture you want to run on. This will be either `amd64` or `arm64`
2. Create a YAML file with the following content (replacing `[ARCH]` with the architecture determined in step 1):

```yaml
agent:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                  - linux
              - key: kubernetes.io/arch
                operator: In
                values:
                  - [ARCH]
scriptPods:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                  - linux
              - key: kubernetes.io/arch
                operator: In
                values:
                  - [ARCH]
```

3. In a terminal connected to the cluster with the agent installed, run a Helm upgrade referencing the YAML file created above. You can get the `HELM-RELEASE-NAME` and `NAMESPACE` from the **Connectivity** page on the **Deployment Target** or **Worker** details page

```bash
helm upgrade --atomic --namespace [NAMESPACE] --reset-then-reuse-values -f [YAML-FILENAME] [HELM-RELEASE-NAME] oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

## Health Checks and Upgrades

### `error looking up service account octopus-agent-XXX/octopus-agent-auto-upgrader: serviceaccount \"octopus-agent-auto-upgrader\" not found`

This error occurs when certain versions of Octopus Server attempt to run a health check using a Kubernetes service account that was added in a later version of the Kubernetes agent.

In version `2024.3.11946` onwards and all `2024.4` versions, Octopus Server uses the `octopus-agent-auto-upgrader` service account to perform health checks and upgrades. However, this service account was only added in versions `1.16.0` and `2.2.0` of the Kubernetes agent Helm chart.
This means that if your version of Octopus Server is trying to use that service account, but the installed agent is on a version before the one it was added in, you will receive an error like:

```text
Operation returned an invalid status code 'Forbidden', response body {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"octopus-script-xxx\" is forbidden: error looking up service account octopus-agent-XXX/octopus-agent-auto-upgrader: serviceaccount \"octopus-agent-auto-upgrader\" not found","reason":"Forbidden","details":{"name":"octopus-script-xxx","kind":"pods"},"code":403}
```

To fix this issue, the agent must be manually upgraded to a version greater than `1.16.0` or `2.2.0`, depending on the installed major version. Once this has been done, health checks and automatic upgrades will work again.

To manually upgrade, run the command below that matches the major version range of your installed agent/worker. This can be found by going to the **Connectivity** page on the **Deployment Target** or **Worker** details page and noting the **Current Version**. You should also note the **Helm Release Name** and **Namespace**, which are used in the command.

#### V1

```bash
helm upgrade --atomic --namespace [NAMESPACE] --version "1.*.*" [HELM-RELEASE-NAME] oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

#### V2

```bash
helm upgrade --atomic --namespace [NAMESPACE] --version "2.*.*" [HELM-RELEASE-NAME] oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

Executing this command in a terminal connected to the Kubernetes cluster will upgrade the agent/worker to the latest version in that range and re-enable health checks and automatic upgrades.
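If you'd rather confirm the installed chart version from a terminal than from the Octopus UI, `helm list` shows it (the namespace below is an assumption; use your agent's namespace):

```bash
# The CHART column shows the installed kubernetes-agent chart version
helm list --namespace octopus-agent-my-agent
```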
## SSL Connection Issues

### The Tentacle pod fails with the error `Checking that server communications are open failed with message The SSL connection could not be established, see inner exception`

This error indicates that the agent was unable to complete the initial SSL handshake with the Octopus Server. There are various reasons why this error may occur, but a likely cause is incompatibility with the SSL certificate configuration. Specifically, the agent **does not support SHA1RSA certificates when the Octopus Server is running on Windows Server 2012 R2**.

If your setup matches this configuration and the inner exception in the error stack includes a message like `error:0A00042E:SSL routines::tlsv1 alert protocol version`, the SSL connection issue is likely due to this certificate incompatibility. For detailed instructions on diagnosing and resolving the issue, please refer to the guide on this [page](/docs/kubernetes/targets/kubernetes-agent/troubleshooting/sha1-certificate-incompatibility).

# First Kubernetes deployment

Source: https://octopus.com/docs/kubernetes/tutorials.md

👋 Welcome to Octopus Deploy! This tutorial will walk you through sourcing YAML files from a Git repository and deploying them to a Kubernetes cluster.

:::div{.hint}
If you’re using Octopus **2024.2** or earlier, please visit the legacy [Kubernetes First deployment](https://octopus.com/docs/kubernetes/tutorials/legacy-guide) guide.
:::

## Before you start

To follow this tutorial, you need:

* **An Octopus instance.** If you don’t already have one, you can get started with a free [Octopus Cloud](https://octopus.com/free-signup) account.
* **A Kubernetes cluster** you have terminal access to. If you don’t have one, you can [install minikube locally](https://oc.to/minikube).
* A [GitHub account](https://github.com/) with access to a repository with YAML files to deploy, or you can fork our sample repository below.
#### GitHub repository

To start quickly, you can fork our sample GitHub repository, which includes pre-created YAML files. Follow the steps below to fork the repository:

1. Navigate to the **[OctoPetShop](https://github.com/OctopusSamples/OctoPetShop.git)** repository.
   :::figure
   ![Sample OctoPetShop GitHub repository](/docs/img/getting-started/first-kubernetes-deployment/images/octopetshop-repo.png)
   :::
2. In the top-right corner of the page, click **Fork**.
3. Provide an **Owner and repository name**, for example `OctoPetShop`.
4. Keep the **Copy the master branch only** checkbox selected.
5. Click **Create Fork**.
6. Wait for the process to complete (this should only take a few seconds).

Now you're ready, let’s begin deploying your first application to Kubernetes.

## Log in to Octopus

1. Log in to your Octopus instance and click **New Project**.

:::figure
![Get started welcome screen](/docs/img/getting-started/first-kubernetes-deployment/images/get-started.png)
:::

## Add project

Projects let you manage software applications and services, each with their own deployment process.

2. Give your project a descriptive name, for example, `First K8s deployment`. Octopus lets you store your deployment process, settings, and non-sensitive variables in either Octopus or a Git repository.
3. For this example, keep the default **Octopus** option selected.
4. For **Deploy to**, select the **Kubernetes** option.
5. For **Manage with**, select the **YAML files** option.
6. Click **Create Project**.

:::figure
![Add new project screen](/docs/img/getting-started/first-kubernetes-deployment/images/add-new-project.png)
:::

## Add environments

You'll need an environment to deploy to. Environments are how you organize your infrastructure into groups representing the different stages of your deployment pipeline. For example, Development, Staging, and Production.

7. Keep the default environments and click **Create Environments**.
:::figure
![Environment selection options and deployment lifecycle visuals](/docs/img/getting-started/first-kubernetes-deployment/images/select-environments.png)
:::

## Connect Octopus to your Kubernetes cluster

With Octopus Deploy, you can deploy software to:

* Kubernetes clusters
* Microsoft Azure
* AWS
* Cloud regions
* Windows servers
* Linux servers
* Offline package drop

Regardless of where you’re deploying your software, these machines and services are known as your deployment targets.

8. Select **Yes** for **Do you have a Kubernetes cluster you can deploy to today?**
9. Click **Add Agent**.

:::figure
![Connect Octopus to your cluster](/docs/img/getting-started/first-kubernetes-deployment/images/connect-octopus-to-kubernetes.png)
:::

### Name

10. Provide a name to identify this cluster in Octopus, for example, `K8s Tutorial Cluster`.

### Environments

For now, we’ll use one cluster for all environments, and use separate namespaces for each. Later, you can add additional clusters and scope them to individual environments.

11. Select **Development**, **Staging**, and **Production** from the **Environments** dropdown list.

### Target Tags

Octopus uses target tags to select which clusters (known in Octopus as deployment targets) a project should deploy to. Later, you’ll add the same target tag to your deployment process. You can deploy to multiple clusters simply by adding this tag.

12. Add a new target tag by typing it into the field. For this example, we’ll use `tutorial-cluster`.

### Advanced settings

In Advanced settings, you can provide an optional Kubernetes namespace and Storage class. These are advanced features that you can skip for this tutorial.

13. Click **Next**.

:::figure
![Add new Kubernetes Agent dialog](/docs/img/getting-started/first-kubernetes-deployment/images/add-kubernetes-agent.png)
:::

### Install NFS CSI Driver

The Kubernetes agent will run as a pod, and will need some resilient storage.
For this tutorial we can install the NFS driver, and let the agent provision some shared storage for it to use.

14. **Copy** the Helm command and run it in the terminal connected to your target cluster.
15. Click **Next**.

:::figure
![Install NFS CSI Driver dialog](/docs/img/getting-started/first-kubernetes-deployment/images/install-nfs-csi-driver.png)
:::

### Install Kubernetes Agent

Octopus generates a Helm command that you copy and paste into a terminal connected to the target cluster. After it's executed, Helm installs all the required resources and starts the agent.

16. **Copy** the Helm command.
17. After the NFS Helm command has finished running, **paste** and run the agent Helm command in the terminal connected to your target cluster.

:::figure
![Install Kubernetes Agent dialog](/docs/img/getting-started/first-kubernetes-deployment/images/install-agent.png)
:::

18. After the agent has successfully registered and passed the health check, **Close** the dialog.
19. Click **Next**.

## Create deployment process

The next step is creating your deployment process. This is where you define the steps that Octopus uses to deploy your software. Based on your project setup, the _Deploy Kubernetes YAML_ deployment step has already been added and partially configured for you.

1. Click **Thanks, got it**.

### Step Name

You can leave this as the default _Deploy Kubernetes YAML_.

### Target Tags

2. Octopus pre-selected the target tag you created while configuring the Kubernetes agent (`tutorial-cluster`).

:::figure
![Target tags expander with tutorial-cluster tag selected](/docs/img/getting-started/first-kubernetes-deployment/images/target-tags.png)
:::

### YAML source

You can source YAML files via 3 methods:

* Directly from a **Git Repository**, loaded at deployment time.
* Contained within a **Package**, like a ZIP or NuGet file.
* **Inline YAML** that you paste directly into the step.
Sourcing from a Git Repository can streamline your deployment process by reducing the steps to get your YAML into Octopus.

3. Select **Git Repository** as your YAML source.

:::figure
![YAML source expander where users can select where to source YAML files from](/docs/img/getting-started/first-kubernetes-deployment/images/yaml-source.png)
:::

### Repository URL

4. Enter the full URL to the Git repository where you store the YAML files you want to deploy, for example, `https://github.com/your-user/OctoPetShop.git`

:::figure
![Repository URL expander where the user's YAML files are stored](/docs/img/getting-started/first-kubernetes-deployment/images/repo-url.png)
:::

### Git repository details

5. Select **Git credentials** and click the **+** icon to add new credentials.
6. Enter a name for your Git credential so you can identify it later.
7. Provide your GitHub username.

:::figure
![A drawer interface where users can configure Git credentials](/docs/img/getting-started/first-kubernetes-deployment/images/git-credentials.png)
:::

### Generate GitHub personal access token

GitHub.com now requires token-based authentication (this excludes GitHub Enterprise Server). Follow the steps below to create a personal access token, or learn more in the [GitHub documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens).

1. Navigate to [github.com](http://github.com) and log in to your account.
2. Click your profile picture in the top right corner.
3. Click **Settings**.
4. Scroll down to the bottom of the page and click **Developer settings**.
5. Under Personal access tokens, click **Fine-grained tokens**.
6. Click **Generate new token**.
7. Under **Token name**, enter a name for the token.
8. Under **Expiration**, provide an expiration for the token.
9. Select a Resource Owner.
10. Under **Repository access**, choose **Only select repositories** and select the **OctoPetShop** repository from the dropdown.
11.
Click on **Repository permissions**, scroll down to **Contents**, and select **Read-only**.
12. Scroll down to the Overview, and you should have 2 permissions for one of your repositories (contents and metadata).
13. Click **Generate token** and copy the token.

:::figure
![A GitHub settings page where users can manage permissions for fine-grained tokens](/docs/img/getting-started/first-kubernetes-deployment/images/generate-token.png)
:::

### Git repository details

8. Paste the token into Octopus's personal access token field.
9. **Save** your Git credential. Your new Git credential should now be selected in the **Authentication** dropdown.

:::figure
![Authentication expander with a Git repository selected](/docs/img/getting-started/first-kubernetes-deployment/images/authentication.png)
:::

### Branch settings

10. Provide the default branch you want to use. For example, `master` if you’re using the sample repo.

:::figure
![Branch setting expander where user can configure default branch](/docs/img/getting-started/first-kubernetes-deployment/images/branch-settings.png)
:::

### File Paths

11. Enter the relative path(s) to the YAML files you want to deploy to your cluster. If you’re using the sample repo, use `k8s/*.yaml` to select all YAML files in the k8s root folder.

:::figure
![File paths expander where user can configure path to YAML files](/docs/img/getting-started/first-kubernetes-deployment/images/file-paths.png)
:::

### Namespace

12. Specify the namespace you want to deploy your YAML files into, for example, `k8s-tutorial`. If the namespace doesn’t exist yet, Octopus will create it during the deployment.

You can skip the other sections of this page for this tutorial. **Save** your step and you can move on to create and deploy a release.

## Release and deploy

### Create release

A release is a snapshot of the deployment process and the associated assets (Git resources, variables, etc.) as they exist when the release is created.

1.
Click the **Create Release** button. You’ll see a summary of the Git resources you provided in the _Deploy Kubernetes YAML_ step.

:::figure
![Release summary showing Git resources](/docs/img/getting-started/first-kubernetes-deployment/images/release-summary.png)
:::

2. Click **Save**.

### Execute deployment

Deployments typically occur in a defined environment order (for example, Development ➜ Staging ➜ Production), starting with the first one. Later you can configure Lifecycles with complex promotion rules to accurately reflect how you want to release software.

1. Click **Deploy to Development** to deploy to the development environment associated with your cluster.
2. Review the preview summary and when you’re ready, click **Deploy**. Your first deployment may take slightly longer as we download and extract the necessary tools to run steps.

### Watch the deployment complete

The **Task Summary** tab will show you in real-time how the deployment steps are progressing. You can also view the status of Kubernetes resources being deployed on the cluster itself.

3. Navigate to the **Object Snapshot** view in the **Kubernetes** tab to see the real-time status of your Kubernetes objects as the deployment progresses.

:::figure
![Object Status dashboard showing a successful deployment](/docs/img/getting-started/first-kubernetes-deployment/images/deployment-success.png)
:::

You successfully completed your first deployment to Kubernetes! 🎉

### Monitor and troubleshoot

4. If you're deploying to the Kubernetes Agent, keep monitoring your application health using the [live object status](/docs/kubernetes/live-object-status) feature.
:::figure
![A screenshot of the Space dashboard showing live status](/docs/img/kubernetes/live-object-status/live-status-page.png)
:::

As you continue to explore Octopus Deploy, consider diving deeper into powerful features like [variables](https://octopus.com/docs/projects/variables), joining our [Slack community](http://octopususergroup.slack.com), or checking out our other tutorials to expand your knowledge.

## Additional Kubernetes resources

* [Deploy with the Kustomize step](https://octopus.com/docs/deployments/kubernetes/kustomize)
* [Deploy a Helm chart](https://octopus.com/docs/deployments/kubernetes/helm-update)
* [Using variables for Kubernetes without breaking YAML](https://octopus.com/blog/structured-variables-raw-kubernetes-yaml)

# Moving Octopus Server folders

Source: https://octopus.com/docs/administration/managing-infrastructure/server-configuration-and-file-storage/moving-octopus-server-folders.md

If you need to move any of the folders used by the Octopus Server, you can follow the instructions on this page to move individual folders and reconfigure the Octopus Server to use the new folder locations.

## Move Octopus home folder {#move-octopus-home-folder}

If you need to move the Octopus home folder, you can do that using the command-line as described below:

**Usage**

```powershell
Usage: Octopus.Server configure [<options>]
```

Where `[<options>]` is:

**Options**

```powershell
--instance=VALUE    Name of the instance to use
--home=VALUE        Home directory

Or one of the common options:

--console           Don't attempt to run as a service, even if the user is non-interactive
--nologo            Don't print title or version information
```

A PowerShell script showing the steps is set out below. You need to change the variables to match your Octopus installation, and you may wish to run each step separately to deal with any issues like locked files.
```powershell
$oldHome = "C:\Octopus"
$newHome = "C:\YourNewHomeDir"
$octopus = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"
$newConfig = $newHome + "\OctopusServer.config"

& "$octopus" service --stop
mv $oldHome $newHome
& "$octopus" delete-instance --instance=OctopusServer
& "$octopus" create-instance --instance=OctopusServer --config=$newConfig
& "$octopus" configure --home="$newHome"
& "$octopus" service --start
```

## Move other Octopus Server folders {#move-other-folders}

If you need to move folders other than the Octopus home folder, you can do that using the command-line as described below:

**Usage**

```powershell
Octopus.Server path [<options>]
```

Where `[<options>]` is any of:

**Options**

```powershell
--instance=VALUE          Name of the instance to use
--clusterShared=VALUE     Set the root path where shared files will be stored for Octopus clusters
--nugetRepository=VALUE   Set the package path for the built-in NuGet repository
--artifacts=VALUE         Set the path where artifacts are stored
--imports=VALUE           Set the path where imported zip files are stored
--taskLogs=VALUE          Set the path where task logs are stored
--eventExports=VALUE      Set the path where event audit logs are exported
--telemetry=VALUE         Set the path where telemetry is stored

Or one of the common options:

--console                 Don't attempt to run as a service, even if the user is non-interactive
--nologo                  Don't print title or version information
```

## Move NuGet repository folder {#move-nuget-folder}

A PowerShell script showing the steps is set out below. You need to change the variables to match your Octopus installation, and you may wish to run each step separately to deal with any issues like locked files. The new path will apply to existing packages in the repository, so it is important to move the packages.
```powershell
$oldNuGetRepository = "C:\Octopus\Packages"
$newNuGetRepository = "C:\YourNewHomeDir\Packages"
$octopus = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"

& "$octopus" service --stop
mv $oldNuGetRepository $newNuGetRepository
& "$octopus" path --nugetRepository="$newNuGetRepository"
& "$octopus" service --start
```

The restart of the service will re-index the directory. If any files are missing, they will go missing from the internal repository, and in turn from your releases, so be sure that all files are moved.

The above script will take the server offline for the duration of the move. If there are a large number of packages, this can take quite some time, and taking the server offline for the duration may not be possible. However, to prevent the server re-indexing all the packages, the packages should not be removed from the expected folder while the server is running. Therefore, an alternative approach is to:

1. Copy the folder while the server is running.
1. Stop the server.
1. Use a file mirroring tool like `robocopy` to ensure the new folder reflects the files added and removed while the copy was running.
1. Update the path in the Octopus config.
1. Start the server.

## Move the artifacts folder {#move-artifacts-folder}

A PowerShell script showing the steps is set out below. You need to change the variables to match your Octopus installation, and you may wish to run each step separately to deal with any issues like locked files.

```powershell
$oldArtifacts = "C:\Octopus\Artifacts"
$newArtifacts = "C:\YourNewHomeDir\Artifacts"
$octopus = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"

& "$octopus" service --stop
mv $oldArtifacts $newArtifacts
& "$octopus" path --artifacts="$newArtifacts"
& "$octopus" service --start
```

## Move the task logs folder {#move-task-logs-folder}

A PowerShell script showing the steps is set out below.
You need to change the variables to match your Octopus installation, and you may wish to run each step separately to deal with any issues like locked files.

```powershell
$oldTaskLogs = "C:\Octopus\TaskLogs"
$newTaskLogs = "C:\YourNewHomeDir\TaskLogs"
$octopus = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"

& "$octopus" service --stop
mv $oldTaskLogs $newTaskLogs
& "$octopus" path --taskLogs="$newTaskLogs"
& "$octopus" service --start
```

## Move the event exports folder {#move-event-exports-folder}

A PowerShell script showing the steps is set out below. You need to change the variables to match your Octopus installation, and you may wish to run each step separately to deal with any issues like locked files.

```powershell
$oldEventExports = "C:\Octopus\EventExports"
$newEventExports = "C:\YourNewHomeDir\EventExports"
$octopus = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"

& "$octopus" service --stop
mv $oldEventExports $newEventExports
& "$octopus" path --eventExports="$newEventExports"
& "$octopus" service --start
```

## Move the telemetry folder {#move-telemetry-folder}

A PowerShell script showing the steps is set out below. You need to change the variables to match your Octopus installation, and you may wish to run each step separately to deal with any issues like locked files.

```powershell
$oldTelemetry = "C:\Octopus\Telemetry"
$newTelemetry = "C:\YourNewHomeDir\Telemetry"
$octopus = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"

& "$octopus" service --stop
mv $oldTelemetry $newTelemetry
& "$octopus" path --telemetry="$newTelemetry"
& "$octopus" service --start
```

## Move the imports folder {#move-imports-folder}

A PowerShell script showing the steps is set out below. You need to change the variables to match your Octopus installation, and you may wish to run each step separately to deal with any issues like locked files.
```powershell
$oldImports = "C:\Octopus\Imports"
$newImports = "C:\YourNewHomeDir\Imports"
$octopus = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"
& "$octopus" service --stop
mv $oldImports $newImports
& "$octopus" path --imports="$newImports"
& "$octopus" service --start
```

# In place upgrade (Install Over 2.6.5)

Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/in-place-upgrade-install-over-2.6.5.md

You can perform an in place upgrade from **Octopus 2.6.5** to **Octopus 2018.10 LTS**, but you need to upgrade your Tentacles first. Due to the new communication method, you won't be able to communicate with your upgraded Tentacles until you upgrade your server. However, if you upgrade your server before all Tentacles are correctly updated, you will have to upgrade them manually, or roll your server back to **Octopus 2.6.5** and try again.

## Step by step

To perform an in-place upgrade, follow these steps carefully:

### 1. Back up your Octopus 2.6.5 database and Master Key

See the [Backup and restore](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/backup-2.6/) page for instructions on backing up your database.

### 2. Use Hydra to automatically upgrade your Tentacles

Hydra is a tool we've built that will help you update your Tentacles to the latest version. It is particularly useful when migrating from 2.6.5 to 2018.10 LTS because the communication methods have changed.

:::div{.problem}
This is the point of no return. When your Tentacles are upgraded to 3.x your 2.6.5 server will not be able to communicate with them. We strongly recommend testing Hydra against a small subset of "canary" machines before upgrading the rest of your machines. The best way to do this is:

1. Create a new "canary" machine role and assign it to a few machines.
2. Set the Update Octopus Tentacle step to run on machines with the "canary" role.
3. Once you are confident the Tentacle upgrade works as expected, you can use Hydra to upgrade all remaining machines.
:::

#### How does Hydra work?

Hydra consists of two parts:

1. A package that contains the latest Tentacle MSI installers.
2. An **Octopus 2.6.5** step template that performs the upgrade across your environments.

To account for issues with communicating with a Tentacle that has been 'cut off' from its Octopus Server, the Hydra process connects to the Tentacle and creates a scheduled task on the Tentacle machine. If it is able to schedule the task, it considers that install a success. The task runs one minute later and does the following:

1. Find the Tentacle services.
2. Stop all Tentacles (if they're running).
3. Run the MSI.
4. Update the configs for any polling Tentacles.
5. Start any Tentacles that were running before the upgrade.

With just one Tentacle service this should be a very quick process, but we cannot estimate how long it may take with many Tentacle services running on the same machine.

#### Common problems using Hydra

The scheduled task is set to run as `SYSTEM` to ensure the MSI installation will succeed. If your Tentacles are running with restricted permissions, they may not be able to create this scheduled task. **The only option is to upgrade your Tentacles manually.**

Hydra performs a reinstall of each Tentacle. As part of the reinstall, the service account is reset to `Local System`. If you need your Tentacles to run under a different account, you will have to make the change after the upgrade completes (after you've re-established a connection from 2018.10 LTS). You can do this manually, or using the following script:

```powershell
Tentacle.exe service --instance "Tentacle" --reconfigure --username=DOMAIN\ACCOUNT --password=your-password --start --console
```

#### Let's upgrade these Tentacles!
To use Hydra, follow these steps:

:::div{.hint}
These steps should be executed from your **Octopus 2.6.5** server to your 2.6 Tentacles.
:::

1. Download the latest Hydra NuGet package from [https://octopus.com/downloads/latest/Hydra](https://octopus.com/downloads/latest/Hydra).
2. Use the Upload Package feature of the library to upload the OctopusDeploy.Hydra package to the built-in NuGet repository on your **Octopus 2.6.5** server.
   :::figure
   ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278019.png)
   :::
3. Import the [Hydra step template](https://library.octopus.com/step-templates/d4fb1945-f0a8-4de4-9045-8441e14057fa/actiontemplate-hydra-update-octopus-tentacle) from the Community Library.
   :::figure
   ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278018.png)
   :::
4. Create a [new project](/docs/projects) with a single "Update Octopus Tentacle" step from the step template.
   1. Ensure you choose or create a [Lifecycle](/docs/releases/lifecycles) that allows you to deploy to all Tentacles.
   2. Ensure you set the Update Octopus Tentacle step to run for all appropriate Tentacles.
   3. Set the `Server Mapping` field:
      - If you only use listening Tentacles you can leave the `Server Mapping` field blank.
      - If you are using any polling Tentacles, add the new **Octopus 2018.10 LTS** server address (including the polling TCP port) in the Server Mapping field. See below for examples.

   :::div{.hint}
   **Server mapping for Polling Tentacles**

   It is very important you get this value correct. An incorrect value will result in a polling Tentacle that can't be contacted by either a 2.6.5 or a 2018.10 LTS server. Several different scenarios are supported:

   1. A single Polling Tentacle instance on a machine pointing to a single Octopus Server (**the most common case**): just point to the new server's polling address `https://newserver:newport`, like `https://octopus3.mycompany.com:10934`.
   2. Multiple Polling Tentacle instances on the same machine pointing to a single Octopus Server: just point to the new server's polling address `https://newserver:newport`, like `https://octopus3.mycompany.com:10934`, and Hydra will automatically update all Tentacles to point to the new server's address.
   3. Multiple Polling Tentacle instances on the same machine pointing to different Octopus Servers (**a very rare case**): use the syntax `https://oldserver:oldport=>https://newserver:newport,https://oldserver2:oldport2/=>https://newserver2:newport2`, where each old-to-new pair is separated by a comma. Hydra matches the address before the `=>` and replaces it with the address after it.

   Click the ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278017.png) help button for more detailed instructions.
   :::

   :::figure
   ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278014.png)
   :::

   :::figure
   ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278015.png)
   :::
5. Create a release and deploy. The deployment should succeed, and one minute later the Tentacles will be upgraded.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278010.png)
:::

### 3. Verify the upgrade worked

When the Hydra task runs on a Tentacle machine, that machine should no longer be able to communicate with the **Octopus 2.6.5** server. You can verify this by navigating to the Environments page and clicking **Check Health**.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278012.png)
:::

After successfully updating your Tentacles, you should see this check fail from your 2.6.5 server.
:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278011.png)
:::

We recommend connecting to some of your Tentacle machines and examining the Octopus Tentacle binaries to ensure they have been upgraded. You should also ensure the service is running (even though it will not be able to communicate with the server).

:::div{.hint}
If you have multiple Tentacles running on the same server, an update to one will result in an update to **all** of them. This is because there is only one copy of the Tentacle binaries, even with multiple instances configured.
:::

### 4. Install Octopus 2018.10 LTS on your Octopus Server

:::div{.success}
**Upgrade to the latest version**

When upgrading to **Octopus 2018.10 LTS** please use the latest version available. We have been constantly improving the **Octopus 2.6.5** to **Octopus 2018.10 LTS** data migration process while adding new features and fixing bugs.
:::

See the [Installing Octopus 2018.10 LTS](/docs/installation) page for instructions on installing a new **Octopus 2018.10 LTS** instance. After installing the MSI, you will be presented with an upgrade page.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278008.png)
:::

Click "Get started..." and set up your database connection. You may need to grant permission to the `NT AUTHORITY\SYSTEM` account at this stage.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278007.png)
:::

Click Next, and then Install to install the **Octopus 2018.10 LTS** server over the **Octopus 2.6.5** instance.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278006.png)
:::

### 5. Restore the Octopus 2.6.5 database using the Migration Tool

After upgrading, the Octopus Manager will prompt to import your **Octopus 2.6.5** database.
Click the *Import data...* button and follow the prompts to import your **Octopus 2.6.5** data.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278005.png)
:::

See the [Migrating data from Octopus 2.6.5 to 2018.10 LTS](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/migrating-data-from-octopus-2.6.5-2018.10lts) page for more detailed instructions on importing your **Octopus 2.6.5** database backup into **Octopus 2018.10 LTS**.

:::div{.hint}
**Migration taking a long time?**

By default, we migrate everything from your backup, including historical data. You can use the `maxage=` argument when executing the migrator to limit the number of days of history to keep. For example, `maxage=90` will keep 90 days of historical data and ignore anything older. To see the command syntax, click the **Show script** link in the wizard.
:::

:::div{.hint}
**Using the built-in Octopus NuGet repository?**

If you use the built-in [Octopus NuGet repository](/docs/packaging-applications/package-repositories) you will need to move the files from your **Octopus 2.6.5** server to your **Octopus 2018.10 LTS** server. They are not part of the backup. In a standard **Octopus 2.6.5** install the files can be found under `C:\Octopus\OctopusServer\Repository\Packages`. You will need to transfer them to `C:\Octopus\Packages` on the new server.

Once the files have been copied, go to **Library ➜ Packages ➜ Package Indexing** and click the `RE-INDEX NOW` button. This process runs in the background, so if you have a lot of packages it could take a while (5-20 mins) to show in the UI or be usable for deployments.
:::

### 6. Verify connectivity between the 2018.10 LTS server and your Tentacles

Log in to your new **Octopus 2018.10 LTS** server and run health checks on all of your environments. If the upgrade completed successfully, they should succeed.
:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278009.png)
:::

If one or more health checks do not succeed after a few attempts, see the Troubleshooting section to identify possible issues.

### Optionally clean up your Octopus home folder

We leave some files used by **Octopus 2.6.5** in place so you can roll back if necessary. After the upgrade is complete these files will never be used again and can be safely deleted. You can follow the instructions on this [page](/docs/administration/managing-infrastructure/server-configuration-and-file-storage\#ServerConfigurationAndFileStorage-CleanUp) to clean up files left over from your **Octopus 2.6.5** to **Octopus 2018.10 LTS** upgrade.

# ASP.NET Core webapp

Source: https://octopus.com/docs/deployments/dotnet/netcore-webapp.md

ASP.NET Core is fast becoming the de-facto web framework in .NET. Compared to earlier versions of ASP.NET, it contains many changes to how applications are built and how they are run. If you are new to ASP.NET Core, you can start with the [Get started with ASP.NET Core tutorial](https://docs.microsoft.com/en-us/aspnet/core/getting-started/?view=aspnetcore-5.0).

## Publishing and packing the website {#publishing-and-packing-the-website}

When your application is ready, it needs to be published:

```powershell
# Publish the application to a folder
dotnet publish source/MyApp.Web --output published-app --configuration Release
```

When your application has been published, you need to package it:

```powershell
# Package the folder into a ZIP
octopus package zip create --id 'MyApp.Web' --version '1.0.0' --base-path 'published-app'
```

For more information about packaging applications see [Creating packages using the Octopus CLI](/docs/packaging-applications/create-packages/octopus-cli).
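The two commands above can be wired into a single build script. The sketch below is illustrative, not part of the Octopus docs: the project path, package ID, and version are the example values from this section, and the script defaults to a dry run that only prints the commands, so the wiring can be checked before the `dotnet` and `octopus` CLIs are installed.

```shell
#!/usr/bin/env sh
# Sketch: publish, then pack, as one script. Defaults to a dry run
# (prints the commands); set DRY_RUN=0 to actually execute them.
set -e

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$*"   # dry run: print the command instead of executing it
  else
    "$@"        # real run: execute the command
  fi
}

run dotnet publish source/MyApp.Web --output published-app --configuration Release
run octopus package zip create --id MyApp.Web --version 1.0.0 --base-path published-app
```

With `DRY_RUN` unset the script just prints the two commands it would run; `DRY_RUN=0 sh build.sh` performs the real publish and pack.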
If you are using the [built-in repository](/docs/packaging-applications/package-repositories/built-in-repository/#pushing-packages-to-the-built-in-repository) you can create a [zip file](/docs/packaging-applications/create-packages/octopus-cli#create-zip-packages). When you have your generated nupkg or zip file, it needs to be [pushed to a repository](/docs/packaging-applications/package-repositories). If you are using TeamCity, you can use the [new TeamCity plugin for dotnet commands](https://github.com/JetBrains/teamcity-dnx-plugin).

:::div{.warning}
**OctoPack and .NET Core**

OctoPack is not compatible with .NET Core applications. If you want to package .NET Core applications, see [create packages with the Octopus CLI](/docs/packaging-applications/create-packages/octopus-cli).
:::

## Deployment {#DeployingASP.NETCoreWebApplications-Deployment}

ASP.NET Core applications can either run as a command line program with Kestrel, or under IIS ([which also uses Kestrel - check out the book for details](https://leanpub.com/aspnetdeployment)).

:::div{.hint}
See the [ASP.NET Core IIS documentation](https://docs.asp.net/en/latest/publishing/iis.html#install-the-http-platform-handler) for instructions on setting up IIS for ASP.NET Core.
:::

When running under IIS, ensure the .NET CLR Version is set to `No Managed Code`.

## Antiforgery cookie {#DeployingASP.NETCoreWebApplications-AntiforgeryCookie}

The `.AspNetCore.Antiforgery` cookie created by ASP.NET Core uses the application path to generate its hash. By default, Octopus deploys to a new path every time, which causes a new cookie to be set on every deploy. This results in many unneeded cookies in the browser. See this [blog post](http://blog.novanet.no/a-pile-of-anti-forgery-cookies/) for more details.
To change this behavior, set the Antiforgery cookie name in your `startup.cs` like this:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddAntiforgery(opts => opts.CookieName = "AntiForgery.MyAppName");
}
```

## Cookie authentication in ASP.NET Core 2 {#DeployingASP.NETCoreWebApplications-AuthCookie}

Similar to antiforgery cookies, cookie authentication in ASP.NET Core 2 uses Microsoft's data protection API, which can use the application path to isolate applications from one another. This can cause older cookies to simply stop working. To change this behavior, you need to set the application name in your `startup.cs` like this:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddDataProtection().SetApplicationName("my application");
}
```

## Configuration {#DeployingASP.NETCoreWebApplications-Configuration}

Refer to [structured configuration variables](/docs/projects/steps/configuration-features/structured-configuration-variables-feature) for details on how to set up configuration.

## Learn more

- Generate an Octopus guide for [ASP.NET Core and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?application=ASP.NET%20Core).

# Configuring target machine

Source: https://octopus.com/docs/deployments/nginx/configure-target-machine.md

:::div{.hint}
This guide can be used with an AWS AMI instance of Ubuntu 14.04 LTS or an Azure VM running Ubuntu 14.04 LTS. If you want to use a different base instance, there may be some slightly different steps you need to take during the configuration.
:::

Deploying projects over [SSH](/docs/infrastructure/deployment-targets/linux/ssh-target) has some slightly different requirements from a standard Tentacle. Although you don't need to install and run a Tentacle service, some configuration is required to allow Calamari to run on non-Windows systems.
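Before working through the install steps, it can help to see which of the required tools a target already has. The sketch below is not part of the guide: the tool list (`dotnet`, `nginx`, `systemctl`) is an assumption drawn from the steps that follow, so adjust it for your own targets.

```shell
#!/usr/bin/env sh
# Sketch: report which of the tools this guide installs are already present.
# The tool list is an assumption based on the steps below; edit as needed.

check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found ($(command -v "$1"))"
  else
    echo "$1: missing"
  fi
}

for tool in dotnet nginx systemctl; do
  check_tool "$tool"
done
```

Running it over SSH (for example `ssh user@target 'sh -s' < check-tools.sh`) gives a quick inventory before installing anything.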
## Install .NET Core {#ConfigureTargetMachine-InstallDotNetCore}

:::div{.hint}
**Authoritative documentation**

The best and most up-to-date guide to installing .NET will continue to be the [.NET website](https://www.microsoft.com/net/download/linux-package-manager/ubuntu16-04/runtime-current). The instructions there may change in future versions, so check their documentation for more detail.
:::

### Register Microsoft key and feed

Before installing .NET, you'll need to register the Microsoft key, register the product repository, and install required dependencies. This only needs to be done once per machine. Open a command prompt and run the following commands:

```bash
wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
```

### Install the ASP.NET Core runtime

Update the products available for installation, then install the ASP.NET Core runtime. In your command prompt, run the following commands:

```bash
sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install aspnetcore-runtime-2.1
```

## Install NGINX {#ConfigureTargetMachine-InstallNginx}

:::div{.hint}
**Authoritative Documentation**

The best and most up-to-date guide to installing NGINX will continue to be the [NGINX website](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/). The instructions there may change in future versions, so check their documentation for more detail.
:::

### Download the key used to sign NGINX packages and the repository, and add it to the `apt` program's key ring:

```bash
$ sudo wget https://nginx.org/keys/nginx_signing.key
$ sudo apt-key add nginx_signing.key
```

### Edit the **/etc/apt/sources.list** file, for example with `vi`:

```bash
$ sudo vi /etc/apt/sources.list
```

### Add these lines to **sources.list** to name the repositories from which the NGINX Open Source source can be obtained:

```
deb https://nginx.org/packages/mainline/ubuntu/ CODENAME nginx
deb-src https://nginx.org/packages/mainline/ubuntu/ CODENAME nginx
```

where:

- The `/mainline` element in the pathname points to the latest mainline version of NGINX Open Source; delete it to get the latest stable version.
- `CODENAME` is the codename of an Ubuntu release.

For example, to get the latest mainline package for Ubuntu 16.04 (*xenial*), add:

```bash
deb https://nginx.org/packages/mainline/ubuntu/ xenial nginx
deb-src https://nginx.org/packages/mainline/ubuntu/ xenial nginx
```

Save the changes and quit `vi` (press **ESC** and type `wq` at the `:` prompt).

### Install NGINX open source:

```bash
$ sudo apt-get remove nginx-common
$ sudo apt-get update
$ sudo apt-get install nginx
```

### Start NGINX open source:

```bash
$ sudo nginx
```

### Verify that NGINX open source is up and running:

```bash
$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.13.8
```

## Add user {#ConfiguringTargetMachine-AddUser}

Rather than connecting and deploying your application as the root user, you should create a custom user account to be used for deployments. Its login credentials can then be easily revoked without affecting other users who access the machine, and resources can be assigned more granularly, allowing greater control if the account is used maliciously.

:::div{.hint}
**Security**

Entire books have been published on the subject of security on Unix based systems.
These steps are intended to provide a basic level of security, while making sure you stop and consider the role that security plays in your environment.
:::

In this case we are going to create a simple user account with a password, which will be used both for the deployment process and for running the application process itself. In your case you may want to use different accounts for each task.

Replace **<the-password-you-want>** with a random password of your choice, and remember this value as it will be needed later when configuring the target on the Octopus Server:

```bash
sudo useradd -m octopus
echo octopus:<the-password-you-want> | sudo chpasswd
```

By default, the AWS Ubuntu AMI only allows authentication via SSH keys and not passwords. Although passwords are typically less secure, for the purposes of this guide we are going to enable their use.

### Enable password authentication in AWS

```bash
sudo sed -i.bak -e s/'PasswordAuthentication no'/'PasswordAuthentication yes'/g /etc/ssh/sshd_config
sudo restart ssh
```

### Enable 'sudo' access without password {#enable-sudo-without-password}

By default, `sudo` requires the user to enter their password, but this won't work in a non-interactive session such as that of a running deployment. To successfully use the new *NGINX* feature in Octopus Deploy we need `sudo` access without a password prompt for a few commands: `cp`, `mv`, `rm`, and `nginx`; for this guide we also need to add `systemctl` to the list of required commands. We therefore need to configure this for the user we will be using for deployments. See [Sudo commands](/docs/infrastructure/deployment-targets/linux/sudo-commands) for more details on how to disable the password prompt for all commands.
To enable `sudo` without a password prompt for only the commands NGINX requires, add the following lines to your sudoers file and then save the file:

```bash
Cmnd_Alias REQUIRED_NGINX_COMMANDS = /bin/cp, /bin/mv, /bin/rm, /bin/systemctl, /usr/sbin/nginx
octopus ALL=(ALL) NOPASSWD: REQUIRED_NGINX_COMMANDS
```

## Learn more

- Generate an Octopus guide for [NGINX and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=NGINX).

# Dynamically selecting packages at deployment time

Source: https://octopus.com/docs/deployments/packages/dynamically-selecting-packages.md

When configuring a step in Octopus which uses a package, you are able to use variables to dynamically select the package feed and/or the package ID when your project is deployed.

## Example scenarios

We typically recommend using a static package configuration wherever possible - this is the scenario Octopus optimizes for. However, there are some scenarios where dynamically selected packages are a perfect fit.

### Different package feed for each environment

You may want to use a different package feed for each environment. This can help when you have a slow connection between your main package feed and your deployment environments. In this case you could configure a package feed in your remote environments, and instruct Octopus to use the best package feed for each deployment.

:::figure
![Defining the feed value as a variable on the package step](/docs/img/deployments/packages/images/dynamic-feed.png)
:::

For example, you can bind the Package Feed to `#{FeedId}` and set the following environment-scoped variables:

| Variable | Value                | Scope                   |
| -------- | -------------------- | ----------------------- |
| `FeedId` | `my-dev-feed`        | Development environment |
| `FeedId` | `my-test-feed`       | Test environment        |
| `FeedId` | `my-production-feed` | Production environment  |

When deploying to the **Test Environment** Octopus will use the `my-test-feed` package feed. When deploying to the **Production Environment** Octopus will use the `my-production-feed` package feed.
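To make the scoping concrete, the sketch below mirrors the environment-scoped `FeedId` values above as a simple lookup: given an environment name, it returns the feed the `#{FeedId}` binding would resolve to. This is purely illustrative; Octopus evaluates the binding itself, and its real scoping rules are richer than a lookup.

```shell
#!/usr/bin/env sh
# Illustrative only: mirrors the environment-scoped FeedId values above.
# Octopus resolves the #{FeedId} binding itself; this just shows the mapping.

resolve_feed() {
  case "$1" in
    Development) echo "my-dev-feed" ;;
    Test)        echo "my-test-feed" ;;
    Production)  echo "my-production-feed" ;;
    *)           echo "no FeedId value is scoped to '$1'" >&2; return 1 ;;
  esac
}

resolve_feed Test    # prints my-test-feed
```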
:::div{.info}
You will need to organize a way to synchronize the package feeds so when you actually deploy your project, the appropriate packages are in the correct feeds. Octopus will not do this for you.
:::

### Different package for each environment or tenant

You may want to build a different package for each environment and/or tenant. Again, we recommend avoiding this complexity where possible, but a good example of where it makes sense is when you are providing a tenanted service where each tenant can provide their own styles and assets. In this scenario you could build your common packages, and then build one package per tenant containing their styles/assets, like this:

```
MyApp.Web.Common.3.1.6.zip
MyApp.Web.TenantA.3.1.6.zip
MyApp.Web.TenantB.3.1.6.zip
```

Now you can configure Octopus to deploy your common package just like normal, but add one more step to deploy the tenant-specific package using a variable binding for the Package ID:

```
Package ID = MyApp.Web.#{TenantAlias}
```

You can now create the `3.1.6` release for the `MyApp.Web` project, but have Octopus deploy the correct styles/assets package for each tenant at deployment time.

:::figure
![Dynamic Package ID](/docs/img/deployments/packages/images/dynamic-package-id.png)
:::

:::div{.info}
In this example we recommend creating a [tenant-specific variable](/docs/tenants/tenant-variables) called something like `TenantAlias`, where each tenant will provide a value. You could have used a built-in variable like `#{Octopus.Deployment.Tenant.Name}`, but then your tenant name would be tightly coupled to your Package ID, and changing the tenant's name could break your deployments.
:::

:::div{.hint}
Would you like Octopus to deploy a specific version of your application code, but just grab the latest styles/assets package for each tenant? We have an [open GitHub Issue](https://github.com/OctopusDeploy/Issues/issues/2755) discussing this right now.
:::

## Which variables can be used?
You can use any values which are either unscoped/global, or scoped to environments, tenants, or tenant tags. Variable values scoped to other things like [target tags](/docs/infrastructure/deployment-targets/target-tags) and deployment targets are not supported.

## Tradeoffs

There are some downsides to using dynamic packages. First, it becomes complex quite quickly, so it should be used only when necessary.

### Try to minimize dynamic packages

Where possible we recommend keeping the number and size of dynamic packages to a minimum. Some strategies which can help with this are:

1. Try building any environment or tenant-specific differences using configuration instead of requiring an entirely different package.
2. Try to keep everything that is common about your application together, pushing environment or tenant-specific differences into small satellite packages.

### Dynamic packages and retention policies

If you use a binding expression for the Package ID, it becomes more difficult to look at a release and understand exactly which packages will be deployed. This prevents package retention policies from working properly for the built-in package feed and on deployment targets. Learn about [retention policies](/docs/administration/retention-policies).

### Dynamic packages and issue trackers

When using a variable expression for the Package ID, you may lose the ability to use [issue tracking](/docs/releases/issue-tracking) in your releases and deployments. If Octopus can't evaluate the variable expression at the time of release creation, Octopus will be unable to link your packages with the associated [build information](/docs/packaging-applications/build-servers/build-information) records. The result is that no commits or work items (issues) will be included in the release. This also prevents Octopus from updating any issue tracker with deployment information where supported, e.g., [JIRA](/docs/releases/issue-tracking/jira).

## Troubleshooting

1. Older versions of `octo.exe` and the build server extension would fail to create releases if you are using a variable binding for your Package Feed. You would see an error message like this: `The version number for step 'Deploy' cannot be automatically resolved because the feed or package ID is dynamic.`
   - The best way to work around this is to upgrade the Octopus CLI or your build server extension. Otherwise, you can work around it by defining an unscoped/global variable with the same name referring to a valid package feed.
   ![Workaround for older octo.exe](/docs/img/deployments/packages/images/dynamic-feed-variable-workaround.png)
2. You haven't provided a version for each required package in your deployment process. You would see an error message like this: `Package versions could not be resolved for one or more of the package steps in this release. See the errors above for details. Either ensure the latest version of the package can be automatically resolved, or set the version to use specifically by using the --package argument.`
   - Make sure you specify a package version for the dynamic package. Octopus cannot select a package for you automatically, since it doesn't know either the package feed to inspect or the package ID it should use to find the latest version.

# Deploying to Transient Targets

Source: https://octopus.com/docs/deployments/patterns/elastic-and-transient-environments/deploying-to-transient-targets.md

Transient deployment targets are targets that are only intermittently available for a deployment. They frequently join and leave the network, making their availability for deployments unpredictable. They might be:

- Auto-scale instances that are provisioned and terminated.
- Laptops that are taken home at night.
- Client servers that go down for maintenance.

A typical Octopus deployment requires that all deployment targets are available when the deployment starts and remain available while the deployment is in progress.
Elastic Environments provides mechanisms for deploying to targets that may become unavailable while a deployment is in progress. You can also run a [health check](/docs/projects/built-in-step-templates/health-check) during a deployment, and based on those results opt to add or remove machines from the deployment. ## Deploying to Targets that become unavailable during a deployment {#targets-become-unavailable} This example uses the OctoFX project that does a deployment to two [target tags](/docs/infrastructure/deployment-targets/target-tags): **RateServer** and **TradingWebServer**. We have decided to auto-scale the machines in the **TradingWebServer** tag and want to continue deploying the website to the available machines, ignoring any machines that are no longer available, perhaps due to being scaled down. 1. Navigate to the OctoFX project overview page. 2. Select the **Settings** option and expand the **Deployment Target** section. 3. Under *Unavailable Deployment targets* click **Skip** and select the target tags that can be skipped, in our example (**TradingWebServer**). If no tag are selected, then any deployment target may be skipped. 4. Create and deploy a release to an environment where deployment targets with the **TradingWebServer** target tag are unavailable. They will be automatically removed from the deployment. :::div{.success} To ensure that a machine which has been skipped is kept up to date, consider [keeping deployment targets up to date](/docs/deployments/patterns/elastic-and-transient-environments/keeping-deployment-targets-up-to-date). ::: ## Including and excluding targets during a deployment {#include-or-exclude-targets} In this example, OctoFX will deploy to **RateServer** and then run a Health Check step before it deploys to **TradingWebServer**, ensuring that only currently available targets are involved in the deployment. 1. Navigate to the OctoFX project process page. 2. Select **Add Step** and then select **Health check**. 
   For more information about adding a step to the deployment process, see the [add step](/docs/projects/steps) section.
3. Configure the Health Check step to exclude deployment targets if they are unavailable and include new deployment targets if they are found: ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/healthcheck.png)
4. Save the step.
5. Back at the deployment process, re-order the steps so that the **Health Check** step occurs before the **Trading Website** step. This will ensure that deployment targets with the **TradingWebServer** target tag are re-evaluated before the trading website is deployed: ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/evaluate.png)
6. Deploy OctoFX to an environment that has some deployment targets with the **TradingWebServer** target tag that are disabled. While the deployment is in progress (but before the Health Check step), enable the disabled targets and disable the enabled targets. When the Health Check step runs:
   - Any enabled targets that were disabled at the start of the deployment will be included in the deployment.
   - Any disabled targets that were enabled at the start of the deployment will be excluded from the deployment.

In this case, the machine **SWeb01** has been found and included in the rest of the deployment:

:::figure
![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/newtarget.png)
:::

Now that deployment targets can be automatically removed from a deployment, it may be useful to [keep them up to date when they become available](/docs/deployments/patterns/elastic-and-transient-environments/keeping-deployment-targets-up-to-date).

## Learn more

- [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1).
# Rolling deployments Source: https://octopus.com/docs/deployments/patterns/rolling-deployments-with-octopus.md [Rolling deployments](https://octopus.com/devops/software-deployments/rolling-deployment/) are a pattern whereby, instead of deploying a package to all servers at once, we slowly roll out the release by deploying it to each server one-by-one. In load balanced scenarios, this allows us to reduce overall downtime. Normally, when executing a deployment process with multiple steps, Octopus runs all steps **sequentially**; it waits for the first step to finish before starting the second, and so on. :::figure ![](/docs/img/deployments/patterns/images/normal-deployment.png) ::: NuGet package steps and [PowerShell steps](/docs/deployments/custom-scripts), however, identify machines via [target tags](/docs/infrastructure/deployment-targets/target-tags), which may be associated with multiple deployment targets. When a single step targets multiple machines, the step is run on those machines **in parallel**. So to recap: - Deployment steps are run in sequence - The actions performed by each step are performed in parallel on all deployment targets However, sometimes this isn't desired. If you are deploying to a farm of 10 web servers, it might be nice to deploy to one machine at a time, or to batches of machines at a time. This is called a **rolling deployment**. ## Configuring a rolling deployment {#configure-rolling-deployment} Rolling deployments can be configured on a PowerShell or NuGet package step by clicking **Configure a rolling deployment**. :::figure ![](/docs/img/deployments/patterns/images/rolling-deployments-select.png) ::: When configuring a rolling deployment, you specify a **window size**. :::figure ![](/docs/img/deployments/patterns/images/rolling-deployments-window-size.png) ::: The window size controls how many deployment targets can be deployed to at once. - A window size of 1 will deploy to a single deployment target at a time. 
Octopus will wait for the step to finish running on deployment target A before starting on deployment target B
- A window size of 3 will deploy to three deployment targets at a time. Octopus will wait for the step to finish running on deployment targets A, B *or* C before starting on deployment target D

:::div{.hint}
**Window size with Octopus.Action.MaxParallelism**

If you include the variable `Octopus.Action.MaxParallelism` in your Project, you will find the **Window size** value is no longer respected. This is expected behavior, as Octopus also uses this variable to limit the number of deployment targets on which the rolling deployment step will run concurrently. To set a **Window size** for the rolling deployment, add a variable value to `Octopus.Action.MaxParallelism` and scope it to the rolling steps. A warning will also be printed in the Task Log.
:::

## Child steps {#child-steps}

Rolling deployments allow you to wait for a step to finish on one deployment target before starting the step on the next deployment target. But what if you need to perform a series of steps on one target, before starting that series of steps on the next target? To support this, Octopus allows you to create **Child Steps**. First, open the menu for an existing step, and click **Add Child Step**.
:::figure
![](/docs/img/deployments/patterns/images/rolling-deployments-child-step.png)
:::

Octopus has numerous steps that support rolling deployments, depending on your installed version, including:

- Deploy to IIS step
- Deploy a Windows Service step
- Deploy a package step
- Run a Script
- Send an Email step
- Manual intervention required step
- Run an Azure PowerShell Script step
- Deploy an Azure Resource Manager template step
- Run a Service Fabric SDK PowerShell Script step

:::figure
![](/docs/img/deployments/patterns/images/rolling-deployments-package-type.png)
:::

After adding a child step, the deployment process will now show the step as containing multiple actions:

:::figure
![](/docs/img/deployments/patterns/images/rolling-deployments-multiple-actions.png)
:::

All child steps run on the same machine at the same time, and you can add more than one child step. You can also change the order that the steps are executed in using the **Reorder steps** link.

:::figure
![](/docs/img/deployments/patterns/images/rolling-deployments-reorder.png)
:::

You can edit the parent step to change the target tags that the steps run on or the window size. With this configuration, we run the entire website deployment step - taking the machine out of the load balancer, deploying the site, and returning it to the load balancer - on each machine in sequence as part of a rolling deployment step.
For example, if you are deploying a web service update to a web farm in a rolling deployment, you could sanity test the service in a step called `Sanity Test Web Service`. This step would run after the update step and set the service status in an output variable:

```powershell
# $serviceStatus would be set by your own sanity test
$serviceStatus = "OK"
$shouldAddBackToWebFarm = $serviceStatus -eq "OK"
Set-OctopusVariable -name "ShouldAddBackToWebFarm" -value "$shouldAddBackToWebFarm"
```

In a follow-up step, you can add it back to the web farm if the service status is positive with a machine-level variable run condition:

```text
#{if Octopus.Action[Sanity Test Web Service].Output[#{Octopus.Machine.Name}].ShouldAddBackToWebFarm == "True"}True#{/if}
```

Octopus will evaluate the value of the [Output variable](/docs/projects/variables/output-variables) indicated by `#{Octopus.Machine.Name}` individually, as the value will be specific to each machine in the rolling deployment.

## Rolling deployments with guided failures

[Guided failures](/docs/releases/guided-failures) work perfectly with rolling deployments. If your deployment fails to one of the targets in your rolling deployment, you can decide how to proceed. Imagine a scenario where you have three web servers in a load-balanced pool: `Web01`, `Web02` and `Web03`:

1. `Web01` is removed from the load balancer, the new release is deployed successfully and `Web01` is returned to the load-balanced pool.
2. `Web02` is removed from the load balancer, but the deployment of the new release fails. You can choose what happens next while `Web01` and `Web03` are still in the load-balanced pool.
   a. **Fail** the entire deployment so you can try again later.
   b. **Retry** the deployment to `Web02` as if the failure never happened.
   c. **Ignore** the error as if it never happened.
   d. **Exclude the machine from the deployment**, continuing the deployment to the next machine in the rolling deployment.
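The window-size behavior described earlier on this page can be sketched as a simple batching loop. This is a conceptual illustration only, not how Octopus implements rolling deployments, and the target names are hypothetical:

```shell
# Conceptual sketch: a window size of 2 deploys to targets in batches of 2,
# waiting for each batch to finish before the next one starts.
targets=(Web01 Web02 Web03 Web04 Web05)
window=2
for ((i = 0; i < ${#targets[@]}; i += window)); do
  batch=("${targets[@]:i:window}")
  echo "Deploying to batch: ${batch[*]}"
  # ...in a real rolling deployment, every target in the batch must
  # finish the step (or child steps) before the loop continues...
done
```

With a window size of 1, the same loop degrades to one target at a time; Octopus performs the equivalent scheduling across your Tentacles for you.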
## Learn more - [View rolling deployment examples on our samples instance](https://oc.to/PatternRollingSamplesSpace). - [Rolling deployment knowledge base articles](https://oc.to/RollingDeployTaggedKBArticles). - [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1). # octopus Source: https://octopus.com/docs/octopus-rest-api/cli/octopus.md Work seamlessly with Octopus Deploy from the command line. ```text Usage: octopus [flags] octopus [command] Available Commands: account Manage accounts api Execute a raw API GET request build-information Manage build information channel Manage channels config Manage CLI configuration deployment-target Manage deployment targets environment Manage environments ephemeral-environment Manage ephemeral environments help Help about any command login Login to Octopus logout Logout of Octopus package Manage packages project Manage projects project-group Manage project groups release Manage releases runbook Manage runbooks space Manage spaces task Manage tasks tenant Manage tenants user Manage users worker Manage workers worker-pool Manage worker pools Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations -v, --version Prints version information Use "octopus [command] --help" for more information about a command. ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Octopus CLI Global Tool Source: https://octopus.com/docs/octopus-rest-api/octopus-cli/install-global-tool.md You can install the Octopus CLI as a .NET Core [Global Tool](https://docs.microsoft.com/en-us/dotnet/core/tools/global-tools). This requires that you have the [.NET Core SDK](https://dotnet.microsoft.com/download/) installed. 
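Before installing, you can confirm the `dotnet` CLI is available on your PATH. This is a minimal sketch; the output depends on what is installed on your machine:

```shell
# Print the installed .NET SDKs if the dotnet CLI is on the PATH.
if command -v dotnet >/dev/null 2>&1; then
  dotnet --list-sdks
else
  echo "dotnet CLI not found - install the .NET SDK first"
fi
```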
## Specific location

A local installation can be done into a specified location using the `--tool-path` parameter.

```bash
dotnet tool install Octopus.DotNet.Cli --tool-path /path/for/tool --version 
```

This will install the Octopus CLI into the specified location and generate a platform-specific executable called `dotnet-octo` there. To enable `dotnet` to find your custom tool location, you will need to add the tool location to the current environment path.

**PowerShell**

```powershell
$env:PATH = "your\tool\folder;" + $env:PATH
```

**Bash**

```
export PATH="$PATH:/your/tool/folder"
```

Once the tool folder is in the path, you can run the Octopus CLI commands with .NET: `dotnet octo pack`.

## User installation

To install the Octopus CLI for the current user, install the tool globally using the `--global` flag.

```bash
dotnet tool install Octopus.DotNet.Cli --global --version 4.39.1
```

You may also omit the `--version` parameter to install the latest version of the tools. Check the output to make sure the installation worked correctly. After the installation has completed, you can run the following to verify the version of the Octopus CLI that was installed:

```
dotnet octo --version
```

### Updating

To update to the latest version of the tool, use the `dotnet tool update` command:

```bash
dotnet tool update Octopus.DotNet.Cli --global
```

If you would like to update to a specific version or downgrade to an older version, you can do so by first uninstalling the tool and installing it again.

```bash
dotnet tool uninstall Octopus.DotNet.Cli --global
dotnet tool install Octopus.DotNet.Cli --global --version 
```

## Troubleshooting installation

If you run into any issues installing the Octopus CLI as a global tool then these steps might help.
### Unable to install due to 401 unauthorized error

If you receive an error that states `Response status code does not indicate success: 401 (Unauthorized)`, this might be due to a NuGet feed configured in the `nuget.config` file that requires authentication. A workaround is to try using the `--ignore-failed-sources` switch. For more information, see this [.NET SDK GitHub issue](https://github.com/dotnet/sdk/issues/9555).

# Authentication provider compatibility

Source: https://octopus.com/docs/security/authentication/auth-provider-compatibility.md

Octopus ships with a number of authentication providers. The support for these providers differs between Octopus Server, [Octopus Cloud](/docs/octopus-cloud/) and the [Octopus Linux Container](/docs/installation/octopus-server-linux-container). Some authentication providers only work with Octopus Server, whilst others only work with Octopus Cloud. This page describes the compatibility of these providers in Octopus.

:::div{.hint}
Most of the authentication providers listed here are available in modern versions of Octopus. However, some are shipped with Octopus from a specific version. Where this is the case, the version will be noted alongside the provider.
::: ## Login support {#login-support} The following table shows login support for each authentication provider in Octopus Server, Octopus Cloud, and the Octopus Linux Container: | | Octopus Server | Octopus Cloud | Octopus Linux Container | |---------------------------------------|:------------------:|:---------------:|:-----------------------:| | Username and Password | | [**\***](#table-note-1) | | | Active Directory Authentication | | | | | Microsoft Entra ID Authentication | | | | | Google Workspace Authentication | | | | | LDAP Authentication (**2021.2+**)| | | | | Okta Authentication | | | | | GitHub | | [**\***](#table-note-1) | | | Guest Login | | | | **Note:** Entries marked with **\*** are only supported via [Octopus ID](/docs/security/authentication/octopusid-authentication). ## External groups and roles support {#external-groups-and-roles} Octopus allows [external groups and roles](/docs/security/users-and-teams/external-groups-and-roles) to be added as members of Teams in Octopus. The following table shows which authentication providers support this in Octopus Server, Octopus Cloud, and the Octopus Linux Container: | | Octopus Server | Octopus Cloud | Octopus Linux Container | |----------------------------------------------|:----------------------------------------:|:----------------------------------------:|:----------------------------------------:| | Username and Password | | | | | Active Directory Authentication | | | | | Microsoft Entra ID Authentication [**\***](#table-note-2) | | | | | Google Workspace Authentication | | | | | LDAP Authentication (**2021.2+**) | | | | | Okta Authentication [**†**](#table-note-3) | | | | | GitHub | | | | | Guest Login | | | | **\*** For Microsoft Entra ID users and groups, these must also be mapped in the Entra ID App Registration. 
Please read the [Mapping Entra ID users into Octopus teams](/docs/security/authentication/azure-ad-authentication/#mapping-aad-users-into-octopus-teams-optional) section for more details. For Octopus Cloud, external groups and roles cannot be configured for Azure AD when using [Octopus ID](/docs/security/authentication/octopusid-authentication). **†** For Okta groups to flow through to Octopus, you'll need to change the _Groups claim_ fields. Please read the [Okta group integration](/docs/security/authentication/okta-authentication/#okta-groups) section for more details. :::div{.hint} [Octopus ID](/docs/security/authentication/octopusid-authentication/) does not currently support configuring [external groups and roles](/docs/security/users-and-teams/external-groups-and-roles). ::: # Get the Raw Output From a Task Source: https://octopus.com/docs/support/get-the-raw-output-from-a-task.md When you contact Octopus Deploy support with a deployment-related issue, we'll sometimes ask you to send the full task log so that we can understand what went wrong. To download the task log do the following: 1. Select the deployment/task that you're having an issue with. 2. Select the **Task Log** tab. :::figure ![](/docs/img/support/images/tasklog.png) ::: 3. Click the **Download** button on the right to download the raw task log. :::figure ![](/docs/img/support/images/tasklog2.png) ::: Send this file to us, or attach it to your support request. :::div{.hint} You might want to open the file in a text editor, and redact any sensitive information like hostnames or company information, before sending the log to us. ::: # Maintenance Mode Source: https://octopus.com/docs/administration/managing-infrastructure/maintenance-mode.md :::div{.hint} Maintenance Mode is only available for self-hosted customers. [Octopus Cloud](/docs/octopus-cloud) instances will be updated in their specified [maintenance window](/docs/octopus-cloud/maintenance-window). 
:::

From time to time, you will need to perform certain administrative activities on your Octopus Server, like [upgrading Octopus](/docs/administration/upgrading/) or [applying operating system patches](/docs/administration/managing-infrastructure/applying-operating-system-upgrades). You will typically want to schedule a maintenance window where you perform these activities, and Octopus Server helps with this by switching to **Maintenance Mode**.

## How does it work?

In summary, Maintenance Mode enables you to safely prepare your server for maintenance, allowing existing tasks to complete, and preventing changes you didn't expect. To enable or disable Maintenance Mode, go to **Configuration ➜ Maintenance**.

:::figure
![Maintenance Mode Configuration](/docs/img/administration/managing-infrastructure/images/maintenance-mode.png)
:::

Only users with the `Administer System` permission can enable/disable Maintenance Mode. Once Octopus is in Maintenance Mode:

- Users with the `Administer System` permission can still do anything they want, just like normal. All other users are prevented from making changes, which includes queuing new deployments or other tasks.
- The task queue will still be processed:
  - Tasks which were already running will run through to completion.
  - Tasks which were already queued (including [scheduled deployments](/docs/releases/#scheduling-a-deployment)) will be started and run through to completion.
- System tasks will still be queued and execute at their scheduled intervals. These kinds of tasks can be ignored since they are designed to be safe to cancel at any point in time.

## What about High Availability?

When you are using [Octopus High Availability](/docs/administration/high-availability) clusters, you will typically want to limit downtime to a minimum. You should enable Maintenance Mode when it is appropriate for the activity you need to perform.
- [Applying operating system patches](/docs/administration/managing-infrastructure/applying-operating-system-upgrades) can be an online operation. - [Upgrading Octopus Server](/docs/administration/upgrading) is an online operation for patches, but you should schedule a small maintenance window for major and minor upgrades. - [Moving parts of your Octopus Server around](/docs/administration/managing-infrastructure/moving-your-octopus) will usually require a small maintenance window. - Other activities where you want to temporarily prevent changes to your Octopus Server will benefit from going into Maintenance Mode. # Manually uninstall Octopus Server Source: https://octopus.com/docs/administration/managing-infrastructure/server-configuration-and-file-storage/manually-uninstall-octopus-server.md When you uninstall the Octopus Server MSI, it automatically removes the application files from the installation folder, but that's it. This page describes how to manually clean up Octopus Server in part, or completely remove it from your server. ## Why would I want to clean up? {#ManuallyuninstallOctopusServer-WhywouldIwanttocleanupinthefirstplace?} :::div{.problem} In some of these scenarios you should make sure you have a recent backup of the **Octopus home directory** and your **Master Key** before continuing. Learn about [backup and restore](/docs/administration/data/backup-and-restore/) and [backing up your Master Key](/docs/security/data-encryption). If you want to completely remove this instance of Octopus Server and don't care about the configuration or data, you won't need to worry about having a backup or rollback strategy. ::: Here are a few reasons why you may want to completely remove Octopus Server from your computer: 1. You are moving Octopus Server to another server and want to clean up after the move is completed. Learn about how to move the Octopus Server to another server or VM. 2. 
You want to completely clean up an old version of Octopus Server after installing a newer version on another server.
3. You installed a trial of Octopus Server and want to completely uninstall the trial instance from your computer now that you've finished your trial.
4. You practiced an upgrade or new installation of Octopus Server and have finished with that instance of Octopus Server.

:::div{.success}
**Just upgraded from Octopus 2.6 and want to clean up?**

If you have just completed an in-place upgrade from Octopus Server 2.6 to a modern version of Octopus Server, there will be several folders and files left over that aren't used by newer versions of Octopus. We didn't remove these files in case you needed to roll back. Learn about [cleaning up after upgrading from Octopus 2.6](/docs/administration/managing-infrastructure/server-configuration-and-file-storage).
:::

## What does the Octopus Server MSI do? {#ManuallyuninstallOctopusServer-WhatdoestheOctopusServerMSIactuallydo?}

The MSI will stop the Octopus Server Windows service and remove the application files, which are normally stored in your `%ProgramFiles%` folder. The MSI will leave all configuration required to run Octopus just like before you ran the uninstaller. The installer behaves this way because this makes it easier for you to upgrade the application files for Octopus Server knowing your configuration and data are preserved.

## Manually removing all traces of Octopus Server {#ManuallyuninstallOctopusServer-ManuallyremovingalltracesofOctopusServer}

:::div{.hint}
**What are all these files anyhow?**

Learn about [Octopus Server configuration and file storage](/docs/administration/managing-infrastructure/server-configuration-and-file-storage).
:::

These steps will remove all traces of Octopus Server from your computer:

1. Before uninstalling the MSI, use the Octopus Server Manager to delete the Octopus Server instance from the computer.
   * This will stop and uninstall the Octopus Server Windows service.
2.
Now uninstall the MSI.
   * This will remove the application files.
3. Find and delete the Octopus Home folder. By default, this is in **`%SYSTEMDRIVE%\Octopus`**.
4. Find and delete the Octopus registry entries from **`HKLM\SOFTWARE\Octopus`**.
5. Find and delete any Octopus folders from:
   * **`%ProgramData%\Octopus`** - could be used for log files when a Home Directory cannot be discovered
   * **`%LocalAppData%\Octopus`** - could be used for log files when a Home Directory cannot be discovered
6. Find and delete any Octopus certificates from the following certificate stores:
   * **`Local Computer\Octopus`**
   * **`Current User\Octopus`** - do this for any user accounts that have been used as the account for the Octopus Server Windows service

# Upgrading a modern version of Octopus

Source: https://octopus.com/docs/administration/upgrading/guide.md

A modern version of Octopus Deploy is any version running on SQL Server. When Octopus Deploy was originally introduced, it ran on RavenDB. Octopus Deploy 3.x migrated from RavenDB to Microsoft SQL Server. This section contains guides covering various use cases you might encounter when upgrading a modern version of Octopus Deploy.

## Upgrade scenarios

The default upgrade scenario is an in-place upgrade. Thousands of customers upgrade every month without errors. However, no upgrade process is ever 100% error-free 100% of the time. The typical errors we see are:

- Compatibility errors: Upgrading to a new version isn't supported because of a license limitation, host OS version deprecation, or SQL Server version deprecation.
- Hyper-specific use cases: The server runs a specific version of Windows without a particular .NET Framework patch installed.
- Breaking changes introduced in the product: we do our best to minimize these, but they can happen. For example, Octopus Deploy 2019.1.0 introduced spaces and changed how teams and roles were assigned. Any API scripts manipulating teams had to be updated.
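As an illustration of that spaces-related breaking change, many API routes gained a space-scoped prefix, so scripts written against the older, unscoped routes needed updating. The snippet below only prints example URLs and makes no requests; the server address and space ID are hypothetical placeholders:

```shell
# Illustrative only - substitute your own server URL and space ID.
OCTOPUS_URL="https://octopus.example.com"
SPACE_ID="Spaces-1"
echo "Pre-spaces route:   ${OCTOPUS_URL}/api/teams"
echo "Space-scoped route: ${OCTOPUS_URL}/api/${SPACE_ID}/teams"
```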
Please choose from one of five common upgrade scenarios: - [Upgrading minor and patch releases](/docs/administration/upgrading/guide/upgrading-minor-and-patch-releases) - [Upgrading major releases](/docs/administration/upgrading/guide/upgrading-major-releases) - [Upgrading from Octopus 4.x or 2018.x to latest version](/docs/administration/upgrading/guide/upgrading-from-octopus-4.x-2018.x-to-modern) - [Upgrading from Octopus 3.x to latest version](/docs/administration/upgrading/guide/upgrading-from-octopus-3.x-to-modern) - [Upgrading host OS or .NET version](/docs/administration/upgrading/guide/upgrade-host-os-or-net) ## Mitigating risk The best way to mitigate risk is to automate the upgrade and/or create a test instance. Automation ensures all steps, including backups, are followed for every upgrade. A test instance allows you to test out upgrades and new features without affecting your main instance. We also recommend performing a System Integrity Check on your live instance before attempting to upgrade. If the integrity check fails, please contact [support](https://octopus.com/support) with the [raw output of the task](/docs/support/get-the-raw-output-from-a-task), and we can get that fixed for you. - [Perform a System Integrity Check](/docs/administration/managing-infrastructure/diagnostics) - [Automating upgrades](/docs/administration/upgrading/guide/automate-upgrades) - [Create a test instance](/docs/administration/upgrading/guide/creating-test-instance) # Upgrading old versions of Octopus Source: https://octopus.com/docs/administration/upgrading/legacy.md Upgrading from an older version of Octopus takes some care and preparation. Please take time to read the right guides for your situation, and plan your upgrade carefully. If you run into any problems along the way [we are here to help!](https://octopus.com/support) ## Supported upgrade paths {#upgrade-path} The supported upgrade paths are as follows: - Currently running `1.x`? 
Upgrade `1.x` to `1.6` to `2.6.5` to `2018.10 LTS` to any newer version - Currently running `2.x`? Upgrade `2.x` to `2.6.5` to `2018.10 LTS` to any newer version - Currently running `2.6.5`? Upgrade `2.6.5` to `2018.10 LTS` to any newer version - Currently running a modern version of Octopus like `3.x`, `4.x`, `2018.x`, or newer? Upgrade to any newer version :::div{.warning} **Broken upgrade paths** We track any unresolved upgrade problems which require special attention using [this GitHub issue](https://github.com/OctopusDeploy/Issues/issues/4979). ::: ## Detailed upgrade guides {#upgrade-guides} - Upgrade from `1.x` to `1.6` by [downloading and running the installer](https://octopus.com/downloads/1.6.3.1723). - Upgrade from `1.6` to `2.6.5` using [this detailed guide](/docs/administration/upgrading/legacy/upgrading-from-octopus-1.6-2.6.5). - Upgrade from `2.x` to `2.6.5` using [this detailed guide](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.x-2.6.5). - Upgrade from `2.6.5` to `2018.10 LTS` using [this detailed guide](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts). - Upgrade any modern version of Octopus using [this detailed guide](/docs/administration/upgrading/guide). # Manual upgrade Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/manual-upgrade.md You can upgrade from **Octopus 2.6.5** to **Octopus 2018.10 LTS** by downloading the latest [MSI's for both Octopus and Tentacle](https://octopus.com/download), and installing them manually. If you're working with a large number of Tentacles, see the section on [upgrading larger installations](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts). ## Summary 1. Backup your **Octopus 2.6.5** database and Master Key. 2. Install **Octopus 2018.10 LTS** on your Octopus Server. 3. Migrate your data from **Octopus 2.6.5** to **Octopus 2018.10 LTS**. 4. 
Install **the latest version of Tentacle** on your deployment targets.
5. Verify the connectivity between the **Octopus 2018.10 LTS** Server and your Tentacles.
6. **[Optional]** Clean up your Octopus Home folder; follow the instructions on this [page](/docs/administration/managing-infrastructure/server-configuration-and-file-storage#ServerConfigurationAndFileStorage-CleanUp).

## Step by step

To perform an in-place upgrade, follow these steps:

### 1. Back up your Octopus 2.6.5 database and Master Key

See the [Backup and restore](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/backup-2.6) page for instructions on backing up your database.

### 2. Install Octopus 2018.10 LTS on your Octopus Server

:::div{.success}
**Upgrade to the latest version**

When upgrading to **Octopus 2018.10 LTS**, please use the latest version available. We have been constantly improving the **Octopus 2.6.5** to **Octopus 2018.10 LTS** data migration process while adding new features and fixing bugs.
:::

See the [Installing Octopus 2018.10 LTS](/docs/installation) page for instructions on installing a new **Octopus 2018.10 LTS** instance. After installing the MSI, you will be presented with an upgrade page.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278008.png)
:::

Click "Get started..." and set up your database connection. You may need to grant permission to the NT AUTHORITY\SYSTEM account at this stage.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278007.png)
:::

Click Next, and then Install to install the **Octopus 2018.10 LTS** server over the **Octopus 2.6.5** instance.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278006.png)
:::

### 3.
Restore the Octopus 2.6.5 database using the Migration Tool

After upgrading, the Octopus Manager will prompt to import your **Octopus 2.6.5** database. Click the "Import data..." button and follow the prompts to import your **Octopus 2.6.5** data.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278005.png)
:::

See the [Migrating data from Octopus 2.6.5 to 2018.10 LTS](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/migrating-data-from-octopus-2.6.5-2018.10lts) page for more detailed instructions on importing your **Octopus 2.6.5** database backup into **Octopus 2018.10 LTS**.

:::div{.hint}
**Migration taking a long time?**

By default, we migrate everything from your backup, including historical data. You can use the `maxage=` argument when executing the migrator to limit the number of days to keep. For example: `maxage=90` will keep 90 days of historical data, ignoring anything older. To see the command syntax, click the **Show script** link in the wizard.
:::

:::div{.hint}
**Using the built-in Octopus NuGet repository?**

If you use the built-in [Octopus NuGet repository](/docs/packaging-applications/package-repositories), you will need to move the files from your **Octopus 2.6.5** server to your **Octopus 2018.10 LTS** server. They are not part of the backup. In a standard **Octopus 2.6.5** install, the files can be found under `C:\Octopus\OctopusServer\Repository\Packages`. You will need to transfer them to the new server to `C:\Octopus\Packages`. Once the files have been copied, go to **Library ➜ Packages ➜ Package Indexing** and click the `RE-INDEX NOW` button. This process runs in the background, so if you have a lot of packages it could take a while (5-20 mins) to show in the UI or be usable for deployments.
:::

### 4.
Install the latest Tentacle MSI At this point, the machines should appear in your Environments page inside **Octopus 2018.10 LTS**, but a health check will fail: the communication protocol used by modern Octopus Servers isn't compatible with **Tentacle 2.6**. On each machine that ran **Tentacle 2.6**, connect to the machine, and install the latest Tentacle MSI. ### 5. Verify connectivity between the 2018.10 LTS server and your Tentacles Log in to your new **Octopus 2018.10 LTS** server and run health checks on all of your environments. If the upgrade completed successfully, they should succeed. :::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278009.png) ::: If one or more health checks do not succeed after a few attempts, see the Troubleshooting section to identify possible issues. # What is platform engineering? Source: https://octopus.com/docs/best-practices/platform-engineering/what-is-pe.md Platform engineering is: * A central repository of architectural decisions made by DevOps teams * An Internal Developer Platform (IDP) that allows those decisions to be implemented throughout DevOps teams at scale * Feedback processes that allow architectural decisions to be improved over time While platform engineering is not limited to CI/CD pipelines, CI/CD platforms provide a convenient foundation on which to implement an IDP because: * They have already been deployed into enterprises on supported infrastructure * DevOps teams already know how to use them * They have rich CLIs and APIs to support automation * They manage execution environments in which to run automated tasks * They already have access to existing DevOps systems Octopus can function as an IDP through a combination of IaC (with the [Terraform provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest/docs)), Git-based workflows (with [Config-as-code](/docs/projects/version-control)), and specially designed step templates
to deploy and track changes to deployment projects and runbooks. # Accessing container details Source: https://octopus.com/docs/deployments/docker/accessing-container-details.md When creating a container or network via one of the new Docker steps, you may wish to use details of the resulting resource in a subsequent step. All information about the networking configuration, volumes, environment variables, and hardware resource allocation can be obtained for the container via the `docker inspect` command, and similar information for the network via the `docker network inspect` command. To allow access to this information, Octopus invokes this command right after creating a container (or network), which results in a large detailed JSON array (since you can request multiple container details from a single invocation) that will look something like the examples below. This output is then returned back to the server and processed as an [Output Variable](/docs/projects/variables/output-variables) with the format `#{Octopus.Action[<step name>].Output.Docker.Inspect}`. :::div{.warning} **Inspection timing and relevance** Keep in mind when using the results of Octopus Deploy's automatic inspection that this is **invoked just after the resource is created**. This means that 1. If your container immediately exits then some information such as the IP address used may be out of date. 2. If your container state changes *after* this point in time, such as a new network or volume is attached, then the information may be out of date. ::: :::div{.success} **Advanced JSON parsing in variables** Variables that are a JSON object can now be [parsed natively](/docs/projects/variables/variable-substitutions) and sub properties within the document can now be used for general variable substitution. This makes accessing information about your container from subsequent steps trivial.
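Outside Octopus, the same nested lookup can be sketched in Python against the raw `docker inspect` JSON. The excerpt below is a trimmed, hypothetical sample of the inspection output shown later on this page:

```python
import json

# Trimmed, hypothetical excerpt of `docker inspect` output: a JSON array
# with one object per inspected container.
inspect_output = json.loads("""
[
  {
    "Name": "/registry",
    "NetworkSettings": {
      "Networks": {
        "bridge": {"IPAddress": "172.17.0.2"}
      }
    }
  }
]
""")

# Equivalent of the variable path ...Docker.Inspect.NetworkSettings.Networks[bridge].IPAddress
container = inspect_output[0]
ip = container["NetworkSettings"]["Networks"]["bridge"]["IPAddress"]
print(ip)  # 172.17.0.2
```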
::: ## Common examples ### Creating a Network then adding a container A typical project may involve one step that first creates a network, and then creates a container that is attached to that network. Assuming that your subsequent Docker Run step needs to be connected to that network, you would select the network type *Custom Network* and then, for the network name, use the name of the network generated from the previous step, which is now stored in its inspection output variable: ```powershell #{Octopus.Action[Create Network Step Name].Output.Docker.Inspect.Name} ``` :::figure ![](/docs/img/deployments/docker/images/5865817.png) ::: ### Obtain container IP address inside custom network Once a container has started and is attached to a network, an IP address will be assigned to it. Since the container may get attached to more than one network, the network details are stored in the JSON as an object indexed by the network name. When trying to get the IP address assigned to a container which has been added to a custom network, there are two steps to the variable substitution. First, we need the network name; then we need to inspect the container and find the network information that corresponds to that network name. ```powershell #{Octopus.Action[Create Container Step Name].Output.Docker.Inspect.NetworkSettings.Networks[#{Octopus.Action[Create Network Step Name].Output.Docker.Inspect.Name}].IPAddress} ``` While this variable might look complex, you should be able to see the two aforementioned steps involved. The network name inside the `Networks` index is first resolved, and then the network information is extracted from the container inspection variable. :::div{.hint} **Output variable in project variables** You may find that you want to access variables from the inspection output several times and find it a bit cumbersome to keep typing out their full value. To simplify things, you might find it helpful to create a project variable with the value of the output variable.
For instance, in the examples outlined above, the network name was needed several times. In this case, it might be useful to create and use a project variable with the value (taking into consideration things like scoping): ```powershell #{Octopus.Action[Create Network Step Name].Output.Docker.Inspect.Name} ``` ::: ## Sample inspection output The following JSON objects are real outputs from docker inspect commands to provide some indication of what to expect in the output variable. **docker inspect \<container name>** ```js [ { "Id": "dd6c3f3f533dcd0df76e4b729aca3565bc2b0f1c1bfb09143a5445a138af9179", "Created": "2016-09-01T05:52:00.527623518Z", "Path": "/entrypoint.sh", "Args": [ "/etc/docker/registry/config.yml" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 22205, "ExitCode": 0, "Error": "", "StartedAt": "2016-09-02T01:35:33.763071223Z", "FinishedAt": "2016-09-02T01:34:50.852301712Z" }, "Image": "sha256:c6c14b3960bdf9f5c50b672ff566f3dabd3e450b54ae5496f326898513362c98", "ResolvConfPath": "/var/lib/docker/containers/dd6c3f3f533dcd0df76e4b729aca3565bc2b0f1c1bfb09143a5445a138af9179/resolv.conf", "HostnamePath": "/var/lib/docker/containers/dd6c3f3f533dcd0df76e4b729aca3565bc2b0f1c1bfb09143a5445a138af9179/hostname", "HostsPath": "/var/lib/docker/containers/dd6c3f3f533dcd0df76e4b729aca3565bc2b0f1c1bfb09143a5445a138af9179/hosts", "LogPath": "/var/lib/docker/containers/dd6c3f3f533dcd0df76e4b729aca3565bc2b0f1c1bfb09143a5445a138af9179/dd6c3f3f533dcd0df76e4b729aca3565bc2b0f1c1bfb09143a5445a138af9179-json.log", "Name": "/registry", "RestartCount": 0, "Driver": "aufs", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": null, "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "default", "PortBindings": { "5000/tcp": [ { "HostIp": "", "HostPort": "5000" } ] }, "RestartPolicy": { "Name": "always", "MaximumRetryCount": 0 },
"AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": null, "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": -1, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0 }, "GraphDriver": { "Name": "aufs", "Data": null }, "Mounts": [ { "Name": "7e288d82bca0014180c342545e28426e93cb2268c4391f7c167fe365ace71b0d", "Source": "/var/lib/docker/volumes/7e288d82bca0014180c342545e28426e93cb2268c4391f7c167fe365ace71b0d/_data", "Destination": "/var/lib/registry", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "666c3f3f533d", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "5000/tcp": {} }, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": [ "/etc/docker/registry/config.yml" ], "Image": "registry:2", "Volumes": { "/var/lib/registry": {} }, "WorkingDir": "", "Entrypoint": [ "/entrypoint.sh" ], "OnBuild": null, "Labels": {} }, "NetworkSettings": { "Bridge": "", "SandboxID": "c4d515974a7447100a988b428681f319d6bed307b9d41878c65f01c350ed65f9", "HairpinMode": false, 
"LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "5000/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "5000" } ] }, "SandboxKey": "/var/run/docker/netns/c4d515974a74", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "511fc829515ff45908b03fec69bcdc0ff929a717534ec0807de5aa56f291037c", "Gateway": "172.17.0.1", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "172.17.0.2", "IPPrefixLen": 16, "IPv6Gateway": "", "MacAddress": "02:42:ac:09:00:02", "Networks": { "bridge": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "94986a009f24f0eca0281a61a42f109a31591641efe03c04dacf8584bf379ca8", "EndpointID": "505fc8295d5ff45908b03fec69bcdc0ff929a717534ec0807de5aa56f291037c", "Gateway": "172.17.0.1", "IPAddress": "172.17.0.2", "IPPrefixLen": 16, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:ac:11:00:02" } } } } ] ``` **docker network inspect \<network name>** ```js [ { "Name": "LXRR3", "Id": "dd00393535a8198857aa43851d022a2cccca2810c5f406f515678eca64d19cdf", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "172.21.0.0/16", "Gateway": "172.21.0.1/16" } ] }, "Internal": false, "Containers": {}, "Options": {}, "Labels": { "Octopus.Action.Id": "482e6219-d7b0-4e38-afed-bba97e6a8c62", "Octopus.Deployment.Id": "Deployments-2842", "Octopus.Environment.Id": "Environments-1", "Octopus.Project.Id": "Projects-61", "Octopus.Release.Number": "0.0.47" } } ] ``` ## Learn more - [Docker blog posts](https://octopus.com/blog/tag/docker/1) # Create and push an ASP.NET Core project Source: https://octopus.com/docs/deployments/nginx/create-and-push-asp.net-core-project.md The sample project for this guide is the [Angular project template with ASP.NET Core](https://docs.microsoft.com/en-us/aspnet/core/client-side/spa/angular?view=aspnetcore-2.1) application.
The template consists of an ASP.NET Core project to act as an API backend and an Angular CLI project to act as a UI. The base project has been modified slightly to host the Angular CLI project outside of the ASP.NET Core project, which enables us to configure NGINX as a reverse proxy to the ASP.NET Core project while also serving the Angular CLI project as static content from the file system. ## Upload the package to the built-in repository First, we need to make the package available for Octopus to deploy. :::div{.success} We've crafted and packaged v1.0.0 of this sample application for you to try out (see the link below). Alternatively you can create your own application and [package the application](/docs/packaging-applications) yourself to try it out. Click [here](#publishing-and-packing-the-website) for steps to publish and package the ASP.NET Core project. ::: 1. Download [NginxSampleWebApp.1.0.0.zip](/docs/attachments/nginxsamplewebapp.1.0.0.zip). 2. [Upload it to the Octopus Built-In repository](/docs/packaging-applications/package-repositories/built-in-repository/#pushing-packages-to-the-built-in-repository) (you can do this by going to **Deploy ➜ Manage ➜ Packages** and clicking the **Upload package** button). ## Publishing and packing the website {#publishing-and-packing-the-website} ```powershell # Publish the application to a folder dotnet publish source/NginxSampleWebApp --output published-app --configuration Release # Package the folder into a ZIP octopus package zip create --id NginxSampleWebApp --version 1.0.0 --base-path published-app ``` :::div{.hint} If you are using the built-in repository, you can create a [zip file](/docs/packaging-applications/create-packages/octopus-cli#create-zip-packages) instead. The generated nupkg or zip file should then be [pushed to a repository](/docs/packaging-applications/package-repositories).
::: ## Learn more - Generate an Octopus guide for [NGINX and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=NGINX). # EKS reference architecture Source: https://octopus.com/docs/getting-started/reference-architectures/eks-reference-architecture.md ## EKS reference architecture The [Octopus - EKS Reference Architecture](https://library.octopus.com/step-templates/87b2154a-5c8d-4c31-9680-575bb6df9789/actiontemplate-octopus-eks-reference-architecture) step populates an existing Octopus space with deployment projects demonstrating how DevOps teams can deploy applications to the AWS EKS platform. ### Supporting videos [Deploying to Kubernetes at scale with Octopus](https://www.youtube.com/watch?v=5q7s3vaGUN8) ### Configuring the step Hosted Octopus users should use the `Hosted Ubuntu` worker pool and run the step with the `octopuslabs/terraform-workertools` container image accessed via the `Container Images` feed. On-premises Octopus users need to ensure the step is run on a worker with a recent version of Terraform installed, or can use the `octopuslabs/terraform-workertools` container image on a worker with Docker installed. The step exposes a number of options, typically requesting credentials for the various platforms that are configured to support EKS deployments: * `AWS Access Key` and `AWS Secret Key` require the [access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) of the user that will create the EKS cluster. * `Docker Hub Username` and `Docker Hub Password` require the credentials of a [Docker Hub user](https://docs.docker.com/docker-id/) that is used to access sample Docker images from public DockerHub repositories. These credentials are also used by a sample GitHub Actions workflow that publishes Docker images.
* `GitHub Access Token` requires the [GitHub access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) of a user that is used to create a new GitHub repository holding a sample application. * `Octopus API Key` requires an [API key](https://octopus.com/docs/octopus-rest-api/how-to-create-an-api-key) to the Octopus instance where the reference architecture projects and supporting resources are created. * `Octopus Space ID` requires the space ID where the reference architecture projects and supporting resources are created. Leave the default value to populate the same space as the runbook. * `Octopus Server URL` requires the URL of the Octopus instance where the reference architecture projects and supporting resources are created. Leave the default value to populate the same instance as the runbook. * `Optional Terraform Apply Args` allows custom arguments to be passed to the `terraform apply` command. The Terraform module applied by this step exposes a number of optional variables that can be defined as apply arguments. These arguments can be defined by setting this field to a value like `-var=project_template_project_name=renamed -var=infrastructure_project_name=renamed2 -var=frontend_project_name=renamed3 -var=products_project_name=renamed4 -var=audits_project_name=renamed5`: * `infrastructure_project_name` defines the name of the `_ AWS EKS Infrastructure` project * `project_template_project_name` defines the name of the `Docker Project Templates` project * `frontend_project_name` defines the name of the `EKS Octopub Frontend` project * `products_project_name` defines the name of the `EKS Octopub Products` project * `audits_project_name` defines the name of the `EKS Octopub Audits` project * `Optional Terraform Init Args` allows custom arguments to be passed to the `terraform init` command. Leave this field blank unless you have a specific use case.
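As a sanity check on the expected format of the `Optional Terraform Apply Args` field, the following Python sketch (the helper function is illustrative, not part of the step) assembles the space-separated `-var` arguments from a dictionary of the variable overrides listed above:

```python
# Build the value for the "Optional Terraform Apply Args" field from a
# dictionary of Terraform variable overrides. The variable names are the
# ones exposed by the module; the helper itself is illustrative only.
def apply_args(overrides: dict) -> str:
    return " ".join(f"-var={name}={value}" for name, value in overrides.items())

args = apply_args({
    "project_template_project_name": "renamed",
    "infrastructure_project_name": "renamed2",
})
print(args)
# -var=project_template_project_name=renamed -var=infrastructure_project_name=renamed2
```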
### Reference projects The step creates a number of reference projects demonstrating how to deploy applications to an EKS cluster. The `_ AWS EKS Infrastructure` project contains a runbook called `Create EKS Cluster`. This runbook creates a [Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) EKS cluster with the supplied name in the supplied region and then installs the NGINX ingress controller on it. The script then creates a new [Kubernetes target](/docs/kubernetes/targets/kubernetes-api) using [dynamic infrastructure](/docs/infrastructure/deployment-targets/dynamic-infrastructure). This cluster can be destroyed with the `Delete EKS Cluster` runbook. The `EKS Octopub Audits`, `EKS Octopub Frontend`, and `EKS Octopub Products` projects deploy the [Octopub](https://github.com/OctopusSolutionsEngineering/Octopub) sample application to the EKS cluster, perform a smoke test, and scan the [SBOM](https://www.cisa.gov/sbom) associated with each image using [Trivy](https://aquasecurity.github.io/trivy/). Each of these projects has a number of supporting runbooks to inspect Kubernetes resources. In addition, there are two runbooks called `Scale Pods to One` and `Scale Pods to Zero` that scale the number of pods associated with the deployment. These runbooks are expected to be triggered in the morning and afternoon to scale non-production environments up and down. Because the pods are run on Fargate nodes, scaling a deployment to zero removes the compute costs associated with them. The `_ Deploy EKS Octopub Stack` project uses the [Deploy a release](/docs/projects/coordinating-multiple-projects/deploy-release-step) step to orchestrate the deployment of the individual microservices that make up the Octopub sample application.
Orchestration projects provide a convenient way of promoting multiple related releases between environments in a predefined order, which may be required when applications are tightly bound or a well-defined set of release versions must be installed as a group. The `Docker Project Templates` project contains a runbook called `Create Template Github Node.js Project` that: 1. Creates a new GitHub repository 2. Adds [GitHub Actions secrets](https://docs.github.com/en/rest/actions/secrets) to allow [workflows](https://docs.github.com/en/actions/using-workflows/about-workflows) to interact with the Octopus server and the DockerHub repository 3. Populates the repo with a sample Node.js web application and GitHub Actions workflow to build the application, push it to DockerHub, and create a release in Octopus This runbook is an example of platform engineering where DevOps teams can bootstrap sample applications with best practices such as versioning, security scanning, and CI/CD pipelines provided as part of a common base template. ### Feature branches This reference architecture provides the ability to deploy feature branch builds of each of the microservices. The implementation satisfies these requirements: * Feature branches are deployed to their own namespace * Feature branch builds cannot be promoted to production * The feature branch environment is initially populated with the set of applications in another environment * Feature branch artifacts are identified by the [prerelease component of their version](https://semver.org/) e.g. `myfeature` in the version `0.2.8-myfeature.4` Feature branch deployments are performed in the environment called `Feature Branch`. This environment is defined as an optional phase after `Development` for regular mainline deployments.
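Channel version rules like these hinge on the prerelease component of a semantic version. As a rough sketch (not how Octopus itself evaluates version rules), detecting that component might look like this:

```python
import re

# Split a semantic version into core and prerelease parts. A build with a
# prerelease component (e.g. "myfeature" in 0.2.8-myfeature.4) is a feature
# branch artifact; a bare version like 0.2.8 is a mainline build.
SEMVER = re.compile(r"^(?P<core>\d+\.\d+\.\d+)(?:-(?P<prerelease>[0-9A-Za-z.-]+))?")

def prerelease(version: str):
    match = SEMVER.match(version)
    return match.group("prerelease") if match else None

print(prerelease("0.2.8-myfeature.4"))  # myfeature.4
# The branch name is the first dot-separated prerelease identifier.
print(prerelease("0.2.8-myfeature.4").split(".")[0])  # myfeature
print(prerelease("0.2.8"))  # None
```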
Typically, mainline deployments will skip the `Feature Branch` environment, but it is possible to promote deployments from `Development` to `Feature Branch` in order to recreate the `Development` environment for the purposes of testing a feature branch build. Each application deployment project has two channels: `Mainline` and `Feature Branch`. The `Mainline` channel requires containers to have no prerelease component in their tags. The `Feature Branch` channel has no restrictions, allowing both mainline and feature branch builds to be deployed. The `Feature Branch` channel is configured to use the `Feature Branch` lifecycle, which only contains the `Feature Branch` environment. This ensures that feature branch builds cannot be promoted to production. The typical workflow is this: 1. Using the `_ Deploy EKS Octopub Stack` orchestration project, the current state of the `Development` environment is promoted to the `Feature Branch` environment. The namespace hosting the feature branch is prompted for just before the release is deployed. This effectively recreates the `Development` environment in a new namespace. 2. The feature branch build of the individual microservice being tested is then manually deployed using the `Feature Branch` channel. 3. The end result is a copy of the mainline applications deployed to a feature branch namespace with a single feature branch build of the microservice being tested. This allows the feature branch microservice to be tested in isolation with a complete microservice stack. # Installing the Tentacle via DSC in an ARM template with Octopus Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc.md The following example shows how to install a Tentacle during virtual machine (VM) provisioning with [Desired State Configuration](https://docs.microsoft.com/powershell/scripting/dsc/overview/overview) (DSC). 1.
Download the latest release of OctopusDSC from the [OctopusDSC repo](https://github.com/OctopusDeploy/OctopusDSC/releases) and extract it into a new folder. 2. Create a configuration file (eg `OctopusTentacle.ps1`) next to the `OctopusDSC` folder: ```powershell configuration OctopusTentacle { param ($ApiKey, $OctopusServerUrl, $Environments, $Roles, $ServerPort) Import-DscResource -Module OctopusDSC Node "localhost" { cTentacleAgent OctopusTentacle { Ensure = "Present" State = "Started" # Tentacle instance name. Leave it as 'Tentacle' unless you have more # than one instance Name = "Tentacle" # Registration - all parameters required ApiKey = $ApiKey OctopusServerUrl = $OctopusServerUrl Environments = $Environments Roles = $Roles # How Tentacle will communicate with the server CommunicationMode = "Poll" ServerPort = $ServerPort # Where deployed applications will be installed by Octopus DefaultApplicationDirectory = "C:\Applications" # Where Octopus should store its working files, logs, packages etc TentacleHomeDirectory = "C:\Octopus" } } } ``` 3. Create a new zip file containing both the `OctopusDSC` folder and the `OctopusTentacle.ps1` file. Below is an example of what your folder should look like before you zip it up. :::div{.hint} If you build the ZIP file incorrectly, the provisioning of the DSC extension and Tentacle application install is likely to fail. ::: :::figure ![Example folder structure for the DSC ZIP file](/docs/img/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/images/dsc-folder-structure-example.png) ::: 4. Upload the zip file to a location accessible during VM provisioning. You can either use a public location, or a private location protected with a [SAS token](https://docs.microsoft.com/azure/storage/storage-dotnet-shared-access-signature-part-1). 5. Create an ARM template (eg `arm-template.json`) that creates your virtual machine as normal.
eg: ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "vmAdminUsername": { "type": "string", "metadata": { "description": "Admin username for the Virtual Machine." } }, "vmAdminPassword": { "type": "securestring", "metadata": { "description": "Admin password for the Virtual Machine." } }, "vmDnsName": { "type": "string", "metadata": { "description": "Unique DNS Name for the Public IP used to access the Virtual Machine." } }, "vmSize": { "defaultValue": "Standard_D2_v2", "type": "string", "metadata": { "description": "Size of the Virtual Machine" } }, "tentacleOctopusServerUrl": { "type": "string", "metadata": { "description": "The URL of the octopus server with which to register" } }, "tentacleApiKey": { "type": "securestring", "metadata": { "description": "The Api Key to use to register the Tentacle with the server" } }, "tentacleCommunicationMode": { "defaultValue": "Listen", "allowedValues": [ "Listen", "Poll" ], "type": "string", "metadata": { "description": "The type of Tentacle - whether the Tentacle listens for requests from server, or actively polls the server for requests" } }, "tentaclePort": { "defaultValue": 10933, "minValue": 0, "maxValue": 65535, "type": "int", "metadata": { "description": "The port on which the Tentacle should listen, when CommunicationMode is set to Listen, or the port on which to poll the server, when CommunicationMode is set to Poll. By default, Tentacles listen on 10933 and poll the Octopus Server on 10943."
} }, "tentacleRoles": { "type": "string", "metadata": { "description": "A comma delimited list of roles to apply to the Tentacle" } }, "tentacleEnvironments": { "type": "string", "metadata": { "description": "A comma delimited list of environments in which the Tentacle should be placed" } }, "tentaclePublicHostNameConfiguration": { "defaultValue": "PublicIP", "allowedValues": [ "PublicIP", "FQDN", "ComputerName", "Custom" ], "type": "string", "metadata": { "description": "How the Octopus Server should contact the Tentacle. Only required when CommunicationMode is 'Listen'." } }, "tentacleCustomPublicHostName": { "type": "string", "defaultValue": "", "metadata": { "description": "The custom public host name that the Octopus Server should use to contact the Tentacle. Only required when communicationMode is 'Listen' and publicHostNameConfiguration is 'Custom'." } } }, "variables": { "namespace": "octopus", "location": "[resourceGroup().location]", "tags": { "vendor": "Octopus Deploy", "description": "Example deployment of Octopus Tentacle to a Windows Server." 
}, "diagnostics": { "storageAccount": { "name": "[concat('diagnostics', uniquestring(resourceGroup().id))]" } }, "networkSecurityGroupName": "[concat(variables('namespace'), '-nsg')]", "publicIPAddressName": "[concat(variables('namespace'), '-publicip')]", "vnet": { "name": "[concat(variables('namespace'), '-vnet')]", "addressPrefix": "10.0.0.0/16", "subnet": { "name": "[concat(variables('namespace'), '-subnet')]", "addressPrefix": "10.0.0.0/24" } }, "nic": { "name": "[concat(variables('namespace'), '-nic')]", "ipConfigName": "[concat(variables('namespace'), '-ipconfig')]" }, "vmName": "[concat(variables('namespace'),'-vm')]" }, "resources": [ { "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2021-04-01", "name": "[variables('diagnostics').storageAccount.name]", "location": "[variables('location')]", "tags": { "vendor": "[variables('tags').vendor]", "description": "[variables('tags').description]" }, "kind": "Storage", "sku": { "name": "Standard_LRS" }, "properties": {} }, { "type": "Microsoft.Network/networkSecurityGroups", "apiVersion": "2021-02-01", "name": "[variables('networkSecurityGroupName')]", "location": "[variables('location')]", "tags": { "vendor": "[variables('tags').vendor]", "description": "[variables('tags').description]" }, "properties": { "securityRules": [ { "name": "allow_rdp", "properties": { "description": "Allow inbound RDP", "protocol": "Tcp", "sourcePortRange": "*", "destinationPortRange": "3389", "sourceAddressPrefix": "*", "destinationAddressPrefix": "*", "access": "Allow", "priority": 123, "direction": "Inbound" } } ] } }, { "type": "Microsoft.Network/publicIPAddresses", "name": "[variables('publicIPAddressName')]", "apiVersion": "2021-02-01", "location": "[variables('location')]", "tags": { "vendor": "[variables('tags').vendor]", "description": "[variables('tags').description]" }, "properties": { "publicIPAllocationMethod": "Dynamic", "dnsSettings": { "domainNameLabel": "[parameters('vmDnsName')]" } } }, { "type": 
"Microsoft.Network/virtualNetworks", "name": "[variables('vnet').name]", "apiVersion": "2021-02-01", "location": "[variables('location')]", "tags": { "vendor": "[variables('tags').vendor]", "description": "[variables('tags').description]" }, "dependsOn": [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('networkSecurityGroupName'))]" ], "properties": { "addressSpace": { "addressPrefixes": [ "[variables('vnet').addressPrefix]" ] }, "subnets": [ { "name": "[variables('vnet').subnet.name]", "properties": { "addressPrefix": "[variables('vnet').subnet.addressPrefix]", "networkSecurityGroup": { "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]" } } } ] } }, { "type": "Microsoft.Network/networkInterfaces", "name": "[variables('nic').name]", "apiVersion": "2021-02-01", "location": "[variables('location')]", "tags": { "vendor": "[variables('tags').vendor]", "description": "[variables('tags').description]" }, "dependsOn": [ "[concat('Microsoft.Network/virtualNetworks/', variables('vnet').name)]", "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", "[concat('Microsoft.Network/networkSecurityGroups/', variables('networkSecurityGroupName'))]" ], "properties": { "ipConfigurations": [ { "name": "[variables('nic').ipConfigName]", "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" }, "subnet": { "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', variables('vnet').name), '/subnets/', variables('vnet').subnet.name)]" } } } ] } }, { "type": "Microsoft.Compute/virtualMachines", "name": "[variables('vmName')]", "apiVersion": "2021-04-01", "location": "[variables('location')]", "tags": { "vendor": "[variables('tags').vendor]", "description": "[variables('tags').description]" }, "dependsOn": [ "[concat('Microsoft.Storage/storageAccounts/', 
variables('diagnostics').storageAccount.name)]", "[concat('Microsoft.Network/networkInterfaces/', variables('nic').name)]" ], "properties": { "hardwareProfile": { "vmSize": "[parameters('vmSize')]" }, "osProfile": { "computerName": "[variables('vmName')]", "adminUsername": "[parameters('vmAdminUsername')]", "adminPassword": "[parameters('vmAdminPassword')]" }, "storageProfile": { "imageReference": { "publisher": "MicrosoftWindowsServer", "offer": "WindowsServer", "sku": "2016-Datacenter", "version": "latest" }, "osDisk": { "createOption": "FromImage", "caching": "ReadWrite", "managedDisk": { "storageAccountType": "Standard_LRS" } } }, "networkProfile": { "networkInterfaces": [ { "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nic').name)]" } ] }, "diagnosticsProfile": { "bootDiagnostics": { "enabled": true, "storageUri": "[concat('http://', variables('diagnostics').storageAccount.name, '.blob.core.windows.net')]" } } } }, { "type": "Microsoft.Compute/virtualMachines/extensions", "name": "[concat(variables('vmName'),'/dscExtension')]", "apiVersion": "2021-04-01", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]" ], "properties": { "publisher": "Microsoft.Powershell", "type": "DSC", "typeHandlerVersion": "2.77", "autoUpgradeMinorVersion": true, "forceUpdateTag": "2", "settings": { "configuration": { "url": "https://myfilehost.example.com/OctopusTentacle.zip", "script": "OctopusTentacle.ps1", "function": "OctopusTentacle" }, "configurationArguments": { "ApiKey": "[parameters('tentacleApiKey')]", "OctopusServerUrl": "[parameters('tentacleOctopusServerUrl')]", "Environments": "[parameters('tentacleEnvironments')]", "Roles": "[parameters('tentacleRoles')]", "ServerPort": "[parameters('tentaclePort')]" } }, "protectedSettings": null } } ] } ``` If you are using your own template, and not the sample above, you can just add the resource to your existing template: ```json { 
"type": "Microsoft.Compute/virtualMachines/extensions", "name": "[concat(variables('vmName'),'/dscExtension')]", "apiVersion": "2021-04-01", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]" ], "properties": { "publisher": "Microsoft.Powershell", "type": "DSC", "typeHandlerVersion": "2.77", "autoUpgradeMinorVersion": true, "forceUpdateTag": "2", "settings": { "configuration": { "url": "https://myfilehost.example.com/OctopusTentacle.zip", "script": "OctopusTentacle.ps1", "function": "OctopusTentacle" }, "configurationArguments": { "ApiKey": "[parameters('tentacleApiKey')]", "OctopusServerUrl": "[parameters('tentacleOctopusServerUrl')]", "Environments": "[parameters('tentacleEnvironments')]", "Roles": "[parameters('tentacleRoles')]", "ServerPort": "[parameters('tentaclePort')]" } }, "protectedSettings": null } } ``` Note that if you are using a private Azure storage location that requires a SAS Token, add this under `protectedSettings` as `configurationUrlSasToken`. 6. 
Create an ARM template properties file (e.g. `arm-template.properties.json`) with the parameters you need: ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "vmAdminUsername": { "value": "AdminUser" }, "vmAdminPassword": { "value": "your-password" }, "vmDnsName": { "value": "your-tentacle-vm" }, "vmSize": { "value": "Standard_D2_v2" }, "tentacleOctopusServerUrl": { "value": "https://octopus.example.com" }, "tentacleApiKey": { "value": "API-ABCDEFG1234567890" }, "tentacleCommunicationMode": { "value": "Poll" }, "tentaclePort": { "value": 10943 }, "tentacleRoles": { "value": "app-server" }, "tentacleEnvironments": { "value": "Development" }, "tentaclePublicHostNameConfiguration": { "value": "PublicIP" }, "tentacleCustomPublicHostName": { "value": "" } } } ``` To deploy the template, you can use the [Azure CLI](https://docs.microsoft.com/cli/azure/install-az-cli2): ```bash az login az account set --subscription 'xxxxxxxxxxx' az group create --name "OctopusDeployTentacle" --location "Australia East" az deployment group create \ --name "DeployTentacle" \ --resource-group "OctopusDeployTentacle" \ --template-file "arm-template.json" \ --parameters "@arm-template.properties.json" ``` ## Troubleshooting To troubleshoot the installation, you can use [`Start-Transcript`](https://docs.microsoft.com/powershell/module/microsoft.powershell.host/start-transcript?view=powershell-7.1) to write the PowerShell session to a file. If you have remote access to the machine you are troubleshooting the installation for, these two commands may offer diagnostic information about the state of DSC: * The [`Test-DscConfiguration`](https://docs.microsoft.com/powershell/module/psdesiredstateconfiguration/test-dscconfiguration?view=powershell-5.1) command will show whether the machine's actual state matches the desired state. 
* The [`(Get-DscConfiguration).ResourcesNotInDesiredState`](https://docs.microsoft.com/powershell/module/PSDesiredStateConfiguration/Get-DscConfiguration?view=powershell-5.1) command will show resources that are not in the desired state. # Installation requirements Source: https://octopus.com/docs/installation/requirements.md If you are hosting your Octopus Server yourself, these are the minimum requirements. ## Operating system Octopus Server can be hosted on either: - A Microsoft Windows operating system - In a [Linux](/docs/installation/octopus-server-linux-container) container. However, once your Octopus Server is up and running, you can deploy to Windows servers, Linux servers, Microsoft Azure, AWS, GCP, Cloud Regions, or even an offline package drop. ## Supported Octopus Deploy Server Versions Each self-hosted major.minor release of Octopus Deploy will receive *critical patches and support* for a period of **six months.** For example, 2025.4 was released in December 2025 and will be supported through May 2026. All new releases of Octopus Deploy will run in Octopus Cloud first for at least one quarter. As a result, Octopus Cloud is always at least one version ahead of the self-hosted version. Because of that, we always recommend using the latest available release for your self-hosted installation of Octopus. Please see the [Octopus.com/downloads](https://octopus.com/downloads) to download the latest version of Octopus Deploy. For more details, please refer to our [blog post announcement from 2020](https://octopus.com/blog/releases-and-lts), when we introduced this release cadence. ### Windows Server Octopus Server can be hosted on **Windows Server 2012 R2 or higher**. 
We support Octopus Server on the following versions of Windows Server: - Windows Server 2012 R2 - Windows Server 2016 - Windows Server 2019 - Windows Server 2022 - Windows Server 2025 Octopus Server will run on [Windows Server (Core)](https://docs.microsoft.com/en-us/windows-server/administration/server-core/what-is-server-core) without the Desktop experience. However, the easiest installation path is to use "Server with Desktop Experience" which has a GUI and supports running our installation wizard. If you want to use Windows Server Core, you will need to add some missing Windows Features and configure the Octopus Server yourself. Learn about [automating installation](/docs/installation/automating-installation). ### Windows desktop Octopus Server will run on client/desktop versions of Windows, such as Windows 7 and Windows 10. This can be an easy way to trial Octopus Server; however, we do not support Octopus Server for production workloads unless it is hosted on a server operating system. ### Octopus Server in Container From **Octopus 2020.6**, we publish `linux/amd64` Docker images for each Octopus Server release and they are available on [DockerHub](https://hub.docker.com/r/octopusdeploy/). Requirements for the [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container) will depend on how you intend to run it. There are some different options to run the Octopus Server Linux Container, which include: - [Octopus Server Container with Docker Compose](/docs/installation/octopus-server-linux-container/docker-compose-linux) - [Octopus Server Container with systemd](/docs/installation/octopus-server-linux-container/systemd-service-definition) - [Octopus Server Container in Kubernetes](/docs/installation/octopus-server-linux-container/octopus-in-kubernetes) You can also run the Octopus Server Linux Container using a platform such as [AWS ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html). 
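The quickest of these options to try is plain Docker. As a minimal sketch (the image name and environment variable names below are from the `octopusdeploy/octopusdeploy` DockerHub page; the connection string and credentials are placeholders you must replace):

```shell
# Placeholder SQL Server connection details - replace with your own instance.
DB_CONNECTION_STRING="Server=sql.example.com,1433;Database=Octopus;User Id=octopus;Password=ReplaceMe"

# Requires a running Docker daemon. The 'echo' prefix makes this a dry run;
# remove it to actually start the container.
echo docker run --detach --name octopus-server \
  --publish 8080:8080 --publish 10943:10943 \
  --env ACCEPT_EULA=Y \
  --env "DB_CONNECTION_STRING=${DB_CONNECTION_STRING}" \
  --env ADMIN_USERNAME=admin \
  --env ADMIN_PASSWORD=ReplaceMe \
  octopusdeploy/octopusdeploy
```

Port 8080 serves the Octopus Web Portal and 10943 is used by polling Tentacles; adjust the mappings to suit your host.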
### Hypervisors Windows Server can be installed on a bare-metal machine or on a virtual machine (VM) hosted by any popular type-1 hypervisor or virtual private server (cloud) technology. Type-2 hypervisors can work for demos and POCs, but because they are typically installed on desktop operating systems, they aren't recommended. :::div{.hint} Octopus Deploy works exactly the same on bare-metal machines and VMs. All it sees is that it is running on Windows Server. Of our customers who self-host Octopus Deploy, the vast majority use VMs. ::: The list of hypervisors and virtual private servers includes (but is not limited to): - Type-1 Hypervisors - [VMWare ESXi](https://www.vmware.com/products/esxi-and-esx.html) - [KVM](http://www.linux-kvm.org/page/Main_Page) - [XEN](https://xenproject.org/) - [Hyper-V on Windows Server](https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-on-windows-server) - [RHEV](https://www.redhat.com/en/technologies/virtualization/enterprise-virtualization) - Virtual Private Server (cloud) - AWS - Azure - GCP - Oracle - Type-2 Hypervisors - [VMWare Workstation](https://www.vmware.com/products/workstation-pro.html) - [VMWare Fusion](https://www.vmware.com/products/fusion.html) - [VirtualBox](https://www.virtualbox.org/) - [Hyper-V on Windows 10](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/) - [Parallels](https://www.parallels.com/) Most, if not all, of those tools include documentation or pre-built images for Windows Server 2012 R2, 2016, and 2019. Please refer to their documentation on how to install and configure a Windows Server VM. ## SQL Server Database Octopus Deploy requires a Microsoft SQL Server database to store configuration and history. 
Octopus works with a wide range of versions and editions of SQL Server, from a local SQL Server Express instance, all the way to an Enterprise Edition [SQL Server Failover Cluster](https://docs.microsoft.com/en-us/sql/sql-server/failover-clusters/high-availability-solutions-sql-server) or [SQL Server AlwaysOn Availability Group](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server), or even one of the hosted database-as-a-service offerings. Octopus supports versions of SQL Server that have at least 2 years of active support remaining from Microsoft. Versions approaching or past end-of-support are not supported. ### SQL Server hosting options SQL Server can be hosted on [Linux](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-overview) (including in a [container](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-docker-container-deployment)), [Windows](https://learn.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server), or in one of many managed offerings from Cloud Providers. The requirements are: - Must be running SQL Server 2016+ or Azure SQL - Must be located in the same data center as the servers/container hosts that host Octopus Deploy. Below are some configuration guidelines for various options: - [Self-managed on Linux or Windows](/docs/installation/sql-database/self-managed-sql-server) - [AWS RDS](/docs/installation/sql-database/aws-rds) - [Azure SQL](/docs/installation/sql-database/azure-sql) - [GCP SQL](/docs/installation/sql-database/gcp-cloud-sql) Supported editions: - Express (free) - Web - Datacenter - Standard - Enterprise - Microsoft Azure SQL Database - AWS RDS SQL Database :::div{.warning} **Warning:** Octopus does not support database mirroring or SQL Server replication. Having these features turned on may cause errors during configuration. [More information](/docs/administration/data#high-availability). 
::: ### Legacy Octopus version requirements The following table outlines the minimum SQL Server version required by older Octopus Server releases. | Octopus Server | Minimum SQL Server version | Azure SQL | | ----------------- | -------------------------- | --------- | | 2020.2.x ➜ latest | SQL Server 2016+ | Supported | | 3.0 ➜ 2019.13 | SQL Server 2008+ | Supported | ## .NET \{#dotnet-requirements} Octopus Server is a .NET application distributed as a [self-contained deployment](https://docs.microsoft.com/en-us/dotnet/core/deploying/#publish-self-contained) that has all the components required to run, including the .NET runtime. Older versions of Octopus Server require the .NET Framework: - **Octopus 3.4** to **Octopus 2018.4** requires [.NET Framework 4.5.1](https://www.microsoft.com/en-au/download/details.aspx?id=40773) or newer. - **Octopus 2018.5** and later requires [.NET Framework 4.5.2](https://www.microsoft.com/en-au/download/details.aspx?id=42642) or newer and [WMF/PowerShell 5.0](https://www.microsoft.com/en-us/download/details.aspx?id=50395) or newer. - **Octopus 2019.7** and later requires [.NET Framework 4.7.2](https://go.microsoft.com/fwlink/?LinkID=863265) or newer. - **Octopus 2020.1** and later is a fully self-contained distribution bundling the .NET runtime - no .NET Framework or additional runtime is required. ## Windows PowerShell - **Windows PowerShell 3.0** is automatically installed on Windows Server 2012. - **Windows PowerShell 4.0** is automatically installed on Windows Server 2012 R2. - **Windows PowerShell 5.1** is automatically installed on Windows Server 2016 and later. It is recommended, and is required to run Azure steps. It can be installed on Windows Server 2012 and 2012 R2 via the [Windows Management Framework 5.1](https://www.microsoft.com/en-us/download/details.aspx?id=54616). ## PowerShell Core - **PowerShell 7 and later** supports all steps. 
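Since the required PowerShell version differs by Windows Server version, it can help to check what is available on a machine first. A small sketch (the `detect_powershell` helper is hypothetical; `pwsh` is only present where PowerShell 7+ is installed):

```shell
# Report which PowerShell flavor is on the PATH: Windows PowerShell
# ('powershell'), cross-platform PowerShell 7+ ('pwsh'), or neither.
detect_powershell() {
  if command -v pwsh >/dev/null 2>&1; then
    echo "pwsh"
  elif command -v powershell >/dev/null 2>&1; then
    echo "powershell"
  else
    echo "none"
  fi
}

PS_EXE="$(detect_powershell)"
echo "Detected PowerShell: $PS_EXE"

# If one was found, its exact version can be printed with:
#   "$PS_EXE" -NoProfile -Command '$PSVersionTable.PSVersion.ToString()'
```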
## Supported browsers \{#supported-browsers} The Octopus Server includes the Octopus Web Portal user interface and we try to keep this as stable as possible: - **Octopus 3.0** to **Octopus 3.17** supports all our default browsers and Internet Explorer 9+. - **Octopus 4.0** and later supports all our default browsers, and Internet Explorer 11+ (available on Windows 7 and newer, and Windows Server 2008R2 SP1 and newer). - **Octopus 2020.1** and later only supports our default browsers - Internet Explorer 11 is no longer supported. Our default supported browsers are: - Edge - Chrome - Firefox - Safari ## Hardware requirements The size of your Octopus Deploy instance depends on the number of users and concurrent tasks. Tasks include (but are not limited to): - Deployments - Runbook runs - Retention policies - Health checks - Let's Encrypt - Process triggers - Process subscriptions - Script console runs - Sync built-in package repository - Sync community library step-templates - Tentacle upgrades - Calamari upgrades - Active Directory sync A good starting point is: - Small teams/companies or customers doing a POC with 5-10 concurrent tasks: - 1 Octopus Server: 2 Cores / 4 GB of RAM - SQL Server Express: 2 Cores / 4 GB of RAM or Azure SQL with 25-50 DTUs - Small-Medium companies or customers doing a pilot with 5-20 concurrent tasks: - 1-2 Octopus Servers: 2 Cores / 4 GB of RAM each - SQL Server Standard or Enterprise: 2 Cores / 8 GB of RAM or Azure SQL with 50-100 DTUs - Large companies doing 20+ concurrent tasks: - 2+ Octopus Servers: 4 Cores / 8 GB of RAM each - SQL Server Standard or Enterprise: 4 Cores / 16 GB of RAM or Azure SQL with 200+ DTUs :::div{.hint} These suggestions are a baseline. Monitor your Octopus Server and SQL Server performance on all resources including CPU, memory, disk, and network, and increase resources when needed. 
::: If you have a Server or Data Center license, you can leverage [Octopus High Availability](/docs/administration/high-availability) to scale out your Octopus Deploy instance. With that option, we recommend adding more nodes with 4 cores / 8 GB of RAM instead of increasing resources on a single node. Scaling vertically will only get you so far; at some point you run into underlying host limitations. ## Learn more - [Installation](/docs/installation) - [Compatibility](/docs/support/compatibility) # Use Cases Source: https://octopus.com/docs/octopus-ai/mcp/use-cases.md Below are some typical example use cases that might give you a good starting point to experiment with the Octopus Deploy MCP server. We are eager to hear how you use it and what features you would like to see included in future versions. ## Capability: Change Management Reason about what changes have been deployed where, to help you understand what your customers are using in production. ### Production Version Tracking Quickly find out what version of your software a customer, represented by a Tenant, is running in Production, and identify if there were any issues with their most recent deployment. #### 📝 Example Prompt ```text Customer X have submitted a support ticket complaining that there is a bug in the latest release of App. Can you tell me what release they are on, when it was deployed, and if there were any issues with the deployment? ``` ## Capability: Troubleshooting Get to the root cause of failures or unhealthy deployment targets, allowing you to recover from failures more quickly. ### Deployment Health Analysis Check for failed deployments or unhealthy Kubernetes workloads, analyze the failure reasons, and suggest solutions. 
#### 📝 Example Prompt: Deployment Health ```text Check health of the {ServiceName} service in the {SpaceName} space and report any issues found, check status of kubernetes services to produce a comprehensive report ``` #### 💡 Tips for customizing - Prompt for Kubernetes status to trigger a Kubernetes [live object status](/docs/kubernetes/live-object-status) check ## Capability: Administration, Audit, and Compliance Ensure your Octopus instance is in optimal shape, and that deployments continue to execute happily and healthily. ### Certificate Expiry Monitoring Identify unhealthy resources, expiring certificates, or unused projects in your Octopus instance. #### 📝 Example Prompt: Certificate Expiry ```text Find certificates soon set to expire in {SpaceName} space ``` ### Resource Access Validation Find configured resources in your Octopus instance and check if they have access to the desired targets. #### 📝 Example Prompt: Resource Access ```text Check accounts configured in {SpaceName} space in my Octopus instance, find the preproduction azure account and then check which resources are available in that subscription using the azure mcp ``` # octopus account Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account.md Manage accounts in Octopus Deploy ```text Usage: octopus account [command] Available Commands: aws Manage AWS accounts azure Manage Azure subscription accounts azure-oidc Manage Azure OpenID Connect accounts create Create an account delete Delete an account gcp Manage Google Cloud accounts generic-oidc Manage Generic OpenID Connect accounts help Help about any command list List accounts ssh Manage SSH Key Pair accounts token Manage Token accounts username Manage Username/Password accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for 
operations Use "octopus account [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # JSON formatted output Source: https://octopus.com/docs/octopus-rest-api/octopus-cli/formatted-output.md Most commands support printing the result in JSON format. :::div{.warning} [Dump Deployment](/docs/octopus-rest-api/octopus-cli/dump-deployments/), [Export](/docs/octopus-rest-api/octopus-cli/export/) and [Import](/docs/octopus-rest-api/octopus-cli/import) do not support JSON output. ::: To access JSON formatted output, use the `--outputformat=json` parameter. ```bash octo list-projects --server https://your-octopus-url --apiKey API-YOUR-KEY --outputformat=json ``` This command outputs the list of projects in parsable JSON format: ``` [ { "Id": "Projects-81", "Name": "Phoenix" }, { "Id": "Projects-61", "Name": "OctoFX" } ] ``` You can also work with the JSON output in PowerShell: ```powershell $json = (./octo list-releases --server https://your-octopus-url --apikey API-YOUR-KEY --project=OctoLifecycle --outputformat=json) | ConvertFrom-Json $json | select -expand Releases | where {[datetime]$_.Assembled -gt ((Get-Date).AddMonths(-1))} ``` This script writes out a list of releases for the last month: ``` Version Assembled PackageVersions ReleaseNotes ------- --------- --------------- ------------ 0.0.16 2018-01-04T14:27:25.221+10:00 Deploy1 0.0.1 0.0.15 2018-01-04T14:14:29.369+10:00 Deploy1 0.0.1 0.0.14 2018-01-04T14:06:55.799+10:00 Deploy1 0.0.1 0.0.13 2018-01-04T14:06:44.784+10:00 Deploy1 0.0.1 0.0.12 2018-01-04T13:44:29.273+10:00 Deploy1 0.0.1 
0.0.11 2017-12-18T09:36:44.995+10:00 Deploy 0.0.1 0.0.10 2017-12-18T09:26:22.671+10:00 Deploy 0.0.1 0.0.9 2017-12-18T09:25:02.342+10:00 Deploy 0.0.1 ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/octopus-cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Using the Octopus extension Source: https://octopus.com/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension.md :::div{.warning} Version 6 of the Octopus extension for Azure DevOps no longer requires the Octopus CLI to be installed. The use of the **Additional Arguments** field has been deprecated and is only left in place to ease migration from earlier versions. ::: We've created a [public extension](https://marketplace.visualstudio.com/items/octopusdeploy.octopus-deploy-build-release-tasks) you can install into your Azure DevOps instance. This extension makes the following tasks available to your Build and Release processes: - The Octopus Tools installer task. - Pushing your package to Octopus. - Pushing package build information to Octopus. - Creating a release in Octopus. - Deploying a release to an environment in Octopus. - Promoting a release from one environment to the next. You can also view the status of a project in an environment using the Dashboard Widget. We've open-sourced the [OctoTFS repository on GitHub](https://github.com/OctopusDeploy/OctoTFS) if you'd like to contribute. :::div{.hint} Microsoft has renamed Visual Studio Team Foundation Server (TFS) to Azure DevOps Server with the introduction of Azure DevOps Server 2019. The guidance provided in this document applies to supported versions of TFS. For more information about our support for TFS, see [Azure DevOps and TFS Extension Version Compatibility](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/extension-compatibility). 
::: ## Installing the extension You can [install the extension from the marketplace](https://marketplace.visualstudio.com/items/octopusdeploy.octopus-deploy-build-release-tasks) and follow the instructions below. If you're using an earlier version of the formerly named Team Foundation Server, see the [Extension Compatibility documentation](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/extension-compatibility) for details on where to get a compatible extension. After installing the extension, follow the steps below to get it running for your build. ## Use your own version of Octo :::div{.warning} Version 6+ of each of the steps no longer requires installing the CLI. ::: You can bring your own version of the Octopus CLI and avoid the use of installer tasks or accessing the Internet by [registering octo as a capability](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/install-octopus-cli-capability). ## Add a connection to Octopus Deploy Follow [these](https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints) instructions to create a new service connection and make sure you pick **Octopus Deploy**. Enter a valid [Octopus API Key](/docs/octopus-rest-api/how-to-create-an-api-key) in the **API Key** field and the Octopus Server URL. After you've saved the connection, it should be available from the Octopus Deploy Build Tasks. :::div{.hint} If you plan to use the Octopus widgets and want them to function for users other than project collaborators, such as stakeholders, then those users must be explicitly allowed to use the service endpoint. This can be achieved by adding those users to the service endpoint `Users` group. ::: ### Permissions required by the API key The API key you choose needs to have sufficient permissions to perform all the tasks specified by your builds. 
For the tasks themselves, these are relatively easy to determine (for example, creating a Release for Project A will require release creation permissions for that project). For the Azure DevOps UI elements provided by the extension, the API key must also have the permissions below. If one or more are missing, you should still be able to use the extension; however, the UI may encounter failures and require you to type values rather than select them from drop-downs. The dashboard widget will not work at all without its required permissions. If there are scope restrictions (e.g. by Project or Environment) against the account, the UI should still work, but results will be similarly restricted. :::div{.warning} Version 6+ of each of the steps no longer requires View permissions. ::: - ProjectView (for project drop-downs) - EnvironmentView (for environment drop-downs) - TenantView (for tenant drop-downs) - ProcessView (for channel drop-downs) - DeploymentView (for the dashboard widget) - TaskView (for the dashboard widget) - BuildInformationPush (for pushing build information to Octopus) - BuildInformationAdminister (required if `Overwrite Mode` is set to `Overwrite Existing`) ## Demands and the Octopus tools installer task The Azure DevOps extension tasks require the Octopus CLI to be available on the path when executing on a build agent. Because of this, we have created the "Octopus Tools installer" task, which will download the Octopus CLI for you; you need to specify the version of the CLI you want to use. Alternatively, you can have the Octopus CLI [available on the build agent](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/install-octopus-cli-capability). 
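As a sketch, the installer task can be added to a YAML pipeline like this (the task's major version shown here is illustrative and varies by extension release, so check the Marketplace page for the current one):

```yaml
steps:
  # Downloads the requested Octopus CLI version onto the build agent
  # so subsequent Octopus tasks can find it on the path.
  - task: OctoInstaller@5
    inputs:
      version: "latest"
```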
## Package your application and push to Octopus To integrate with Octopus Deploy, an application must be packaged as a NuGet package, ZIP archive, or tarball, and pushed to Octopus Deploy (or any [external feed](/docs/packaging-applications/package-repositories) supported by Octopus Server). :::div{.hint} We strongly recommend reading the [Build Versions in Azure DevOps](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/build-versions-in-team-build) guide for advice around build and package versions. ::: There are many ways to create such package formats in Azure DevOps; our recommended approach is to use the [Archive Files](http://go.microsoft.com/fwlink/?LinkId=809083) task. ## Add steps to your build or release process :::div{.hint} **Build or Release steps** The following steps can all be added to either your Build or Release process, depending on which you prefer. To add a step to your Build process, edit your Build Definition and click **Add build step**. To add a step to your Release process, edit your Release Definition, select the Environment, and click **Add tasks**. ::: ### Add a package application step :::div{.warning} Version 6+ of the package steps splits the functionality into **OctopusPackNuGet** and **OctopusPackZip**. ::: Add a step to your Build or Release process; our recommended approach is to use the [Archive Files](http://go.microsoft.com/fwlink/?LinkId=809083) task. :::div{.success} **Package versioning** We recommend you read the [Build Versions in Team Build](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/build-versions-in-team-build) document for full details on versioning builds and packages. 
::: #### Publish package artifact If your Package Application step is part of your Build process and your Push Packages to Octopus step is part of your Release process, then you will need to add a **Utility ➜ Publish Artifact** step to make the package available to the Release process. ```yaml - task: PublishBuildArtifacts@1 inputs: PathtoPublish: "$(Build.ArtifactStagingDirectory)" ArtifactName: "drop" publishLocation: "Container" ``` ### Add a push package(s) to Octopus step Add a step to your build or release process and search for the **Push Package(s) to Octopus** task. ```yaml - task: OctopusPush@4 inputs: OctoConnectedServiceName: "Octopus Server" Space: "Default" Package: "$(Build.ArtifactStagingDirectory)/packages/*.zip" Replace: "true" ``` See the [Extension Marketplace page](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) for a description of the fields (or the [Octopus CLI options](/docs/octopus-rest-api/octopus-cli/push) for more details). ### Add a push package build information to Octopus step :::div{.info} When using build information in release notes in conjunction with [built-in package repository triggers (formerly known as _Automatic Release Creation_)](https://octopus.com/docs/projects/project-triggers/built-in-package-repository-triggers), the build information **must** be pushed to Octopus **before** the packages are pushed to Octopus, as the release will be created as soon as the package configured for automatic release creation is pushed. ::: Add a step to your Build or Release process and search for the **Push Package Build Information to Octopus** task. 
```yaml - task: OctopusMetadata@4 inputs: OctoConnectedServiceName: "Octopus Server" Space: "Default" PackageId: | OctoFX.Database OctoFX.RateService PackageVersion: "$(Build.BuildNumber)" Replace: "false" ``` :::div{.warning} Version 6 of this step was renamed to **OctopusBuildInformation**, and the **PackageId** field is now called **PackageIds**. ::: ```yaml - task: OctopusBuildInformation@6 inputs: OctoConnectedServiceName: "Octopus Server" Space: "Default" PackageIds: | OctoFX.Database OctoFX.RateService PackageVersion: "$(Build.BuildNumber)" Replace: "false" ``` See the [Extension Marketplace page](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) for a description of the fields (or the [Octopus CLI options](/docs/octopus-rest-api/octopus-cli/build-information) for more details). :::div{.hint} There are known compatibility issues with the build link generated by the Octopus extension in some versions of the formerly named Team Foundation Server. See our [extension compatibility](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/extension-compatibility/#build-information-compatibility) page for more information. ::: ### Add a create Octopus release step Add a step to your Build or Release process and search for the **Create Octopus Release** task. ```yaml - task: OctopusCreateRelease@5 inputs: OctoConnectedServiceName: "Octopus Server" Space: "Default" ProjectName: "OctoFX" ``` ```yaml - task: OctopusCreateRelease@6 name: create_release inputs: OctoConnectedServiceName: "Octopus Server" Space: "Default" ProjectName: "OctoFX" ``` See the [Extension Marketplace page](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) for a description of the fields (or the [Octopus CLI options](/docs/octopus-rest-api/octopus-cli/create-release) for more details). 
### Add a deploy Octopus release step :::div{.warning} Version 6 of the **Octopus Deploy Release** task has been split into two steps; **OctopusDeployRelease@6** no longer supports deploying to Tenants. For Tenant deployments, use **OctopusDeployReleaseTenanted@6**. The deployment steps no longer support specifying `latest` for `ReleaseNumber`, as this is risky in an automation context and may cause the wrong version to be selected. Instead, the `release_number` output value from the v6 version of your create release step should be used. The deployment steps no longer support waiting on a deployment to complete; see [Await Task](#await-task) to await the result of deployments or runbooks. ::: Add a step to your Build or Release process and search for the **Deploy Octopus Release** task. ```yaml - task: OctopusDeployRelease@5 inputs: OctoConnectedServiceName: "Octopus Server" Space: "Default" Project: "OctoFX" ReleaseNumber: "latest" Environments: "Test" ``` ```yaml - task: OctopusDeployRelease@6 inputs: OctoConnectedServiceName: "Octopus Server" Space: "Default" Project: "OctoFX" ReleaseNumber: "$(create_release.release_number)" Environment: "Test" ``` ```yaml - task: OctopusDeployReleaseTenanted@6 inputs: OctoConnectedServiceName: "Octopus Server" Space: "Default" Project: "OctoFX" ReleaseNumber: "$(create_release.release_number)" Environment: "Test" DeployForTenants: | Customer A Customer B DeployForTenantTags: | VIP Customers ``` See the [Extension Marketplace page](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) for a description of the fields (or the [Octopus CLI options](/docs/octopus-rest-api/octopus-cli/deploy-release) for more details). ## Add a Run Runbook step Add a step to your build or release process and search for the **Octopus Run Runbook** task. 
```yaml
- task: OctopusRunRunbook@6
  name: backup-database
  inputs:
    OctoConnectedServiceName: "Octopus Server"
    Space: "Default"
    Project: "OctoFX"
    Runbook: "Backup Database"
    Environments: "Production"
```

See the [Extension Marketplace page](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) for a description of the fields.

## Add a promote Octopus release step

Add a step to your build or release process and search for the **Promote Octopus Release** task.

```yaml
- task: OctopusPromote@4
  inputs:
    OctoConnectedServiceName: "Octopus Server"
    Space: "Default"
    Project: "OctoFX"
    From: "Test"
    To: "Production"
```

See the [Extension Marketplace page](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) for a description of the fields (or the [Octopus CLI options](/docs/octopus-rest-api/octopus-cli/deploy-release) for more details).

## Add an Await Task {#await-task}

Add a step to your Build or Release process and search for the **Octopus Await Task** task.

```yaml
- task: OctopusAwaitTask@6
  name: wait
  continueOnError: true
  inputs:
    OctoConnectedServiceName: "Octopus Server"
    Space: "Default"
    Step: deploy-release
```

See the [Extension Marketplace page](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) for a description of the fields. The `Step` property is the name given to the **OctopusDeployRelease@6** or **OctopusRunRunbook@6** step that created the tasks to be awaited.

## Use the dashboard widget

On your Azure DevOps dashboard, click the `+` icon to add a new widget, then search for "Octopus Deploy". Add the **Octopus Deploy Status** widget.

Hover over the widget and click the wrench icon to configure the widget. Select an Octopus Deploy connection (see the [Add a Connection](#add-a-connection-to-octopus-deploy) section for details), a Project, and an Environment.
:::figure
![](/docs/img/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/images/widget-setup-preview.jpg)
:::

The widget should refresh to show the current status of the selected project in the selected environment.

![](/docs/img/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/images/multiple-widget-preview.jpg)

# Deployment target triggers

Source: https://octopus.com/docs/projects/project-triggers/deployment-target-triggers.md

Deployment target triggers (also known as auto-deploy) let you define unattended behavior for your [projects](/docs/projects/) that will cause an automatic deployment of a release into an [environment](/docs/infrastructure/environments). This means you can configure new deployment targets to behave just like their counterparts.

Deployment target triggers can help you:

- [Elastically scale a farm of servers](/docs/deployments/patterns/elastic-and-transient-environments).
- [Automatically keep your deployment targets up to date](/docs/deployments/patterns/elastic-and-transient-environments/keeping-deployment-targets-up-to-date) without needing to perform manual deployments.
- [Deploy to transient deployment targets](/docs/deployments/patterns/elastic-and-transient-environments/deploying-to-transient-targets) (targets that are disconnected from time to time).
- [Implement immutable infrastructure environments](/docs/deployments/patterns/elastic-and-transient-environments/immutable-infrastructure) (sometimes called "Phoenix Environments").
- Remove deployment targets that have gone offline. For instance, disable a machine in Octopus and have a deployment process that removes disabled machines from your load balancer.
On the surface, deployment target triggers appear simple, but they can grow complex very quickly. We recommend reading our [Elastic and Transient Environments](/docs/deployments/patterns/elastic-and-transient-environments) guide before getting started with your own implementation.

## Defining deployment target triggers

Deployment target triggers can be triggered by any machine-related event. A scheduled task runs in Octopus every 30 seconds looking for new events to determine whether any automatic deployment triggers need to fire. Each trigger is inspected to see if the recent stream of events should cause the trigger to fire, and if so, the appropriate deployments will be queued and run for the deployment target(s) that caused the trigger to fire.

Events have been placed into the following pre-defined groups:

| Event group | Included events |
| ----------- | --------------- |
| **Machine events** | Machine cleanup failed, Machine created, Machine deployment-related property modified, Machine disabled, Machine enabled, Machine found healthy, Machine found to be unavailable, Machine found to be unhealthy, Machine found to have warnings |
| **Machine critical-events** | Machine cleanup failed, Machine found to be unavailable |
| **Machine becomes available for deployment** | Machine enabled, Machine found healthy, Machine found to have warnings |
| **Machine is no longer available for deployment** | Machine disabled, Machine found to be unavailable, Machine found to be unhealthy |
| **Machine health changed** | Machine found healthy, Machine found to be unavailable, Machine found to be unhealthy, Machine found to have warnings |

:::div{.success}
For the majority of cases where you want to auto-deploy your project as new deployment targets become available, we advise using only the **Machine becomes available for deployment** event group.
:::

As you define your deployment target triggers, you can select the pre-defined **event groups** or individual **events**:

- Machine cleanup failed
- Machine created
- Machine deployment-related property modified
- Machine disabled
- Machine enabled
- Machine found healthy
- Machine found to be unavailable
- Machine found to be unhealthy
- Machine found to have warnings

You can restrict deployment target triggers further by specifying the following:

- The environment(s) the trigger applies to.
- The target tags the trigger applies to.
- The environment and target tags the trigger applies to.

## Add a deployment target trigger

1. From a project, select **Triggers**, then **Add Trigger ➜ Deployment target**.
2. Give the trigger a name.
3. Specify the event group or individual events that will trigger the releases.
4. If you want to limit the trigger to specific environments, select those environments.
5. If you want to limit the trigger to specific target tags, select those target tags.
6. Specify whether or not to re-deploy to deployment targets even if they are already up to date with the current deployment.
7. Save the trigger.

With the trigger saved, Octopus will run a scheduled task every 30 seconds looking for events that match the deployment trigger.

## Unattended release behavior

Deployment target triggers let you configure unattended deployment behavior that configures new deployment targets to be just like their counterparts. When a deployment target trigger fires, the following rules are applied:

- By default, Octopus will re-run the *currently successful* deployment for the project/environment/tenant combination. You can override this behavior by configuring an [auto deploy override](/docs/octopus-rest-api/octopus-cli/create-autodeployoverride). Note, if multiple identical deployment targets all become available within the same 30 second polling window, they will all be included in the same automatic deployment.
This could happen if you scale your web farm up by four nodes, and all four nodes finish provisioning within the same time window. However, this kind of behavior should not be expected or relied on (one or more of the targets might fall outside the 30 second window).
- The steps that were run for the *currently successful* deployment will be run for the deployment targets that triggered the deployment. This includes [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals/) and [guided failure](/docs/releases/guided-failures) steps. Note, if you skip steps in a manual deployment, they will be skipped in the subsequent automatic deployment. If you need to run a deployment and skip some steps, there are two ways you can reset the skipped steps:
  1. Re-running the entire deployment of the same release again (we generally recommend designing your steps so they can be re-run without negative side-effects).
  2. Configuring an [auto deploy override](/docs/octopus-rest-api/octopus-cli/create-autodeployoverride) for the same release to the same environment/tenant (this will result in a new deployment being generated without the manually skipped steps).
- If a deployment of a release fails, Octopus will continue deploying the last successful deployment. This ensures auto-deployments will continue, even if a release has been updated and failed.

## The order of deployment target triggers

Projects are considered independent in Octopus: there is no built-in way to define dependencies between projects or control the order in which projects are deployed.

:::div{.success}
We generally recommend catering for application dependencies in the applications themselves, rather than pushing that responsibility to your deployments. This practice will reduce friction between your applications, allowing you to reliably deploy them independently of each other.
:::

One workaround is to create a project in Octopus whose job is orchestrating the deployment of multiple projects. In this case you could:

1. Create a project that orchestrates the deployment of multiple projects.
2. Each step in the deployment process of this project could call the Octopus API to deploy the next project in the dependency chain, waiting for a successful deployment before continuing to the next project.
3. Optionally create a deployment target trigger in the orchestrating project to start the whole process.

:::div{.success}
The [Chain Deployment](https://library.octopus.com/step-template/actiontemplate-chain-deployment) step template might be a perfect fit for you in this situation, or you may want to customize this step template for more advanced scenarios.
:::

### Specifying a specific release to be deployed

If you need to specify a particular release, either because the release hasn't been deployed yet, or Octopus is calculating the wrong release for a particular situation, you can configure an [auto deploy override](/docs/octopus-rest-api/octopus-cli/create-autodeployoverride/) to override the default automatic deployment behavior. This is useful for scenarios like [immutable infrastructure](/docs/deployments/patterns/elastic-and-transient-environments/immutable-infrastructure/), [deploying to transient targets](/docs/deployments/patterns/elastic-and-transient-environments/deploying-to-transient-targets), and forcing deployment target triggers to use a specific release for a specific environment/tenant.

## Deployment target trigger subscription notifications

If you want to be notified of automatic deployment events, like blockages or failures, you can configure [subscriptions](/docs/administration/managing-infrastructure/subscriptions) to notify you by email or use web hooks to create your own notification channels. You can even use web hooks to code your own recovery behavior based on your specific situation.
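The API call described in step 2 of the orchestration workaround above could be sketched with a script like the following. This is a minimal sketch, not the Chain Deployment template itself: the `$OctopusUrl`, `$ApiKey`, `$SpaceId`, `$ProjectId`, and `$EnvironmentId` variables are placeholders you would supply, and it assumes the standard Octopus REST API endpoints for releases, deployments, and tasks.

```powershell
# Sketch: deploy the most recent release of a downstream project and wait for it to finish.
# Assumes $OctopusUrl, $ApiKey, $SpaceId, $ProjectId, $EnvironmentId are supplied elsewhere.
$headers = @{ "X-Octopus-ApiKey" = $ApiKey }

# Find the most recent release of the downstream project (releases are returned newest first)
$releases = Invoke-RestMethod -Headers $headers -Uri "$OctopusUrl/api/$SpaceId/projects/$ProjectId/releases"
$release = $releases.Items | Select-Object -First 1

# Queue a deployment of that release to the target environment
$body = @{ ReleaseId = $release.Id; EnvironmentId = $EnvironmentId } | ConvertTo-Json
$deployment = Invoke-RestMethod -Method Post -Headers $headers -Body $body `
    -ContentType "application/json" -Uri "$OctopusUrl/api/$SpaceId/deployments"

# Poll the server task until it completes, then fail this step if the deployment failed
do {
    Start-Sleep -Seconds 10
    $task = Invoke-RestMethod -Headers $headers -Uri "$OctopusUrl/api/$SpaceId/tasks/$($deployment.TaskId)"
} until ($task.IsCompleted)

if ($task.FinishedSuccessfully -ne $true) {
    throw "Downstream deployment failed: $($task.ErrorMessage)"
}
```

Failing the step on an unsuccessful downstream deployment is what stops the orchestrating project from continuing down the dependency chain.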
## Troubleshooting deployment target triggers

There are a number of reasons why automatic deployments may not work the way you expected, some of which we've already discussed. Here are some troubleshooting steps you can take to figure out what is going wrong.

### Is the dashboard green?

Octopus will attempt to automatically deploy the current releases for the environments that are appropriate for a machine. The current release is the one that was most recently *successfully* deployed, as shown on the [project dashboard](/docs/projects/project-dashboard).

- Octopus will not automatically deploy a release if the deployment for that release was not successful (this can be a failed deployment or even a canceled deployment). You will need to complete a successful deployment again before auto-deployments can continue for the given release, or configure an [auto deploy override](/docs/octopus-rest-api/octopus-cli/create-autodeployoverride).

### Investigate the diagnostic logs

Go to **Configuration ➜ Diagnostics ➜ Auto deploy logs**. The **verbose** logs usually contain the reason why a project trigger didn't take any action. For example:

```
Auto-deploy: Machine 'Local' does not need to run release '2.6.6' for project 'My Project' and tenant '' because it already exists on the machine or is pending deployment.
```

:::div{.info}
Diagnostics are only available on [self-hosted Octopus](/docs/getting-started#self-hosted-octopus) instances.
:::

### Investigate the audit messages

The deployment triggers are all triggered based on events occurring in Octopus, all of which are logged reliably as audit events. Go to **Configuration ➜ Audit** and filter down to see the events related to your deployments.

# Cross-Site Request Forgery (CSRF) and Octopus Deploy

Source: https://octopus.com/docs/security/cve/csrf-and-octopus-deploy.md

We take every reasonable effort to make Octopus Deploy secure against known vulnerabilities, mainly related to browser vulnerabilities.
One such browser vulnerability is Cross-Site Request Forgery (CSRF).

## What is CSRF?

Using a CSRF attack, a malicious actor could potentially simulate requests to the Octopus Server on behalf of an authenticated user. For more information about CSRF refer to the following:

- [Cross-Site Request Forgery according to OWASP](https://owasp.org/www-community/attacks/csrf)

## Does Octopus Deploy prevent CSRF attacks?

Yes. The Octopus HTTP API is protected from CSRF attacks out of the box by requiring an anti-forgery token using a combination of the [Synchronizer Token Pattern](https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#synchronizer-token-pattern) and the [Encrypted Token Pattern](https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#encryption-based-token-pattern). If you are using any tools provided by Octopus Deploy, including the Web Portal and Client SDK, this is all done for you automatically and transparently.

### Headers, cookies, and HttpOnly

Upon authenticating your client, Octopus Server sends two cookies back to the browser via HTTP headers:

- `OctopusIdentificationToken` with `HttpOnly=true` - this is an encrypted session token which the Octopus Server uses to identify the authenticated user making the HTTP request.
- `Octopus-Csrf-Token` with `HttpOnly=false` - this cookie contains the encrypted synchronizer token which should be sent as a special header as part of every authenticated HTTP request.

The anti-forgery cookie is configured as `HttpOnly=false` because the Octopus JavaScript client requires access to the cookie in order for the Octopus Server to detect and prevent CSRF attacks.
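For illustration, the exchange might look like this on the wire. The cookie and header names are those documented above; the token values, paths, and cookie attributes are placeholders.

```
HTTP/1.1 200 OK
Set-Cookie: OctopusIdentificationToken=<encrypted-session-token>; HttpOnly
Set-Cookie: Octopus-Csrf-Token=<encrypted-synchronizer-token>

GET /api/projects HTTP/1.1
Cookie: OctopusIdentificationToken=<encrypted-session-token>
X-Octopus-Csrf-Token: <encrypted-synchronizer-token>
```

A subsequent authenticated request echoes the session token back as a cookie and the synchronizer token back as a header, which is what lets the server correlate the two.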
After authenticating, your HTTP requests should contain these headers to prove the identity and origin of each request:

- a `Cookie` header containing the `OctopusIdentificationToken`
- the `X-Octopus-Csrf-Token` header - this header contains the encrypted synchronizer token which proves the request is legitimate

If you are using any tools provided by Octopus Deploy, including the Web Portal and Client SDK, this is all done for you automatically and transparently.

## Troubleshooting

If certain HTTP requests are not correctly formed, you may see an error message like this either in your web browser or when trying to use the Octopus REST API directly:

`A required anti-forgery token was not supplied or was invalid.`

Octopus also logs a warning like this to your Octopus Server logs:

`It looks like we just prevented a cross-site request forgery (CSRF) attempt on your Octopus Server: The required anti-forgery token was not supplied or was invalid.`

### Using the Octopus Web Portal

If you see this kind of error message when using the Octopus Web Portal in your browser, please try the following steps:

1. Refresh the Octopus Web Portal in your browser (to make sure you have the latest JavaScript).
1. Sign out of the Octopus Web Portal.
1. Sign back in to the Octopus Web Portal.
1. If this doesn't work, please try clearing the cookies from your browser and trying again.
1. After signing in, you should see two cookies from the Octopus Server - the authentication cookie and the anti-forgery cookie. See the next section on [troubleshooting cookie problems](#cookies).
1. If this doesn't work, please [ask us for help](#support) - see below.

#### Troubleshooting problems with cookies {#cookies}

Octopus requires two cookies when using a web browser: the authentication cookie and the anti-forgery cookie. Check in your browser and make sure both cookies are available. Either one of these cookies can be missing for quite a number of reasons:

1.
Your web browser does not support cookies. Configure your browser to accept cookies from your Octopus Server. You may need to ask your systems administrator for help with this.
1. The time is incorrect on your computer, or the time is incorrect on the Octopus Server. This can cause your authentication cookies to expire and become unusable. Correct the time and configure your computers to automatically synchronize their time from a time server.
1. You are using Chrome and have not configured your Octopus Server to use HTTPS. Chrome has started to consider web sites served over `http://` as unsafe and will refuse to accept cookies from those unsafe sites. [Configure your Octopus Server to use HTTPS](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https) instead of HTTP. [Learn more about Chrome and the move toward a more secure web](https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html).
1. You have a network device between your browser and your Octopus Server which is stripping cookies it doesn't trust, or is modifying cookies and setting `HttpOnly=true`. The anti-forgery cookie is configured as `HttpOnly=false` because the Octopus JavaScript client requires access to the cookie. Some firewalls or proxies can be configured to strip or modify cookies like this in the HTTP response headers. You should configure your network device to allow this cookie through to the browser without removing or modifying it.
1. You are hosting Octopus Server on the same domain as other applications. One of the other applications may be issuing a malformed cookie, causing the Octopus authentication cookies to be misinterpreted. Move Octopus Server to a different domain to isolate it from the other applications, or stop the other applications from issuing malformed cookies. See [this GitHub Issue](https://github.com/OctopusDeploy/Issues/issues/2343) for more details.
### Using the Octopus REST API with raw HTTP

If you use raw HTTP to access Octopus Deploy, we recommend using an [API Key](/docs/octopus-rest-api/how-to-create-an-api-key/) to authenticate your requests. Learn about the [Octopus REST API](/docs/octopus-rest-api), including [authenticating with the Octopus REST API](/docs/octopus-rest-api/#authentication).

### Contact Octopus support {#support}

If none of these troubleshooting steps work, please get in contact with our [support team](https://octopus.com/support) and send along the following details (feel free to ignore points if they don't apply):

a. Which browser and version are you using? (Help > About in your browser is the best place to get this information.)
b. Does the same thing happen with other browsers, like Internet Explorer, Google Chrome, Firefox?
c. Does the same thing happen for other people/users?
d. Does the same thing happen when you access Octopus Deploy over another network, like from home or over your cellular network?
e. Does the same thing happen if you use InPrivate/Incognito mode in your browser?
f. Does the same thing happen after clearing all browser data for the Octopus Server (including cookies, history, local data, stored credentials)?
g. Have you used any other versions of Octopus Deploy in this browser before?
h. Do you have other web applications hosted on the same server?
i. Do you have other web applications hosted on the same domain? (for example: `octopus.mycompany.com` and `myapp.mycompany.com`)
j. Do you have any intermediary network devices (like proxies or web application firewalls) which may be stripping custom HTTP headers or cookies from your requests?
k. Please [record the problem occurring in your web browser](/docs/support/record-a-problem-with-your-browser) and send the recording to us for analysis. Please record the following steps: sign out of Octopus Deploy, sign back in again, and then try to do the action that fails.
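As noted in the raw HTTP section above, requests authenticated with an API key do not rely on the cookie-based anti-forgery machinery at all, which avoids this whole class of problem in scripts. A minimal sketch, assuming a hypothetical server URL and API key:

```powershell
# Placeholder values - substitute your own server URL and API key
$octopusUrl = "https://your-octopus.example.com"
$headers = @{ "X-Octopus-ApiKey" = "API-XXXXXXXXXXXXXXXX" }

# No cookies or X-Octopus-Csrf-Token header are required when authenticating with an API key
Invoke-RestMethod -Headers $headers -Uri "$octopusUrl/api/projects"
```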
# SHA1 "Shattered" collision and Octopus Deploy

Source: https://octopus.com/docs/security/cve/shattered-and-octopus-deploy.md

:::figure
![Shattered logo](/docs/img/security/cve/shattered-logo.png)
:::

_Extracted from our [blog post in 2017](https://octopus.com/blog/shattered)._

In 2017 [Google announced](https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html) an attack that makes it [practically possible to generate SHA1 hash collisions](http://shattered.io/). The risk seems to focus on areas where certificates are used for digital signatures, not encryption. So far we haven't seen any clear reports that this applies to SSL/TLS - our understanding is that there's a risk someone could make a fake certificate and it could be "trusted" as if it were a real one, but not that SSL/TLS data could be decrypted. Of course, I'm no expert! Either way, SHA1 has been on the way out for some time, and certificate authorities stopped issuing SHA1 certificates quite some time ago. This is just another nail in the coffin for SHA1.

## Octopus and Tentacle prior to 3.14 use SHA1 by default

When you install Octopus and the Tentacle agent, they both generate X.509 certificates that are used to encrypt the connection between them (via TLS). When we generated these self-signed certificates in versions before 3.14, **we used SHA1**. This is the default setting of the [certificate generation function](https://msdn.microsoft.com/en-us/library/windows/desktop/aa376039(v=vs.85).aspx) that we call in the Windows API, and not something we ever thought to change.

## Mitigation

**Starting out with Octopus?** Install Octopus and Tentacle 3.14 or newer if you are starting from scratch so SHA256 certificates will be generated by default.
**Already have Octopus up and running and concerned about the SHA1 certificates?** You can upgrade to Octopus and Tentacle 3.14 or newer, use the command-line interface to generate new SHA256 certificates, then tell Octopus and Tentacle to use/trust those instead. For details, check our documentation page on [how to use custom certificates with Octopus and Tentacle](/docs/security/octopus-tentacle-communication/custom-certificates-with-octopus-server-and-tentacle).

## Other things you should check

You'll want to check whether SHA1 is being used in other places. Common examples for Octopus users might include:

- The certificate used for the Octopus web frontend if you use HTTPS. Normally this is something people provide themselves.
- Certificates used for authenticating with third party services, like Azure management certificates.
- Certificates used to provide HTTPS for websites that you deploy.

## Detecting SHA1 certificates with PowerShell

Given an `X509Certificate2` object, here's a PowerShell function that checks whether it uses SHA1:

```powershell
function Test-CertificateIsSha1 {
    [CmdletBinding()]
    param(
        [Parameter(Position = 0, Mandatory = $true, ValueFromPipeline = $true)]
        [System.Security.Cryptography.X509Certificates.X509Certificate2[]]$Certificate
    )
    process {
        foreach ($cert in $Certificate) {
            $algorithm = $cert.SignatureAlgorithm.FriendlyName
            $isSha1 = $algorithm.Contains("sha1")
            Write-Output $isSha1
        }
    }
}
```

Here's a PowerShell script that you can use to check whether a website is using a SHA1 certificate.
Kudos to Jason Stangroome for the initial implementation of [Get-RemoteSSLCertificate](https://gist.github.com/jstangroome/5945820):

```powershell
function Get-RemoteSSLCertificate {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0, Mandatory = $true, ValueFromPipeline = $true)]
        [System.Uri[]]$URI
    )
    process {
        foreach ($u in $URI) {
            $Certificate = $null
            $TcpClient = New-Object -TypeName System.Net.Sockets.TcpClient
            try {
                $TcpClient.Connect($u.Host, $u.Port)
                $TcpStream = $TcpClient.GetStream()
                # Accept any certificate - we only want to inspect it, not validate it
                $Callback = { param($sender, $cert, $chain, $errors) return $true }
                $SslStream = New-Object -TypeName System.Net.Security.SslStream -ArgumentList @($TcpStream, $true, $Callback)
                try {
                    $SslStream.AuthenticateAsClient('')
                    $Certificate = $SslStream.RemoteCertificate
                } finally {
                    $SslStream.Dispose()
                }
            } finally {
                $TcpClient.Dispose()
            }
            if ($Certificate) {
                if ($Certificate -isnot [System.Security.Cryptography.X509Certificates.X509Certificate2]) {
                    $Certificate = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $Certificate
                }
                Write-Output $Certificate
            }
        }
    }
}

$sites = @("https://www.yoursite.com", "https://anothersite.com")
$sites | ForEach-Object {
    $site = $_
    $cert = Get-RemoteSSLCertificate -Uri $site
    if (Test-CertificateIsSha1 -Certificate $cert) {
        Write-Warning "Site: $site uses SHA1"
    }
}
```

And here's a script that checks whether your IIS server is using any SHA1 certificates:

```powershell
Import-Module WebAdministration

foreach ($site in Get-ChildItem IIS:\Sites) {
    foreach ($binding in $site.bindings.Collection) {
        if ($binding.protocol -eq "https") {
            $hash = $binding.CertificateHash
            $store = $binding.certificateStoreName
            $certs = Get-ChildItem "Cert:\LocalMachine\$store\$hash"
            foreach ($cert in $certs) {
                if (Test-CertificateIsSha1 -Certificate $cert) {
                    Write-Warning "Site: $($site.Name) uses SHA1"
                }
            }
        }
    }
}
```

You can easily run this in the [Octopus Script Console](/docs/administration/managing-infrastructure/script-console) across
all of your machines:

![Running the IIS SHA1 binding detection in the Octopus script console](/docs/img/security/cve/shattered-console.png)

# Spectre (Speculative Execution Side-Channel Vulnerabilities), Meltdown, and Octopus Deploy

Source: https://octopus.com/docs/security/cve/spectre-meltdown-and-octopus-deploy.md

In January 2018 [Google announced](https://googleprojectzero.blogspot.com.au/2018/01/reading-privileged-memory-with-side.html) an attack that makes it practically possible to leak information from kernel memory on the host operating system.

> We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts.

So far, there are three known variants of the issue:

- Variant 1: bounds check bypass ([CVE-2017-5753](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5753)).
- Variant 2: branch target injection ([CVE-2017-5715](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5715)).
- Variant 3: rogue data cache load ([CVE-2017-5754](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5754)).

## Impact on Octopus Deploy

Octopus Deploy is not directly affected by these vulnerabilities. However, since the host operating system and underlying hardware can be vulnerable, any application running on affected systems can be affected. For your Octopus installation, this would include servers hosting:

- Octopus Server.
- Microsoft SQL Server which is hosting your Octopus database.
- The targets of your deployments.

## Mitigation

The mitigation for these vulnerabilities is all related to the host operating system and underlying hardware. There is no specific mitigation for Octopus Deploy.
For Microsoft operating systems, follow these security advisories to ensure your host operating system and underlying hardware are protected against these vulnerabilities:

- [Guidance to mitigate speculative execution side-channel vulnerabilities](https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/ADV180002)
- [Windows Client Guidance for IT Pros to protect against speculative execution side-channel vulnerabilities](https://support.microsoft.com/en-au/help/4073119/protect-against-speculative-execution-side-channel-vulnerabilities-in)

For software and hardware from all other vendors, please follow the mitigation in each CVE report listed above.

# Cross-Site Scripting (XSS) and Octopus Deploy

Source: https://octopus.com/docs/security/cve/xss-and-octopus-deploy.md

We take every reasonable effort to make Octopus Deploy secure against well-known vulnerabilities. One such vulnerability is Cross-Site Scripting (XSS) in web browsers.

## What is XSS?

Using an XSS attack, a malicious actor could potentially trick a user's web browser into executing unintended code. For more information about XSS refer to the following:

- [Cross-Site Scripting according to OWASP](https://owasp.org/www-community/attacks/xss/)
- [Cross-Site Scripting Prevention Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html)

## Does Octopus Deploy prevent XSS attacks?

Yes. The Octopus Server, its HTTP API, and browser-based user interface are designed to actively prevent a wide variety of XSS attacks, including stored XSS, reflected XSS, and DOM-based XSS. The responsibility for protecting against XSS attacks is shared between us as the software vendor, and you as the customer.

### What is Octopus Deploy responsible for?

We take responsibility to provide commercially reasonable protection against XSS attacks.
If you [find and report](https://octopus.com/security/disclosure) a specific XSS vulnerability, we will follow our practice of [responsible disclosure](https://octopus.com/security/disclosure).

### What is the customer responsible for?

As the customer, you are responsible for granting access to people you trust, and ensuring the security of your own network and operating systems.

XSS is a browser-based vulnerability. The Octopus Server and its browser application work on your behalf to actively prevent XSS attacks, and the browser itself can also help in the prevention of XSS attacks. To this end, **Octopus 4.0** (and newer) has increased its minimum supported browser requirements to further improve the security of your Octopus installation. Learn about [supported web browsers](/docs/installation/requirements/#supported-browsers).

## How Octopus Deploy prevents XSS attacks {#prevention}

The only perfect way to prevent every possible XSS attack would be if Octopus Deploy didn't use a web browser. We have taken great care to design Octopus Deploy to strike a balance between enabling our customers to do what Octopus Deploy was designed to do, and preventing harm via XSS.

At the time of writing, Octopus Deploy actively follows these XSS prevention rules from the [OWASP XSS (Cross Site Scripting) Prevention Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html):

- RULE #0 - Never Insert Untrusted Data Except in Allowed Locations.
- RULE #1 - HTML Escape Before Inserting Untrusted Data into HTML Element Content.
- RULE #2 - Attribute Escape Before Inserting Untrusted Data into HTML Common Attributes.
- RULE #3 - JavaScript Escape Before Inserting Untrusted Data into JavaScript Data Values.
- RULE #3.1 - HTML escape JSON values in an HTML context and read the data with JSON.parse.
- RULE #4 - CSS Escape And Strictly Validate Before Inserting Untrusted Data into HTML Style Property Values.
- RULE #5 - URL Escape Before Inserting Untrusted Data into HTML URL Parameter Values.
- RULE #6 - Sanitize HTML Markup with a Library Designed for the Job.
- RULE #7 - Prevent DOM-based XSS.
- Bonus Rule #1: Use the `HTTPOnly` cookie flag.
- Bonus Rule #2: Implement Content Security Policy.
- Bonus Rule #3: Use an Auto-Escaping Template System.
- Bonus Rule #4: Use the X-XSS-Protection Response Header.

### Content is sanitized by default

The Octopus Deploy web user interface is built using modern web frameworks, where the default behavior is to sanitize data before it is added to the DOM.

- **Octopus 2.0** to **Octopus 3.17** is built using [AngularJS](https://angularjs.org/), which employs [strict contextual escaping by default](https://docs.angularjs.org/api/ng/service/$sce).
- **Octopus 4.0**+ is built using [React](https://reactjs.org/), which employs a similar technique.

This doesn't make our web user interface perfect, but it makes it easier for our developers to be safe-by-default. If we really want to render unsanitized content to the browser DOM, we have to explicitly opt out of the safe-by-default behavior and mitigate the security risks via other means.

### Markdown is preferred over HTML

Some places in Octopus Deploy allow a user to add rich content, like descriptions for most things. In these cases the supported format is [markdown](http://commonmark.org/) instead of HTML.

### Session cookie is protected from malicious scripts

When you sign in to the Octopus Deploy web user interface, the server will send back an encrypted cookie called `OctopusIdentificationToken` in the response header with the `HttpOnly=true` cookie flag set. Even if an attacker could successfully execute a malicious script, the browser will prevent that script from accessing the session cookie. In the worst case, where an attacker could steal the session cookie, Octopus Deploy actively protects against [Cross-Site Request Forgery](/docs/security/cve/csrf-and-octopus-deploy).
### A strict content security policy (CSP) is configured By default, your Octopus Server implements a strict Content Security Policy (CSP). This policy is configured to limit exposure to XSS when using a browser which supports CSP. You can see the `Content-Security-Policy` of your Octopus Server by inspecting any of the HTTP responses sent to your browser. Learn about [HTTP security headers used by Octopus Deploy](/docs/security/http-security-headers). ### Built-in XSS filters are enforced in modern browsers The Octopus Server forces modern browsers to enable their built-in XSS filters, even if these filters were disabled by the user, by adding the `X-XSS-Protection` header to every HTTP response. Learn about [HTTP security headers used by Octopus Deploy](/docs/security/http-security-headers). ## Frequently asked questions {#faq} These are some questions we get asked frequently by security-conscious customers. ### Why isn't the content of API responses HTML encoded? Octopus Deploy is built "API-first". The web browser is not the only client of the Octopus Server. People frequently use command-line tools or scripts to automate the Octopus Server via its HTTP API: it doesn't make sense to HTML-encode API responses when the client is not an HTML client. The Octopus Server performs content negotiation, and will encode the response appropriately for the content type supported by the client: - If the response is sent as `text/html`, the data in the response will be HTML-encoded. - If the response is sent as `text/json`, the data will not be HTML-encoded. ### Which web browsers are supported by Octopus Deploy? The quick answer is "all modern browsers". Learn more about [supported browsers](/docs/installation/requirements). # ZipBombs and Octopus Deploy Source: https://octopus.com/docs/security/cve/zipbombs-and-octopus-deploy.md We take every reasonable effort to make Octopus Deploy secure against well-known vulnerabilities. 
One such vulnerability is denial-of-service attacks via ZipBombs. ## What is a ZipBomb? Using a specially crafted archive file (e.g. .zip, .jar, .tar.gz), a malicious actor could potentially cause a machine to become unresponsive by consuming large amounts of CPU and disk space. For more information about Zip Bombs, refer to the following: - [ZipBombs according to OWASP](https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/10-Business_Logic_Testing/09-Test_Upload_of_Malicious_Files#zip-bombs) - [ZipBombs according to Security Researcher, David Fifield](https://www.bamsoftware.com/hacks/zipbomb/) ## Does Octopus Deploy prevent ZipBomb attacks? Yes. The Octopus Server, and deployment agents that run on Workers and Deployment Targets, are designed to detect and prevent extraction of potentially malicious archives. The responsibility for protecting against ZipBomb attacks is shared between us as the software vendor, and you as the customer. ### What is Octopus Deploy responsible for? We take responsibility to provide commercially reasonable protection against ZipBomb attacks. If you [find and report](https://octopus.com/security/disclosure) a specific ZipBomb vulnerability, we will follow our practice of [responsible disclosure](https://octopus.com/security/disclosure). We will assist Octopus Cloud customers in adjusting their instance limits if [the default limits](#cloud-limits) are impacting their ability to deploy legitimate packages. ### What is the customer responsible for? As the customer, you are responsible for granting access to people you trust, and ensuring the security of your own network and operating systems. Octopus Server ensures that archives can only be uploaded for processing by authenticated users with specific permissions, which means that the only potential attackers are users whose access to the Octopus instance has been specifically granted by the customer. 
For self-hosted Octopus Deploy instances, the customer is responsible for determining an appropriate risk position and adjusting [the available configuration settings](#self-hosted-limits) if the defaults are not appropriate to their deployment requirements. ## How Octopus Deploy prevents ZipBomb attacks {#prevention} The only perfect way to prevent every possible ZipBomb attack would be if Octopus Deploy didn't extract archives uploaded by users. As deployment packages are almost always archives, this is obviously not a viable solution. We have taken great care to design Octopus Deploy to strike a balance between enabling our customers to do what Octopus Deploy was designed to do, and preventing harm via ZipBombs. ### Octopus Cloud default archive limits {#cloud-limits} The following archive limits are in place for all Octopus Cloud customers, which generally align with the available resources on the Octopus Cloud infrastructure: * Maximum size a deployment package can decompress to: 1 terabyte * Applies to all Deployment Targets, Dynamic Workers and self-hosted Worker Pools * Maximum size an archive can decompress to on Octopus Server for all other operations: 10 gigabytes * Applies to any other non-deployment operations that use archives These limits can be adjusted on a per-customer basis. If your standard business operations are being impacted by these limits, please contact our [support team](https://octopus.com/support) and we'll be happy to help adjust your limits to find the appropriate balance of functionality and protection. 
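The core of this kind of protection is simple to sketch: before extracting an archive, add up the sizes its entries claim they will decompress to, and refuse to proceed past a limit. The following Python sketch illustrates the general technique only; it is not Octopus's implementation, and a production guard would also cap the bytes actually written during extraction, since archive headers can be forged.

```python
import io
import zipfile

def guarded_decompressed_size(data: bytes, limit_bytes: int) -> int:
    """Sum the declared uncompressed sizes of a zip's entries,
    raising before extraction if the total exceeds limit_bytes."""
    with zipfile.ZipFile(io.BytesIO(data)) as archive:
        total = sum(info.file_size for info in archive.infolist())
    if total > limit_bytes:
        raise ValueError(
            f"archive claims {total} bytes decompressed, over the {limit_bytes} byte limit"
        )
    return total

# Build a small in-memory archive to exercise the guard.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("app/config.json", b"{}" * 1024)  # 2048 bytes uncompressed

guarded_decompressed_size(buffer.getvalue(), limit_bytes=10 * 1024**3)  # under a 10 GB limit
```

A crafted ZipBomb declares (and delivers) an enormous decompressed size from a tiny download, so the check fails fast before any disk space or CPU is spent on extraction.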
### Self-Hosted default archive limits {#self-hosted-limits} The following default archive limits are in place for all self-hosted instances: * Maximum size a deployment package can decompress to: 1 petabyte * Applies to all Deployment Targets, the in-built Worker (the "Run on Octopus Server" option available for some steps), and self-hosted Worker Pools * Maximum size an archive can decompress to on Octopus Server for all other operations: 1 terabyte * Applies to any other non-deployment operations that use archives These limits can be adjusted by an Octopus Server Administrator, via the Configuration > Settings > Archive Limits page in Octopus Server. # How to use custom certificates with Octopus Server and Tentacle Source: https://octopus.com/docs/security/octopus-tentacle-communication/custom-certificates-with-octopus-server-and-tentacle.md :::div{.info} **Custom certificates are only supported on Self-Hosted instances of Octopus Server** ::: Octopus uses self-signed certificates to securely communicate between Tentacles and the server. However, if you have a requirement to use your own certificates, you can use the import-certificate command, which is supported by both Octopus Server and Tentacle. The command imports certificates as Personal Information Exchange (PFX) files, with an optional password. Octopus requires that the PFX file contains the certificate's private key. For more information on self-signed certificates, see the [blog post](https://octopus.com/blog/why-self-signed-certificates) on the topic. :::div{.warning} **Updating an existing Octopus Server or Tentacle** It's important to consider the impact of updating an existing Octopus Server or Tentacle as changes are required to ensure each component trusts the other. Read the information below carefully. ::: ## Configuring Octopus Server to use custom certificates This assumes you have already installed Octopus on the target server. 1. 
Stop the OctopusDeploy service on the target Octopus Server you wish to update. 2. Optionally export the current certificate as a backup, by executing the following statement at a command line on the same server. ```batch Octopus.Server.exe export-certificate --export-pfx="C:\PathToCertificate\cert.pfx" --pfx-password="Password" --console ``` 3. Execute the following statement at a command line on the same server. Note that the password is optional. ```batch Octopus.Server.exe import-certificate --from-file="C:\PathToCertificate\cert.pfx" --pfx-password="Password" --console ``` This should display something like the following. ```batch Octopus Deploy: Server version 3.14.x Importing the certificate stored in PFX file in C:\PathToCertificate\cert.pfx using the provided password... The certificate CN=OctopusServer was regenerated; old thumbprint = F1D30DE16AFBA30CB8FD20070856EECC15DDF06C, new thumbprint = 1EA1B432478117393C8BA435FD42727C0E87445C Certificate imported successfully. ``` :::div{.hint} **Letting the Server regenerate its own certificate** If you have come from an earlier version of Octopus with a shorter security key, or just want the Server to use a new certificate without having to generate one yourself, you can follow the steps in this section but substitute the command in step 2 with the following: ```batch Octopus.Server.exe new-certificate --export-pfx="C:\PathToCertificate\cert.pfx" --pfx-password="Password" --console ``` The command will then return ```batch Octopus Deploy: Server version 3.14.x Generating certificate... The Octopus Server currently uses a certificate with thumbprint: C7524763110D271520C15B6A50296200DA6DDCAA A new certificate has been generated with thumbprint: CF3B5562510A2DFCB95909878C1ADD7CCE50FE2B The new certificate has been written to C:\PathToCertificate\cert.pfx. ``` Then import the new certificate (see step 3 above). ::: 4. Restart the OctopusDeploy service. 5. 
The next step is to update all the associated Tentacles to trust the new certificate. This is done by stopping the Tentacle service you wish to update and then executing the following statement with the new thumbprint from the step above. Finally, restart the Tentacle service. ```powershell Tentacle.exe configure --trust NewOctopusServerCertificateThumbprint --console ``` ## Configuring Tentacle to use custom certificates This assumes you have already installed a Tentacle on the target server. 1. Stop the Tentacle service on the target server you wish to update. 2. Execute the following statement at a command line on the same server. ```batch tentacle.exe import-certificate --from-file="C:\PathToCertificate\cert.pfx" --pfx-password="Password" --console ``` This should display something like the following. ```batch Octopus Deploy: Tentacle version 3.14.x Importing the certificate stored in PFX file in C:\PathToCertificate\cert.pfx using the provided password... Certificate with thumbprint DE010ABF6FF8ED1B7895A31F005B8D88A3329867 imported successfully. ``` :::div{.hint} **Letting the Tentacle regenerate its own certificate** If you have come from an earlier version of Octopus with a shorter security key, or just want the Tentacle to use a new certificate without having to generate one yourself, you can follow these steps in this section but substitute the command in step 2, with the following ```batch tentacle.exe new-certificate --export-pfx="C:\PathToCertificate\cert.pfx" --pfx-password="Password" --console ``` The command will then return ```batch Octopus Deploy: Tentacle version 3.2.x A new certificate has been generated and written to C:\PathToCertificate\cert.pfx. Thumbprint: DE010ABF6FF8ED1B7895A31F005B8D88A3329867 ``` Import the new certificate as above. ::: 3. Restart the Tentacle service. 4. Execute the following command to display the updated thumbprint. ```batch Tentacle.exe show-thumbprint ``` This should display something like the following. 
```batch Octopus Deploy: Tentacle version 3.12.x The thumbprint of this Tentacle is: DE010ABF6FF8ED1B7895A31F005B8D88A3329867 ``` 5. Open the Octopus Web Portal and select the Tentacle on the Environments page. 6. Update the Tentacle thumbprint to use the value from Step 4 above and click the **Save** button. 7. Select the Connectivity tab and then click Check health to verify the connection is working. If it's not, double-check the Octopus Server and Tentacle thumbprints to ensure they are correct. # Script integrity in Octopus Deploy Source: https://octopus.com/docs/security/script-integrity.md Octopus supports a wide variety of scripting languages and runtimes. Octopus executes your scripts as provided using the language and runtime best suited to the script in the host operating environment. The content of these scripts can come from a wide variety of sources, including: - Built-in steps provided by Octopus. - [Step templates contributed by the community and curated by Octopus](/docs/projects/community-step-templates). - Your own [custom scripts](/docs/deployments/custom-scripts). - Your own [custom script modules](/docs/deployments/custom-scripts/script-modules). - Your own [custom step templates](/docs/projects/custom-step-templates). Octopus lets you tailor these scripts to your needs at runtime using features like [dynamically substituting variables into your script files](/docs/projects/steps/configuration-features/substitute-variables-in-templates). Once this is done, Octopus will inject your script into a "bootstrapper" enabling friction-free interaction with important Octopus features like [variables](/docs/projects/variables/), [output variables](/docs/projects/variables/output-variables/), and [artifacts](/docs/projects/deployment-process/artifacts). ## Script integrity in Octopus Octopus does not actively enforce script integrity. 
- PowerShell is the only common scripting language which supports script integrity verification, but your scripts are [executed using -ExecutionPolicy Unrestricted](https://github.com/OctopusDeploy/Calamari/blob/b23ea09bd17a49fd2b0c9bae588ef1012db4f8c2/source/Calamari.Shared/Integration/Scripting/WindowsPowerShell/PowerShellBootstrapper.cs#L71). - PowerShell Execution Policies alone are not a foolproof security solution, and can be thwarted quite simply: PowerShell will happily execute a script with malicious content as long as it meets the requirements of the Execution Policy. - Octopus provides a lot of value to you by modifying your scripts on your behalf, which invalidates the signature of the original script. - Octopus could dynamically re-sign the resulting script after modification, but this introduces extra complexity and additional security concerns for very little gain: the signed script has a very short life span. ### More on PowerShell execution policies and Octopus Customers who use PowerShell will typically become aware of [Execution Policies](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies) early on. One question we are asked from time to time is how to make Octopus work in environments where the PowerShell Execution Policy is forced to `Restricted`, `AllSigned`, or `RemoteSigned`. The short answer is: you cannot. Requiring PowerShell scripts to be signed means: 1. Your server needs to trust the code-signing certificate used to sign the script. 2. The script can only be executed if it remains unchanged after it was signed. When it comes to a trusted certificate chain this is a solvable problem. Think back to the different sources your scripts can come from: - We could sign all our built-in scripts with our own code-signing certificate, and offer you a link to download the public certificate. - We could sign any scripts we curate into the community library in a similar way. 
- You could force your team to sign any custom scripts they author using your own trusted code-signing certificate. Let's consider the next step: to provide value Octopus will modify the original script which invalidates the original signature... but what if Octopus could dynamically re-sign the modified script with a trusted certificate? We could make this work, but it does introduce additional complexity and security concerns: 1. You could provide Octopus with the public and private key pair for a code-signing certificate which is trusted by your servers. 2. Octopus would push that code-signing certificate onto the server where the script will be modified and executed, which introduces a security concern: anyone with access to that server could sign a script containing malicious content. At this point we do not see the genuine value proposition in supporting a feature like this. ## Recommendations Set your PowerShell Execution Policy for general user accounts to one of the more restrictive options, but allow the Octopus user account to use the `Unrestricted` security policy. If this approach doesn't feel like it will work, and script integrity is a real problem for you, please [get in touch with us](https://octopus.com/support). We would be very happy to help find an acceptable solution for your specific situation. In case you want to read further and consider other options for securing your Octopus installation: - Octopus provides a robust and detailed security model allowing you to control who has access to certain projects, environments, tenants, and ultimately which people can author the scripts which are executed by Octopus on your behalf. Learn about [managing users and teams](/docs/security/users-and-teams). - Most Octopus customers use source control systems to track changes to all their scripts, using a trusted chain of authority to embed those scripts into the packages which are used by Octopus. 
- Octopus provides detailed auditing enabling retrospective analysis of a person's activity, including custom scripts authored as part of your deployment process. Learn about [auditing](/docs/security/users-and-teams/auditing). # Resetting passwords Source: https://octopus.com/docs/security/users-and-teams/resetting-passwords.md ## Resetting your own password In the Octopus Web UI, click your username in the top right corner of the screen. Select **Profile** to go to your profile page. To change your password, select **Change password**: :::figure ![](/docs/img/security/users-and-teams/images/resetpassword.png) ::: Enter and confirm your new password, then click **Save**: :::figure ![](/docs/img/security/users-and-teams/images/newpassword.png) ::: ## Resetting user passwords Octopus Server administrators can reset the passwords of other users from the Octopus Web Portal at **Configuration ➜ Users**. Select the user whose password you want to change: :::figure ![](/docs/img/security/users-and-teams/images/usersearch.png) ::: Click **Change password**: :::figure ![](/docs/img/security/users-and-teams/images/changeuserpwd.png) ::: Enter and confirm the new password, then click **Save**: :::figure ![](/docs/img/security/users-and-teams/images/userpasswordchange.png) ::: ## Resetting administrator passwords Users can be made administrators, and new administrator accounts created using the command line on the Octopus Server machine. To reset the password of an administrator, or to make a user into an administrator, open an administrative command prompt on the Octopus Server and run the following commands. ### For username/password authentication ```powershell Octopus.Server.exe service --stop Octopus.Server.exe admin --username=YOUR-USERNAME --password=YOUR-PASSWORD Octopus.Server.exe service --start ``` Replace `YOUR-USERNAME` with the simple login name of the administrator account, and provide the **new password**. 
### For Active Directory authentication When Active Directory authentication is in use, the `--password` argument is not required: ```powershell Octopus.Server.exe service --stop Octopus.Server.exe admin --username=YOUR-USERNAME Octopus.Server.exe service --start ``` ## Password complexity Passwords in Octopus must meet password complexity rules. Octopus applies a scoring system to a new password to decide if it meets the complexity rules. A password must be: - Minimum 8 characters long It also needs to meet 3 (or more) of the following scoring criteria: - Contains a number - Contains whitespace - Contains an uppercase letter - Contains a lowercase letter - Contains punctuation or symbols - At least 12 characters long - At least 16 characters long The more scoring criteria a new password meets, the higher its score and derived complexity. # Security and unscoped variables Source: https://octopus.com/docs/security/users-and-teams/security-and-un-scoped-variables.md Sometimes, a team will be granted contributor access to a project, but be restricted in the environments that it can access. By default, Octopus's security system will then prevent members of the team from editing [variables](/docs/projects/variables) that apply outside of their allowed environments. During development this can be inconvenient, as variables frequently need to be added in support of new application features. ## Why restrict editing of unscoped variables? {#why-restrict-editing-of-unscoped-variables} Octopus uses variables to control almost every aspect of the deployment process. This provides a great deal of flexibility - for example, variables can control: - The packages that are deployed. - Which deployment conventions are applied. - Which script files will be run. - The content of custom scripts. - The location of deployed apps, and other paths on the target machines. 
The default permissions applied to unscoped variable editing are restrictive because, although the release details screen shows the values of included variables, it can be hard for a user performing a deployment to verify that the variable contents applied to the environment are appropriate. This default behavior can be changed by granting an additional permission to the team. ## Granting unscoped variable editing permission {#granting-unscoped-variable-editing-permission} As an administrator, open **Configuration ➜ User Roles**. In the list of user roles shown, either create a new role to assign to the team, or select a built-in role like **Project contributors** to modify. :::figure ![](/docs/img/security/users-and-teams/images/3277947.png) ::: The individual permissions that make up the role will then be shown. Tick the **VariableEditUnscoped** or **VariableViewUnscoped** items as required, and save the role. ![](/docs/img/security/users-and-teams/images/3277946.png) # Debug problems with Octopus variables Source: https://octopus.com/docs/support/debug-problems-with-octopus-variables.md Sometimes a variable used during deployment may have a different value from the one you expect. Here are the first steps to debugging these issues. Project variables and [library variable set](/docs/projects/variables/library-variable-sets) variables are captured in a snapshot when you create a release or publish a runbook snapshot. If you update a variable after the snapshot is taken, the change won't apply to existing releases or snapshots. To pick up the new value, you'll need to either create a new release (or update the existing release's variable snapshot) for deployments, or create and publish a new runbook snapshot for runbooks. ## Check the variable snapshot for the release {#check-variable-snapshot-for-release} 1. Open the **Project ➜ Releases ➜ Release** page for the Release you are debugging. 2. 
Scroll down to find the **Variables** section and click the **Show Snapshot** link to see the snapshot of Variables being used by this Release. 3. If the variable is wrong in the Snapshot: - Update the Variable in the **Variables** section of the project. - Then click the **Update variables** button - beware this will update **all** variables in the Snapshot to the latest values. :::figure ![Release variable snapshot showing the Update Variables button](/docs/img/support/images/3278466.png) ::: :::div{.hint} **Tenant variables are the exception.** Unlike project and library variable set variables, [tenant variables](/docs/tenants/tenant-variables) are not included in the snapshot. They take effect immediately without needing a new release or snapshot. For more details, see [Tenant variables and snapshots](/docs/tenants/tenant-variables#tenant-variables-and-snapshots). ::: ## Check the variable value in the all variables tab {#check-variable-value-in-all-variables-tab} 1. Open the **Project ➜ All Variables** tab. 2. Investigate the variables from all possible sources for the project, including the project itself, [variable sets](/docs/projects/variables/library-variable-sets/), and [tenants](/docs/tenants). :::figure ![All Variables tab showing variables from all sources](/docs/img/support/images/5865680.png) ::: :::div{.success} Did you know you can sort and filter the All Variables grid? Click **Show Advanced filters** and select your filter type. ::: ## Write the variables to the deployment log {#write-variables-to-deployment-log} This will log the variables available at the beginning of each step in the deployment as Verbose messages. 1. Open the **Project ➜ Variables** page. 2. Set the following two variables: | Name | Value | | --- | --- | | OctopusPrintVariables | True | | OctopusPrintEvaluatedVariables | True | It should look like this. You can have as many extra variables as you want besides these two. 
:::figure ![Project variables with OctopusPrintVariables and OctopusPrintEvaluatedVariables set to True](/docs/img/support/images/evaluatedvars.png) ::: 3. **Create a new release** of the project or **Update the variable snapshot** for the release as shown above. 4. Deploy the new release. 5. Enable **Verbose** output on the **Task log** page. 6. Expand the element corresponding to the Tentacle on which the problem is observed. Two sets of variables will be printed: first the raw definitions, before any substitutions have been performed, and then the result of evaluating all variables for the deployment. :::div{.warning} **For debugging only** When adding these variables to your project, Octopus will add the following warning to your deployment log `20:30:45 Warning | OctopusPrintVariables is enabled. This should only be used for debugging problems with variables, and then disabled again for normal deployments.` This is because printing variables increases the size of the task logs, and can make your deployments run slower. Don't forget to turn this off when you're finished debugging. These variables are false by default. ::: ## Use debug mode {#debug-mode} As an alternative to setting `OctopusPrintVariables` and `OctopusPrintEvaluatedVariables`, you can enable **Debug Mode** when creating a deployment or running a runbook. Debug mode writes detailed variable information to the task log without requiring you to add variables to your project. - For **project deployments**, you'll find the Debug Mode option on the deployment creation screen. - For **runbooks**, click **Show advanced** on the run screen to reveal the Debug Mode option. Debug mode is a convenient way to get variable diagnostics for a single run without modifying your project's variables. 
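The difference between the two printed sets comes from Octopus's `#{...}` variable substitution syntax: raw definitions may reference other variables, and evaluation resolves those references. As a toy sketch of that idea (the real evaluation engine, Octostache, also supports filters, conditionals, and scoping, none of which are modeled here; the variable names are made up for illustration):

```python
import re

def evaluate(raw: dict) -> dict:
    """Resolve #{Name} references in variable values. Unknown or
    cyclic references are left as-is rather than resolved."""
    def resolve(value: str, seen: frozenset) -> str:
        def replace(match: re.Match) -> str:
            name = match.group(1)
            if name in seen or name not in raw:
                return match.group(0)  # leave unresolvable references untouched
            return resolve(raw[name], seen | {name})
        return re.sub(r"#\{([\w.]+)\}", replace, value)
    return {name: resolve(value, frozenset({name})) for name, value in raw.items()}

# Raw definitions (like the first printed set) versus evaluated results (the second).
raw_definitions = {
    "Environment.Name": "Staging",
    "Site.Url": "https://#{Environment.Name}.example.com",
}
print(evaluate(raw_definitions)["Site.Url"])  # https://Staging.example.com
```

Comparing the raw and evaluated sets in the task log tells you whether a surprising value came from the definition itself or from the substitution step.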
# Get the raw output from a deployment process Source: https://octopus.com/docs/support/get-the-raw-output-from-a-deployment-process.md When you contact Octopus Deploy support with a deployment-related issue, we'll sometimes ask you to send the full deployment process, so that we can understand what went wrong. This page explains how to capture this information. 1. Navigate to the deployment process screen. ![](/docs/img/support/images/deploymentprocess.png) 2. Click `Download as JSON` from the ... overflow menu ![](/docs/img/support/images/deploymentprocessjson.png) Send this file to us, or attach it to your support request. :::div{.hint} You might want to open the file in a text editor, and redact any sensitive information like hostnames or company information, before sending the data to us. ::: # Log files Source: https://octopus.com/docs/support/log-files.md Octopus Server and Tentacles write diagnostic log messages to their local Windows filesystem. The files are rolled periodically to avoid consuming excessive space. :::div{.success} **Recent Errors** The most recent warnings and errors can be viewed on the **Configuration ➜ Diagnostics** page ::: ## Finding the log files for Octopus Server and Tentacle {#Logfiles-Findingthelogfiles} When Octopus applications are installed, a "home directory" is chosen - this is usually `C:\Octopus`. Octopus stores its logs in the `Logs` subdirectory. Two sets of log files may be present: `OctopusServer.txt` and `OctopusTentacle.txt`. Older versions of these files will be stored with numeric suffixes in their names, e.g. the most recent archived server log file will be in `OctopusServer.0.txt`. When requesting support, send as much log information as possible - the repetitive nature of the files means they usually zip down well. ## Changing log retention {#Logfiles-Changinglogretention} To increase the number of log files Octopus will store, find the `octopus.server.exe.nlog` file associated with the application. 
This is usually in a subfolder of the Octopus "Program Files" folder. **Take a backup** of the file before making changes. The retention of the logs is controlled by the `maxArchiveFiles` property; it defaults to 7 and can be increased or decreased. The Octopus process will automatically pick up the new retention setting as soon as the file is saved. :::div{.warning} **Updates reset the nlog file** When you use the Octopus installer to update the version of Octopus, the `octopus.server.exe.nlog` will be reset to the default values that ship with Octopus. ::: ## Changing log levels for Octopus Server {#Logfiles-Changingloglevels} Occasionally it may be necessary to change the logging level of an Octopus application. First, ensure the environment variable `OCTOPUS__Logging__File__LogEventLevel` is set to `Verbose` or any other desired log level. :::div{.warning} **A restart of Octopus Server is required** A server restart is required in order to apply the changes to environment variables. ::: Then, find the `octopus.server.exe.nlog` file associated with the application. This is usually in a subfolder of the Octopus "Program Files" folder. **Take a backup** of the file before making changes. The verbosity of file logging is controlled in the `octopus-log-file` section: ```xml ``` The `minlevel` attribute is most useful for configuring the logging level. Change this value to `Trace` to gather more information. The Octopus process will automatically switch to the new logging level as soon as the file is saved. :::div{.warning} **Don't forget to reset your changes** Leaving your `minlevel` too low will impact the performance of Octopus Server. We recommend resetting back to the default logging configuration once you have completed your diagnostics session. ::: ## Customizing log format {#Logfiles-Customizinglogformat} The format of log entries is controlled by NLog layout variables in the `octopus.server.exe.nlog` file. 
The default layout is: ```xml ``` This produces log entries in the format: ```text 2024-01-15 10:30:45.1234 12345 67890 INFO Your log message here ``` The layout components are: - `${longdate}` - Timestamp in `yyyy-MM-dd HH:mm:ss.ffff` format - `${processid}` - The process ID - `${threadid}` - The thread ID - `${level}` - Log level (Trace, Debug, Info, Warn, Error, Fatal) - `${message}` - The log message - `${exception}` - Exception details when present You can customize the layout by modifying the `normalLayout` variable. ### Custom date formats with timezone {#Logfiles-Customdateformats} The default `${longdate}` renderer does not include timezone information. To include the timezone offset in your timestamps, replace `${longdate}` with a custom `${date}` format: ```xml ``` This produces timestamps like: ```text 2024-01-15 10:30:45.1234 +10:00 12345 67890 INFO Your log message here ``` Common date format specifiers: - `zzz` - UTC offset with hours and minutes (e.g., `+10:00`, `-05:00`) - `zz` - UTC offset with hours only (e.g., `+10`, `-05`) - `K` - Timezone information in ISO 8601 format For UTC timestamps instead of local time, use: ```xml ${date:universalTime=true:format=yyyy-MM-dd HH\:mm\:ss.ffff}Z ``` :::div{.hint} **Note:** Colons in date format strings must be escaped with a backslash (`\:`) because colons are used as parameter delimiters in NLog layout syntax. ::: For example, to include the logger name: ```xml ``` For a full list of available layout renderers, see the [NLog documentation](https://nlog-project.org/config/?tab=layout-renderers). ### Preserving custom configuration across upgrades {#Logfiles-Preservingcustomconfiguration} The default `octopus.server.exe.nlog` file is overwritten when Octopus Server is upgraded. To preserve your customizations: 1. Create a copy of `octopus.server.exe.nlog` in the same directory 2. Rename the copy to `Octopus.Server.exe.user.nlog` 3. Make your changes to the `user.nlog` file 4. 
Restart Octopus Server

When a `user.nlog` file exists, the server loads it instead of the default configuration. The installer will not overwrite this file during upgrades.

:::div{.warning}
**Keep your custom config in sync**
If you use a custom `user.nlog` file, be aware that future Octopus versions may make changes to the default NLog configuration. After upgrading, compare your custom file with the new default to ensure compatibility.
:::

## Changing log levels for Halibut {#Logfiles-Changingloglevelshalibut}

To change the logging level for Halibut as logged in the Octopus Server, we follow a similar process to the one described above, with a few changes.

First, ensure the environment variable `OCTOPUS__Logging__File__LogEventLevel` is set to `Verbose` or any other desired log level. Next, change the minimum Halibut log level value by setting the environment variable `OCTOPUS__Logging__Context__Halibut__LogEventLevel` to `Verbose`. This change ensures all logs from Halibut will be processed by Octopus Server.

:::div{.warning}
**A restart of Octopus Server is required**
A server restart is required in order to apply the changes to environment variables.
:::

Then, find the `octopus.server.exe.nlog` file associated with the application. This is usually in a subfolder of the Octopus "Program Files" folder. **Take a backup** of the file before making changes.

The verbosity of file logging is controlled in the `octopus-log-file` section:

```xml
<!-- Illustrative sketch; the exact rule in your octopus.server.exe.nlog may differ -->
<logger name="*" minlevel="Info" writeTo="octopus-log-file" />
```

The `minlevel` attribute is most useful for configuring the logging level. Change this value to `Trace` to gather more information. The Octopus process will automatically switch to the new logging level as soon as the file is saved.

:::div{.warning}
**Don't forget to reset your changes**
Leaving your `minlevel` too low will impact the performance of Octopus Server. We recommend resetting back to the default logging configuration once you have completed your diagnostics session.
:::

## Changing log levels for Tentacle {#Logfiles-Changingloglevelstentacle}

Occasionally it may be necessary to change the logging level of a Tentacle instance. First, find the `tentacle.exe.nlog` file associated with the application. This is usually in a subfolder of the Octopus/Tentacle "Program Files" folder. **Take a backup** of the file before making changes.

The verbosity of file logging is controlled in the `octopus-log-file` section:

```xml
<!-- Illustrative sketch; the exact rule in your tentacle.exe.nlog may differ -->
<logger name="*" minlevel="Info" writeTo="octopus-log-file" />
```

The `minlevel` attribute is most useful for configuring the logging level. Change this value to `Trace` to gather more information. The Tentacle process will automatically switch to the new logging level as soon as the file is saved.

:::div{.warning}
**Don't forget to reset your changes**
Leaving your `minlevel` too low will impact the performance of the Tentacle. We recommend resetting back to the default logging configuration once you have completed your diagnostics session.
:::

# Support screen capture

Source: https://octopus.com/docs/support/support-video-capture.md

When you contact Octopus Deploy support with an issue, we'll sometimes ask you to use our screen capture app to display the issue, so that we can understand what went wrong. This page explains how to capture this information. If you have a mic and want to narrate the issue during the screen capture, you can give the application permission to capture audio.

Details on the app and the data usage:

1. Recordings sent to Octopus will be used for diagnostic purposes only and will not be shared externally from Octopus in any way.
2. All recordings are sent to an encrypted and secure file storage.
3. All recordings are deleted 30 days from the time of upload via retention policy and can be manually deleted on request.
4. Only the Octopus support team has direct access to the recording files. The files may be shared with other internal teams if required during an issue escalation.

Browsers may behave differently when granting permissions and capturing.
If the review box is blank, then neither audio nor video was captured.

:::div{.warning}
Redaction tools are not currently provided. Please watch the provided preview for any sensitive information before uploading!
:::

:::div{.hint}
Please let us know if your internal use policy prohibits the Octopus support team from using this tool for diagnostic purposes in your organization.
:::

# Upgrading minor and patch releases of Octopus Deploy

Source: https://octopus.com/docs/administration/upgrading/guide/upgrading-minor-and-patch-releases.md

A minor release of Octopus Deploy is when the second number in the version is incremented. For example, 2025.1.x to 2025.2.x. A patch release is when the third number is incremented, for example from 2025.2.1 to 2025.2.2.

## System Integrity Check

Before performing any upgrade steps, we highly recommend performing a [System Integrity Check](/docs/administration/managing-infrastructure/diagnostics) on your live instance database. This is so we can check that the database schema is in the expected condition for the upgrade. If the integrity check passes, you are good to start the upgrade process. If it fails, please contact [support](https://octopus.com/support) with the [raw output of the task](/docs/support/get-the-raw-output-from-a-task), and we can get that fixed for you.

## Prep Work

Before starting the upgrade, it is critical to back up the master key and license key. If anything goes wrong, you might need these keys to do a restore. It is better to have the backup and not need it than need the backup and not have it. The master key doesn't change, while your license key changes, at most, once a year. Back them up once to a secure location and move on to the standard upgrade process.

1. Backup the Master Key.
1. Backup the License Key.

### Backup the Octopus Master Key

Octopus Deploy uses the Master Key to encrypt and decrypt sensitive values in the Octopus Deploy database. The Master Key is securely stored on the server, not in the database.
If the VM hosting Octopus Deploy is somehow destroyed or deleted, the Master Key goes with it. To view the Master Key, you will need login permissions on the server hosting Octopus Deploy. Once logged in, open up the Octopus Manager and click the view master key button on the left menu.

:::figure
![](/docs/img/shared-content/upgrade/images/view-master-key.png)
:::

Save the Master Key to a secure location, such as a password manager or a secret manager. An alternative means of accessing the Master Key is to run `Octopus.Server.exe show-master-key` from the command line. Please note: you will need to be running as an administrator to do that.

:::figure
![](/docs/img/shared-content/upgrade/images/master-key-command-prompt.png)
:::

### Backup the License Key

Like the Master Key, the License Key is necessary to restore an existing Octopus Deploy instance. You can access the License Key by going to **Configuration ➜ License**. If you cannot access your License Key, please contact our [support team](https://octopus.com/support) and they can help you recover it.

## Standard upgrade process

The standard upgrade process is an in-place upgrade. In-place upgrades update the binaries in the install directory and update the database. The guide below includes additional steps to back up key components to make it easier to roll back in the unlikely event of a failure.

### Overview

The steps for this are:

1. Download the latest version of Octopus Deploy.
1. Enable maintenance mode.
1. Backup the database.
1. Do an in-place upgrade.
1. Test the upgraded instance.
1. Disable maintenance mode.

### Downloading the latest version of Octopus Deploy

The [downloads page](https://octopus.com/downloads) will always have the latest version of Octopus Deploy. If company policy dictates you install an older version, for example, the latest version is 2020.4.11, but you can only download 2020.3.x, then visit the [previous downloads page](https://octopus.com/downloads/previous).
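If you need to stage the installer ahead of a maintenance window, the download can be scripted. This is a minimal sketch only; the version number and URL pattern below are assumptions for illustration, so copy the real link from the downloads page before using it.

```powershell
# Sketch only: stage the Octopus Server installer ahead of the upgrade window.
# The version number and URL pattern are assumptions; confirm the exact
# download link on https://octopus.com/downloads before running this.
$version = "2025.2.1234"   # hypothetical version number
$url     = "https://download.octopus.com/octopus/Octopus.$version-x64.msi"
Invoke-WebRequest -Uri $url -OutFile ".\Octopus.$version-x64.msi"
```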
### Maintenance mode

Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode go to **Configuration ➜ Maintenance** and click the button `Enable Maintenance Mode`. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`.

### Backup the SQL Server database

Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share.

```
BACKUP DATABASE [OctopusDeploy]
TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
WITH FORMAT;
```

The `BACKUP DATABASE` T-SQL command has dozens of options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use.

### Octopus Deploy components

Before performing an in-place upgrade, it is essential to note the various components of Octopus Deploy. Most in-place upgrades will only change the install location and the SQL Server database. Very rarely will an in-place upgrade change the home folder or server folders. The Windows Service is split across multiple folders to make upgrading easy and low risk.

- **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`. The install location contains the binaries for Octopus Deploy and is updated by the MSI.
- **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location. The `Octopus Manager` invokes those database scripts automatically.
- **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance.
The home folder is separate from the install location to make it easier to upgrade, downgrade, or uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The `Octopus Manager` stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade.
- **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until _after_ the Octopus Deploy server is upgraded. Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient.
- **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target.

### Install the newer version of Octopus Deploy

Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`.
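For unattended installs, the MSI step above can be scripted instead of clicking through the wizard. This is a sketch only; the MSI filename is a placeholder, and because `/quiet` suppresses the wizard, the `Octopus Manager` may need to be opened manually afterwards.

```powershell
# Sketch: unattended install of the Octopus Server MSI.
# "Octopus.2025.2.1234-x64.msi" is a placeholder filename.
$msi  = ".\Octopus.2025.2.1234-x64.msi"
$exit = (Start-Process -FilePath "msiexec.exe" `
         -ArgumentList "/i `"$msi`" /quiet" -Wait -PassThru).ExitCode
if ($exit -ne 0) { throw "MSI install failed with exit code $exit" }
```

This mirrors the pattern used elsewhere in these docs for installing the Tentacle MSI.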
### Validation checks

Octopus Deploy will perform validation checks before upgrading the database. These validation checks include (but are not limited to):

- Verify the current license will work with the upgraded version.
- Verify the current version of SQL Server is supported.

If the validation checks fail, don't worry: install the [previously installed version of Octopus Deploy](https://octopus.com/downloads/previous), and you will be back up and running quickly.

### Database upgrades

Each release of Octopus Deploy contains 0 to N database scripts to upgrade the database. The scripts are run in a transaction; when an error occurs, the transaction is rolled back. If a rollback does happen, gather the logs and send them to our [support team](https://octopus.com/support) for troubleshooting. You can install the previous version to get your CI/CD pipeline back up and running.

If you use PaaS to host your Octopus Deploy database, it is recommended to consider scaling up the database prior to the upgrade, especially if the upgrade spans a large version range and will therefore have an increased number of database scripts to run.

### Testing the upgraded instance

It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should:

- Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments.
- Check previous deployments, and ensure all the logs and artifacts appear.
- Ensure all the project and tenant images appear.
- Run any custom API scripts to ensure they still work.
- Verify a handful of users can log in, and that their permissions are similar to before.
- Build server integration; ensure all existing build servers can push to the upgraded server.

We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration.
If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation.

### Upgrade High Availability

In general, upgrading a highly available instance of Octopus Deploy follows the same steps as a typical in-place upgrade. Download the latest MSI and install that. The key difference is to upgrade only one node first, as this will upgrade the database, then upgrade all the remaining nodes.

:::div{.warning}
Attempting to upgrade all nodes at the same time will most likely lead to deadlocks in the database.
:::

The process should look something like this:

1. Download the latest version of Octopus Deploy.
1. Enable maintenance mode.
1. Stop all the nodes.
1. Backup the database.
1. Select one node to upgrade, and wait until it finishes.
1. Upgrade all remaining nodes.
1. Start all remaining stopped nodes.
1. Test the upgraded instance.
1. Disable maintenance mode.

:::div{.warning}
As of **2023.2.9755**, a database upgrade will abort if Octopus detects there are nodes still running. Ensure all nodes are properly shut down and try again.
:::

:::div{.warning}
A small outage window will occur when upgrading a highly available Octopus Deploy instance. The outage window will happen between when you shut down all the nodes and upgrade the first node. The window duration depends on the number of database changes, the size of the database, and compute resources. It is highly recommended to [automate your upgrade process](/docs/administration/upgrading/guide/automate-upgrades) to reduce that outage window.
:::

## Rollback failed upgrade

While unlikely, an upgrade may fail. It could fail on a database upgrade script, an unsupported SQL Server version, license validation, or plain old bad luck. When that happens, it is time to roll back to a previous version. Minor and patch releases are generally the easiest of the scenarios to roll back. The process will be: 1.
Restore the database backup. 1. Download and install the previously installed version of Octopus Deploy. 1. Do some sanity checks. 1. If maintenance mode is enabled, disable it.

### Restore backup of database

Use SQL Server Management Studio's (SSMS) built-in restore backup functionality. SSMS provides a wizard to make this process as pain-free as possible. Be sure to consult a DBA or read up on [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-database-to-a-new-location-sql-server?view=sql-server-ver15).

### Find and download the previous version of Octopus Deploy

Octopus Deploy stores the installation history in the database. Run this query on your Octopus Deploy database if you are unsure which version to download:

```sql
SELECT TOP 5 [Version]
FROM [dbo].[OctopusServerInstallationHistory]
ORDER BY Installed DESC
```

When you know the version to install, go to the [previous downloads page](https://octopus.com/downloads/previous).

### Installing the previous version

The key configuration items, such as connection string, files, instance information, etc., are not stored in the install directory of Octopus Deploy. To install the previous version, first uninstall Octopus Deploy. Uninstalling will only delete items from the install directory (by default `C:\Program Files\Octopus Deploy\Octopus`). Then run the MSI to install the previous version.

## Recommendation - creating a test instance

The chance of an in-place upgrade failing is low. However, there is still that chance. There might be a new feature or a breaking change introduced. We recommend creating a sandbox or test instance to test out new versions of Octopus Deploy. Learn more about [creating a test instance](/docs/administration/upgrading/guide/creating-test-instance).
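The database restore at the start of the rollback can also be scripted rather than run through the SSMS wizard. This is a sketch only: the server name, backup path, and the `WITH REPLACE` option are illustrative assumptions, and it requires the `SqlServer` PowerShell module; consult a DBA before running anything like it.

```powershell
# Sketch: restore the pre-upgrade full backup over the existing database.
# WITH REPLACE overwrites the current (failed-upgrade) database, so confirm
# this is what you want. Server instance and backup path are placeholders.
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
RESTORE DATABASE [OctopusDeploy]
FROM DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
WITH REPLACE;
"@
```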
# Upgrading from Octopus 1.6 to 2.6.5

Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-1.6-2.6.5.md

:::div{.success}
Please read our guide for [upgrading older versions of Octopus](/docs/administration/upgrading/legacy) before continuing.
:::

A **lot** changed between **Octopus 1.6** and **Octopus 2.0**; so much that we had to handle upgrades differently to the way we handle upgrades from, say, **Octopus 1.5** to **Octopus 1.6**. This page will walk you through the process of upgrading an **Octopus 1.6** instance to **Octopus 2.0**. Rather than being an in-place upgrade, **Octopus 2.0** is designed to be a **side-by-side** upgrade.

## Preparing

:::div{.problem}
If your **Octopus 1.x** installation is at an earlier version than **Octopus 1.6**, please [upgrade it to Octopus 1.6](https://octopus.com/downloads/previous) before proceeding.
:::

Below is the dashboard from an **Octopus 1.6** server that will be used as an example for this walk-through.

:::figure
![](/docs/img/administration/upgrading/legacy/images/3278001.png)
:::

Before attempting to migrate, make sure that you don't have any projects, environments, or machines with duplicated names (this is no longer allowed in **Octopus 2.0**, and the migration wizard will report an error if it finds duplicates). Then go to the **Storage** tab in the **Configuration** area, and make sure that you have a recent backup:

:::figure
![](/docs/img/administration/upgrading/legacy/images/3277999.png)
:::

## Install Octopus 2.0

Next, install **Octopus 2.0**, either on the same server as your current **Octopus 1.6** server, or on a new server (ideal). **Octopus 2.0** uses different paths, ports, and service names to **Octopus 1.x**, so there should not be any conflicts between them.

:::div{.hint}
View our [guide to installing Octopus 2.0](/docs/installation), which includes a video walk-through.
:::

## Importing

On the **Octopus 2.0** server, open the Octopus Manager from your start menu/start screen.

:::figure
![](/docs/img/administration/upgrading/legacy/images/3277998.png)
:::

In the Octopus Manager, click **Import from 1.6...**

:::figure
![](/docs/img/administration/upgrading/legacy/images/3277997.png)
:::

When the wizard appears, select the backup file from **Octopus 1.6** that you created earlier.

Next, you'll be asked if you want to change the Tentacle port on all machines that get imported. For more information on why you might like to do this, see the section on upgrading Tentacles below.

:::div{.success}
If you don't change the Tentacle port, make sure you completely shut down your **Octopus 1.6** server after the upgrade, or remove the upgraded machines from it. Leaving the **Octopus 1.6** server running will generate large numbers of invalid connection attempts from the old server to the new Tentacles, and this can adversely affect performance.
:::

:::figure
![](/docs/img/administration/upgrading/legacy/images/3277995.png)
:::

Next, click Import and your **Octopus 1.6** backup will be imported.

:::figure
![](/docs/img/administration/upgrading/legacy/images/3277994.png)
:::

The import process will take a few minutes to run, and any errors will be reported in the output window.

:::figure
![](/docs/img/administration/upgrading/legacy/images/3277993.png)
:::

At this point, you should be able to view the imported projects, environments, and machines, but all the machines will be offline.

:::figure
![](/docs/img/administration/upgrading/legacy/images/3277992.png)
:::

## Permissions

The **Octopus 2.x** migrator will not import permission settings from **Octopus 1.6**, due to changes in the permission model. After you upgrade to **Octopus 2.x**, you will need to configure [Teams](/docs/security/users-and-teams) to assign permissions.
## Upgrading Tentacles **Octopus 2.x** changed the communication stack between Octopus and Tentacle, meaning that your **Octopus 2.x** server can no longer communicate with **Tentacle 1.6**. So in addition to upgrading Octopus, you'll also need to upgrade any Tentacles. The following PowerShell script can be used to download the latest Tentacle MSI, install it, import the X.509 certificate used for **Tentacle 1.6**, and configure it in listening mode. ```powershell function Uninstall-OldTentacle { Write-Output "Uninstalling the 1.0 Tentacle" $app = Get-WmiObject -Query "SELECT * FROM Win32_Product WHERE Name = 'Octopus Deploy Tentacle' AND Version < 2.0" $app.Uninstall() & sc.exe delete "Octopus Tentacle" } function Upgrade-Tentacle ($rel, $loc, $hm, $sthumb, $sxsPort) { Write-Output "Beginning Tentacle installation" Write-Output "Downloading Octopus Tentacle MSI..." $downloader = new-object System.Net.WebClient $downloader.DownloadFile("http://download.octopus.com/octopus/Octopus.Tentacle.$rel.msi", [System.IO.Path]::GetFullPath(".\Tentacle.msi")) Write-Output "Installing MSI" $msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i Tentacle.msi /quiet" -Wait -PassThru).ExitCode Write-Output "Tentacle MSI installer returned exit code $msiExitCode" if ($msiExitCode -ne 0) { throw "Installation aborted" } Write-Output "Configuring the 2.0 Tentacle" cd "$loc" & .\tentacle.exe create-instance --instance "Tentacle" --config "$hm\Tentacle\Tentacle.config" --console & .\tentacle.exe import-certificate --instance "Tentacle" --from-registry --console & .\tentacle.exe new-certificate --instance "Tentacle" --if-blank --console & .\tentacle.exe configure --instance "Tentacle" --home "$hm" --console & .\tentacle.exe configure --instance "Tentacle" --app "$hm\Applications" --console & .\tentacle.exe configure --instance "Tentacle" --trust="$sthumb" if ($sxsPort) { & .\tentacle.exe configure --instance "Tentacle" --port "$sxsPort" --console } if (!$sxsPort) { 
Write-Output "Stopping the 1.0 Tentacle" Stop-Service "Octopus Tentacle" } Write-Output "Starting the 2.0 Tentacle" & .\tentacle.exe service --instance "Tentacle" --install --start --console if (!$sxsPort) { Uninstall-OldTentacle } Write-Output "Tentacle commands complete" } # If sxsPort ('side-by-side port') is specified, the old Tentacle will remain running # alongside the new one. If an sxsPort is not specified, the old Tentacle will be # uninstalled. Upgrade-Tentacle ` -rel "2.0.13.1100-x64" ` -loc "${env:ProgramFiles}\Octopus Deploy\Tentacle" ` -hm "${env:SystemDrive}\Octopus" ` -sthumb "*** ENTER OCTOPUS THUMBPRINT HERE ***" ` -sxsPort "10934" ``` *(Many thanks to James Crowley for his improvements to this script.)* # Upgrade with a new server instance Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/upgrade-with-a-new-server-instance.md This is the recommended way of performing an upgrade for larger installations. It gives you the opportunity to verify that your Tentacles have been successfully upgraded, and allows you to more easily roll back if you have any issues. Be sure to read the [Upgrading from Octopus 2.6.5 to 2018.10 LTS](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts) documentation page. You must have a working **Octopus 2.6.5** installation for the data migration. ## Step by step To upgrade to a modern version of Octopus Server, follow these steps: ### 1. Back up your Octopus 2.6.5 database and Master Key See the [Backup and restore](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/backup-2.6) page for instructions on backing up your database. ### 2. Install Octopus 2018.10 LTS on a new virtual or physical server :::div{.success} **Upgrade to the latest version** When upgrading to **Octopus 2018.10 LTS** please use the latest version available. 
We have been constantly improving the **Octopus 2.6.5** to **Octopus 2018.10 LTS** data migration process while adding new features and fixing bugs.
:::

See the [Installing Octopus 2018.10 LTS](/docs/installation) page for instructions on installing a new **Octopus 2018.10 LTS** instance.

### 3. Migrate your data from 2.6.5 to 2018.10 LTS \{#migrate-data-265-2018-10-lts}

See the [Migrating data from Octopus 2.6.5 to 2018.10 LTS](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/migrating-data-from-octopus-2.6.5-2018.10lts) page for instructions on importing your **Octopus 2.6.5** database backup into **Octopus 2018.10 LTS**.

:::div{.hint}
**Migration taking a long time?**
By default, we migrate everything from your backup including historical data. You can use the `maxage=` argument when executing the migrator to limit the number of days to keep. For example: `maxage=90` will keep 90 days of historical data, ignoring anything older. To see the command syntax, click the **Show script** link in the wizard.
:::

:::div{.hint}
**Using the built-in Octopus NuGet repository?**
If you use the built-in [Octopus NuGet repository](/docs/packaging-applications/package-repositories) you will need to move the files from your **Octopus 2.6.5** server to your **Octopus 2018.10 LTS** server. They are not part of the backup. In a standard **Octopus 2.6.5** install the files can be found under `C:\Octopus\OctopusServer\Repository\Packages`. You will need to transfer them to `C:\Octopus\Packages` on the new server. Once the files have been copied, go to **Library ➜ Packages ➜ Package Indexing** and click the `RE-INDEX NOW` button. This process runs in the background, so if you have a lot of packages it could take a while (5-20 mins) to show in the UI or be usable for deployments.
:::

### 4. Use Hydra to automatically upgrade your Tentacles

Hydra is a tool we've built that will help you update your Tentacles to the latest version.
It is particularly useful when migrating from 2.6.5 to 2018.10 LTS, as the communication methods have changed.

:::div{.problem}
This is the point of no return. When your Tentacles are upgraded to 3.x, your 2.6.5 server will not be able to communicate with them. We strongly recommend testing Hydra against a small subset of "canary" machines before upgrading the rest of your machines. The best way to do this is:
1. Create a new "canary" machine role and assign it to a few machines.
2. Set the Update Octopus Tentacle step to run on machines with the "canary" role.
3. Once you are confident the Tentacle upgrade works as expected, you can use Hydra to upgrade all remaining machines.
:::

#### How does Hydra work?

Hydra consists of two parts:

1. A package that contains the latest Tentacle MSI installers.
2. An **Octopus 2.6.5** step template that does the upgrade across your environments.

To account for issues with communicating with a Tentacle that has been 'cut off' from its Octopus Server, the Hydra process connects to the Tentacle and creates a scheduled task on the Tentacle machine. If it is able to schedule the task, it considers the install a success. The task runs one minute later. The scheduled task does the following:

1. Find Tentacle services.
2. Stop all Tentacles (if they're running).
3. Run the MSI.
4. Update configs for any polling Tentacles.
5. Start any Tentacles that were running when we started.

With just one Tentacle service this should be a very quick process, but we cannot estimate how long it may take with many Tentacle services running on one machine.

#### Common problems using Hydra

The scheduled task is set to run as `SYSTEM` to ensure the MSI installation will succeed. If your Tentacles are running with restricted permissions, they may not be able to create this scheduled task. **The only option is to upgrade your Tentacles manually.**

Hydra performs a reinstall of each Tentacle. As part of the reinstall, the Service Account is reset to `Local System`.
If you need your Tentacles to run under a different account, you will have to make the change after the upgrade completes (after you've re-established a connection from 2018.10 LTS). You can do this manually, or using the following script:

```powershell
Tentacle.exe service --instance "Tentacle" --reconfigure --username=DOMAIN\ACCOUNT --password=your-password --start --console
```

#### Let's upgrade these Tentacles!

To use Hydra, follow these steps:

:::div{.hint}
These steps should be executed from your **Octopus 2.6.5** server against your 2.6 Tentacles.
:::

1. Download the latest Hydra NuGet package from [https://octopus.com/downloads/latest/Hydra](https://octopus.com/downloads/latest/Hydra).
2. Use the Upload Package feature of the library to upload the OctopusDeploy.Hydra package to the built-in NuGet repository on your **Octopus 2.6.5** server.
:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278019.png)
:::
3. Import the [Hydra step template](https://library.octopus.com/step-templates/d4fb1945-f0a8-4de4-9045-8441e14057fa/actiontemplate-hydra-update-octopus-tentacle) from the Community Library.
:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278018.png)
:::
4. Create a [new project](/docs/projects) with a single "Update Octopus Tentacle" step from the step template.
   1. Ensure you choose or create a [Lifecycle](/docs/releases/lifecycles) that allows you to deploy to all Tentacles.
   2. Ensure you set the Update Octopus Tentacle step to run for all appropriate Tentacles.
   3. Set the `Server Mapping` field:
      - If you only use listening Tentacles you can leave the `Server Mapping` field blank.
      - If you are using any polling Tentacles, add the new **Octopus 2018.10 LTS** server address (including the polling TCP port) in the Server Mapping field. See below for examples.
:::div{.hint}
**Server mapping for Polling Tentacles**
It is very important you get this value correct. An incorrect value will result in a polling Tentacle that can't be contacted by either the 2.6.5 or the 2018.10 LTS server. Several different scenarios are supported:

1. A single Polling Tentacle instance on a machine pointing to a single Octopus Server **the most common case**:
   - Just point to the new server's polling address `https://newserver:newport` like `https://octopus3.mycompany.com:10934`
2. Multiple Polling Tentacle instances on the same machine pointing to a single Octopus Server:
   - Just point to the new server's polling address `https://newserver:newport` like `https://octopus3.mycompany.com:10934` and Hydra will automatically update all Tentacles to point to the new server's address
3. Multiple Polling Tentacle instances on the same machine pointing to different Octopus Servers **a very rare case**:
   - Use this syntax to tell Hydra the mapping from your old Octopus Server to your new Octopus Server: `https://oldserver:oldport=>https://newserver:newport,https://oldserver2:oldport2/=>https://newserver2:newport2`, where each pair is separated by commas. For each pair, Hydra matches the address before the `=>` and replaces it with the address after it.

Click the ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278017.png) help button for more detailed instructions.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278014.png)
:::

![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278015.png)
:::

5. Create a release and deploy. The deployment should succeed, and one minute later the Tentacles will be upgraded.

![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278010.png)

### 5.
Verify connectivity between the 2018.10 LTS server and your Tentacles When the Hydra task runs on a Tentacle machine, it should no longer be able to communicate with the **Octopus 2.6.5** server. You can verify this by navigating to the Environments page and clicking **Check Health**. :::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278012.png) ::: After successfully updating your Tentacles, you should see this check fail from your **Octopus 2.6.5** server. :::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278011.png) ::: Performing the Check Health on your **Octopus 2018.10 LTS** server should now succeed. :::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278009.png) ::: :::div{.hint} If you have multiple Tentacles running on the same server, an update to one will result in an update to **all** of them. This is because there is only one copy of the Tentacle binaries, even with multiple instances configured. ::: ### 6. Decommission your Octopus 2.6.5 Server Once you are confident your Tentacles have all been updated and work correctly, you can decommission your **Octopus 2.6.5** Server. # Octopus enterprise patterns Source: https://octopus.com/docs/best-practices/platform-engineering/enterprise-patterns.md If platform engineering is a general concept that applies to many tools and processes, then the Octopus enterprise patterns represent the implementation of platform engineering with Octopus. :::div{.hint} In programming terms, platform engineering is the interface, and the enterprise patterns are the classes implementing the interface. ::: The enterprise patterns emerged because supporting software deployments and maintaining applications in large enterprise environments isn't as simple as configuring a single, shared Octopus instance that everyone can use. 
There are common, practical constraints that require multiple Octopus spaces and instances. These include:

- Network latency between geographically distributed teams
- The desire for business units to control their own infrastructure and processes
- Business acquisitions that bring established DevOps systems
- Compliance with standards like PCI
- Platform engineering teams that require the ability to deploy spaces and projects in much the same way DevOps teams deploy applications

This section describes the enterprise patterns and notes how you can use them to address common scenarios in enterprise environments.

## Independent space per business unit/application

![Separate Spaces diagram](/docs/img/platform-engineering/separate-spaces.png)

The most common pattern is to partition a single Octopus installation into [separate spaces](https://octopus.com/blog/best-practices-spaces). Octopus is fairly agnostic about what individual spaces represent, but it's common to provide a space for business units or application stacks. If the space represents a stable context for the projects it holds (meaning Octopus projects are unlikely to move between spaces, even as people move between teams or security requirements change), spaces are a convenient way to split projects and define security boundaries.

This pattern is very easy to implement, as it often involves little more than creating a new space and assigning security permissions. We expect most Octopus users to naturally adopt spaces as their use of the platform grows.

However, spaces do have some limitations. Because spaces belong to a single Octopus installation, and Octopus installations need a low latency connection to the database, spaces do not let you co-locate Octopus with geographically dispersed teams. Plus, all tasks initiated by spaces use a shared task queue. When projects in a space queue many tasks, other spaces have to wait for their deployments to be processed.
This is commonly known as the "noisy neighbor" problem.

| Feature | Solves |
| ------- | ------ |
| Independent projects, runbooks, dashboards etc | ✓ |
| Task execution guarantees for business unit/application | ✕ |
| Shared authentication settings | ✓ |
| Synchronized projects, runbooks, dashboards etc | ✕ |
| Supports geographically dispersed business units | ✕ |
| Robust RBAC support | ✓ |

## Independent instance per business unit/region

![Separate Instances diagram](/docs/img/platform-engineering/seperate-instances.png)

Independent instances let geographically dispersed teams deploy a local Octopus instance. This provides better performance and greater reliability due to the reduced networking distance. Independent instances also grant each business unit an isolated task queue, so deployments and management tasks aren't held up by other teams.

Enterprises may also choose to deploy independent Octopus instances within the scope of PCI or other security regulations to perform deployments to secure environments. This frees teams from locking down their regular Octopus instance to meet specialized security requirements.

Like the independent space pattern, the independent instance pattern is easy to implement. It only requires the deployment of another Octopus instance. However, due to the lack of centralized management of independent instances, you must configure common settings on each instance.
These include:

- Authentication
- SMTP servers
- Subscriptions
- Audit log streaming
- And more

| Feature | Solves |
| ------- | ------ |
| Independent projects, runbooks, dashboards etc | ✓ |
| Task execution guarantees for team/customer | ✓ |
| Shared authentication settings | ✕ |
| Synchronized projects, runbooks, dashboards etc | ✕ |
| Supports geographically dispersed teams/customers | ✓ |
| Robust RBAC support | ✓ |

## Tenant per customer

![Tenant per customer diagram](/docs/img/platform-engineering/tenants.png)

Octopus has long supported partitioning deployment processes across multiple tenants, allowing each tenant to progress their own deployments independently. You can scope the RBAC rules in Octopus to tenants. This allows fine-grained access to resources like targets, accounts, and certificates.

Tenants are a natural solution for teams that need to independently deploy applications to multiple downstream customers. You can also use tenants to represent concepts such as regions, release rings, or teams.

However, the RBAC controls around tenants are not expressive enough to isolate customers if they log into the Octopus installation and you grant them permissions to see a single tenant. For example, you can't scope channels, tasks, and audit logs to a tenant.

You can find more information about [tenants in our documentation](https://octopus.com/docs/tenants).
| Feature | Solves |
| ------- | ------ |
| Independent projects, runbooks, dashboards etc | ✕ |
| Task execution guarantees for team/customer | ✕ |
| Shared authentication settings | ✓ |
| Synchronized projects, runbooks, dashboards etc | ✓ |
| Supports geographically dispersed teams/customers | ✕ |
| Robust RBAC support | ✕ |

## Managed space per business unit/application

![Managed spaces diagram](/docs/img/platform-engineering/managed-spaces.png)

This solution represents a typical "hub and spoke", or [platform engineering](https://octopus.com/devops/platform-engineering/), approach. Each application stack or business unit has its own space, and some or all of the space configuration is centrally managed. A tenant represents each space in the management space, also known as the upstream space, and deployment projects or runbooks configure the managed spaces, also known as downstream spaces. You can use the Terraform provider or raw API scripting to push configuration for shared resources, like template projects, to the managed spaces.

| Feature | Solves |
| ------- | ------ |
| Independent projects, runbooks, dashboards etc | ✓ |
| Task execution guarantees for team/customer | ✕ |
| Shared authentication settings | ✓ |
| Synchronized projects, runbooks, dashboards etc | ✓ |
| Supports geographically dispersed teams/customers | ✕ |
| Robust RBAC support | ✓ |

## Managed instance per business unit/region

![Managed instances diagram](/docs/img/platform-engineering/managed-instances.png)

Like the "managed space per business unit/application" pattern, this represents a typical "hub and spoke", or [platform engineering](https://octopus.com/devops/platform-engineering/), approach. However, each business unit or region gets its own Octopus installation. A tenant represents each managed Octopus instance in the management (or upstream) space.
Deployment projects or runbooks then configure the managed (or downstream) Octopus instances. You can use the Terraform provider or raw API scripting to push configuration for shared resources, like template projects, to the managed instances.

| Feature | Solves |
| ------- | ------ |
| Independent projects, runbooks, dashboards etc | ✓ |
| Task execution guarantees for team/customer | ✓ |
| Shared authentication settings | ✓ |
| Synchronized projects, runbooks, dashboards etc | ✓ |
| Supports geographically dispersed teams/customers | ✓ |
| Robust RBAC support | ✓ |

## Facade space per customer

![Facade diagram](/docs/img/platform-engineering/facade.png)

This pattern provides each customer with their own space. Each customer space has deployment projects or runbooks with a single step to call the associated project in the management space. These projects, therefore, act as a facade over the projects in the management space. This approach has the benefit of only requiring you to create very simple projects in each managed space.

A tenant represents each customer in the management space, taking advantage of the built-in features of tenants. Customers log into their own space, providing a high degree of security.

| Feature | Solves |
| ------- | ------ |
| Independent projects, runbooks, dashboards etc | not required |
| Task execution guarantees for team/customer | ✕ |
| Shared authentication settings | ✓ |
| Synchronized projects, runbooks, dashboards etc | not required |
| Supports geographically dispersed teams/customers | ✕ |
| Robust RBAC support | ✓ |

## Custom UI over Octopus Installation

![Custom UI diagram](/docs/img/platform-engineering/custom-ui.png)

This is the most advanced pattern of all. It requires the development of a custom web user interface to orchestrate deployments with a back-end Octopus installation.
The custom UI provides an almost unlimited ability to control and customize the end user's experience. This solution also allows orchestrating deployments across multiple Octopus installations from a single shared UI.

You can find more information about the [Octopus REST API in our documentation](https://octopus.com/docs/octopus-rest-api).

| Feature | Solves |
| ------- | ------ |
| Independent projects, runbooks, dashboards etc | ✓ |
| Task execution guarantees for team/customer | ✓ |
| Shared authentication settings | ✓ |
| Synchronized projects, runbooks, dashboards etc | not required |
| Supports geographically dispersed teams/customers | ✓ |
| Robust RBAC support | ✓ |

## Managed instance per environment

![Multiple environments diagram](/docs/img/platform-engineering/multiple-environments.png)

This pattern creates Octopus installations in each environment. It treats Octopus upgrades and other maintenance tasks in the same manner as a regular application deployment by promoting the changes through environments like development, test, and production. You need to synchronize the Octopus installations to ensure their configuration is as similar to one another as possible.

Unlike the previous patterns, this pattern is less concerned with providing the ability for teams and customers to log into Octopus installations. Rather, DevOps teams use non-production Octopus installations to test upgrades and validate project changes.

This pattern may also be used to isolate Octopus installations for compliance reasons, such as PCI. Having a separate Octopus installation for the production environment makes it easy to demonstrate access controls and other security measures when undertaking security audits.
| Feature | Solves |
| ------- | ------ |
| Independent projects, runbooks, dashboards etc | N/A |
| Task execution guarantees for team/customer | N/A |
| Shared authentication settings | N/A |
| Synchronized projects, runbooks, dashboards etc | ✓ |
| Supports geographically dispersed teams/customers | N/A |
| Robust RBAC support | N/A |

## Conclusion

The patterns described in this section cover most implementations we expect enterprise customers will adopt as they scale their use of Octopus to support business units and customers.

Some of these patterns require little effort to deploy or are deeply embedded into Octopus. These include:

- [Independent space per business unit/application](/docs/administration/spaces)
- [Independent instance per business unit/region](/docs/installation)
- [Tenant per customer](/docs/tenants)

The "custom UI over Octopus installation" is an advanced pattern that requires a dedicated development team to build a web application that consumes the Octopus REST API. You can refer to the [API documentation](https://octopus.com/docs/octopus-rest-api) for more information if you're interested in this pattern.

The following patterns are implemented using the strategies documented in the [managing space resources](/docs/platform-engineering/managing-space-resources) and [managing project resources](/docs/platform-engineering/managing-project-resources) sections:

- Managed space per business unit/application
- Managed instance per business unit/region
- Managed instance per environment

# Azure Service Fabric

Source: https://octopus.com/docs/deployments/azure/service-fabric.md

This section contains resources for using Octopus to deploy your Azure Service Fabric applications. We assume you already have a Service Fabric cluster set up in Azure.
If you don't have one yet, check out Microsoft's documentation on [getting started with Azure Service Fabric](https://azure.microsoft.com/en-us/services/service-fabric/).

## Service Fabric Deployment Targets in Octopus

Octopus provides a built-in Deployment Target for Azure Service Fabric clusters. Check out [this page](/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets) for help setting up and configuring your target.

## Packaging

Learn how to [package a Service Fabric application](/docs/deployments/azure/service-fabric/packaging) for use with Octopus Deploy.

## Step Templates

Octopus comes with two built-in step templates that facilitate deployment and management of Azure Service Fabric apps:

- [Deploy a Service Fabric App](/docs/deployments/azure/service-fabric/deploying-a-package-to-a-service-fabric-cluster/#step-4-create-the-service-fabric-application-deployment-step)
- [Run a Service Fabric SDK PowerShell Script](/docs/deployments/custom-scripts/service-fabric-powershell-scripts)

## Security modes

Both step template types above require an authorized connection to a cluster. Octopus provides two options for connecting to Service Fabric clusters securely:

1. Using [Client Certificates](/docs/deployments/azure/service-fabric/connecting-securely-with-client-certificates).
1. Using [Azure Active Directory](/docs/deployments/azure/service-fabric/connecting-securely-with-azure-active-directory).

## Versioning

Individual applications in a Service Fabric cluster have their own version numbers, while the entire clustered app has a separate version number independent of its constituent parts. Octopus does not enforce a particular process for managing application/service versions. Learn more about using Octopus Deploy to [automate updates to the application/service versions](/docs/deployments/azure/service-fabric/version-automation-with-service-fabric-application-packages).
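As a minimal sketch of what such version automation can look like, the release number can be stamped into the `ApplicationTypeVersion` attribute of `ApplicationManifest.xml` before packaging. The function name and manifest path below are illustrative (not part of Octopus); the attribute itself follows the standard Service Fabric manifest schema.

```bash
# Hedged sketch: rewrite the ApplicationTypeVersion attribute in a
# Service Fabric application manifest so it matches the release version.
stamp_version() {
  local manifest="$1" version="$2"
  # Replace the ApplicationTypeVersion value in place (keeps a .bak copy).
  sed -i.bak -E "s/(ApplicationTypeVersion=\")[^\"]*/\1${version}/" "$manifest"
}

# Example (in an Octopus script step the version could come from a release variable):
# stamp_version "pkg/ApplicationManifest.xml" "#{Octopus.Release.Number}"
```

Service manifests have an analogous `ServiceManifestVersion`/`Version` attribute that can be updated the same way.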
### Overwrite vs rolling upgrades

The default behavior of Service Fabric deployments is to overwrite an existing application. This means that if the application already exists in the cluster, it will be removed first and then redeployed (you'll see it using *RegisterAndCreate* in the logs). The alternative approach is to use rolling upgrades. To use this, enable the `UpgradeDeployment` element in the publish profile in your source (the exact parameters depend on your upgrade policy), for example:

```xml
<UpgradeDeployment Mode="Monitored" Enabled="true">
    <Parameters FailureAction="Rollback" Force="True" />
</UpgradeDeployment>
```

It will then update each node in turn with the new version (you'll see it using *RegisterAndUpgrade* in the logs).

# Deploying a sample Java application

Source: https://octopus.com/docs/deployments/java/deploying-java-applications.md

:::div{.hint}
See [Java Applications](/docs/deployments/java) for details on deploying Java application servers.
:::

This guide provides a simple example of deploying a Java application using Octopus Deploy.

## Prerequisites

This guide assumes some familiarity with Octopus Deploy. You should be able to configure [projects](/docs/projects/) and have a [Tentacle or SSH deployment target](/docs/infrastructure) already configured.

:::div{.hint}
Naked scripting allows you to transfer and extract your package on remote targets without the need for Calamari or mono. [Read the short guide here](/docs/deployments/custom-scripts) for more details.
:::

## Sample application

Here is a sample application that will prompt the user to press a key before exiting:

**PressAnyKey.java**

```java
public class PressAnyKey {
    public static void main(String[] args) throws java.io.IOException {
        System.out.println("Press any key to continue.");
        System.in.read();
    }
}
```

## Deploying the application

### Step 1: Upload the application to the built-in repository

In order to deploy the application with Octopus Deploy, it must be compiled and packaged. This would usually be done by your build server, but for the sake of this demonstration let's do it manually.

1. Compile the application:
   ```powershell
   javac PressAnyKey.java
   ```
2. Zip `PressAnyKey.class` into the archive `PressAnyKey.1.0.0.zip` (you can download a sample: [PressAnyKey.1.0.0.zip](https://download.octopusdeploy.com/demo/PressAnyKey.1.0.0.zip)).
3. Upload `PressAnyKey.1.0.0.zip` to the Octopus Deploy built-in feed (**Deploy ➜ Manage ➜ Packages** or [follow the instructions here](/docs/packaging-applications/package-repositories/built-in-repository/#pushing-packages-to-the-built-in-repository)).

### Step 2: Create the project and deployment process

1. Create a new project called **Press Any Key**.
2. Add a **Deploy a package** step to the deployment process.
3. Configure the step to deploy the package `PressAnyKey.1.0.0.zip`.
4. Configure the step to run a [post-deployment script](/docs/deployments/custom-scripts) to start the application.

**PowerShell**

```powershell
Start-Process java PressAnyKey
```

**Bash**

```bash
screen -d -m -S "PressAnyKey" java PressAnyKey
```

:::figure
![](/docs/img/deployments/java/5866219.png)
:::

:::div{.hint}
The application must be launched in a new process or session so that control returns to the shell. Otherwise, the deployment will wait until the application is terminated.
:::

### Step 3: Deploy

Create a release and deploy. The application will be running on the target machine:

```powershell
ubuntu@ip-10-0-0-245:/$ ps aux | grep 'PressAnyKey'
ubuntu    6544  0.0  0.0  25776  1288 ?      Ss   02:00  0:00 SCREEN -d -m -s PressAnyKey java PressAnyKey
ubuntu    6545  0.0  0.7 2076112 28584 pts/2 Ssl+ 02:00  0:01 java PressAnyKey
```

# Configure Octopus Deploy project

Source: https://octopus.com/docs/deployments/nginx/configure-octopus-deploy-project.md

Assuming you are starting with a clean install of Octopus Deploy, the following steps will configure the server to deploy your [NGINX Sample Web App](/docs/deployments/nginx/create-and-push-asp.net-core-project) ASP.NET Core project to a Linux machine.

## Configure environment

- On the *Environments* page, add an environment named **Production**.
:::figure ![](/docs/img/deployments/nginx/images/production_environment.png) ::: :::div{.success} For the purpose of this guide we will only use the one deployment environment but there are several other pages in the documentation which explain the benefits of leveraging [environments](/docs/infrastructure/environments/) and [lifecycles](/docs/releases/lifecycles) to create advanced deployment processes. ::: ## Configure account and target To connect over SSH the first thing you will need to do is add the credentials for your machine. If you followed the previous "[Configuring Target Machine](/docs/deployments/nginx/configure-target-machine)" step this should consist of a username and password pair. - Navigate to **Environments ➜ Accounts ➜ Usernames/Passwords ➜ Add Account** and add these credentials. - In the **Production** environment click *Add deployment target* and select *SSH Connection*. - Enter the IP or DNS of the machine that is accessible to the Octopus Server. *In our case it's the public IP provided by Azure/AWS.* - Click *Discover* to automatically pre-populate the SSH fingerprint for the remote server. - Continue to fill out the rest of the details, selecting the account that you created above. :::div{.success} Further details are provided throughout the rest of this documentation about [SSH Targets](/docs/infrastructure/deployment-targets/linux/ssh-target). ::: ## Create deployment project The next step is to create a project that will extract the package. - Navigate to the Projects page via **Projects ➜ All** and then click the *Add Project* button. - Give the new project an appropriate name (for example *NGINXSampleWebApp*) and once saved, go to the project's *Process* page and click **Add Step ➜ Deploy to NGINX**. 
* Give the step a name (for example *Deploy NginxSampleWebApp*).
* Ensure that the [target tag](/docs/infrastructure/deployment-targets/target-tags) matches the one that was assigned to the machine in the previous step.
* Select *NGINXSampleWebApp* as the Package ID. This Package ID is derived from the first part of the name of the package that was previously uploaded (see the *Package ID* section of the [Packaging Applications](/docs/packaging-applications/#package-id) documentation for more details).

:::figure
![](/docs/img/deployments/nginx/images/deployment_process_name_role_and_package.png)
:::

### NGINX web server

To configure NGINX to send traffic to your application you need to fill in a few details.

| Field | Meaning | Examples | Notes |
| ----- | ------- | -------- | ----- |
| **Host Name** | The `Host` header that this server will listen on. | `www.contoso.com` | **[Optional]** The value can be a full (exact) name, a wildcard, or a regular expression. A wildcard is a character string that includes the asterisk (`*`) at its beginning, end, or both; the asterisk matches any sequence of characters. Leave empty to use any `Host` header. |
| **Bindings** | Specify any number of HTTP/HTTPS bindings that should be added to the NGINX virtual server. | | |
| **Locations** | Specify any number of locations that NGINX should test request URIs against to send traffic to your application. | | |

When defining **locations** you can configure NGINX to deliver files from the file system, or proxy requests to another server. For our sample application we want requests to `http://<hostname>/` to deliver the `index.html` file from the `WWWRoot` folder of our ASP.NET Core project, and requests to `http://<hostname>/api/` to be proxied to our ASP.NET Core project running on `http://localhost:5000`.
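For orientation, the two locations described above correspond to NGINX configuration roughly like the following sketch. The paths and port are the ones assumed in this guide; this is illustrative, not the exact configuration the step generates.

```nginx
server {
    listen 80;

    # Serve the static front end from the extracted package
    location / {
        root /var/www/NGINXSampleWebApp/wwwroot;
        try_files $uri $uri/ /index.html;
    }

    # Proxy API requests to the ASP.NET Core (Kestrel) process
    location /api/ {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
    }
}
```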
:::figure
![](/docs/img/deployments/nginx/images/deployment_process_nginx_feature.png)
:::

### Starting and managing our ASP.NET Core project

To get the ASP.NET Core process started you can manually call `dotnet <your-app>.dll`; however, this has its drawbacks when trying to run the process in the background of your deployment environments. Each time you deploy a new version of the package you would then have to stop the old version and start the newly deployed one. Without an intermediary process manager, you would need to search the process list for the previous version and kill it, based on something like parsing its path to determine the correct process.

A better approach is to use a process manager. For the purposes of this simple example we will use `systemd` (as nearly all Linux distributions use this process manager) to demonstrate how the web process might be managed.

- Click the *Configure features* link at the bottom of the step and enable the *Custom deployment scripts* feature.
- Add the following code as a **bash** script for the **post-deployment** phase.
**Post-deployment Bash script to configure systemd services**

```bash
SYSTEMD_CONF=/etc/systemd/system
SERVICE_USER=$(whoami)
DOTNET=/usr/bin/dotnet
APPNAME=$(get_octopusvariable "Octopus.Action[Deploy NginxSampleWebApp].Package.NuGetPackageId")
ENVIRONMENT=Production
ROOTDIR=$(get_octopusvariable "Octopus.Action[Deploy NginxSampleWebApp].Output.Package.InstallationDirectoryPath")
SYSTEMD_SERVICE_FILE=${SYSTEMD_CONF}/${APPNAME}.service

if [ -f $SYSTEMD_SERVICE_FILE ]; then
    serviceRestartRequired=True
fi

# Application systemd service configuration
echo "Creating ${APPNAME} systemd service configuration"
cat > ${APPNAME}.service <<-EOF
[Unit]
Description=${APPNAME} service
After=network.target

[Service]
WorkingDirectory=${ROOTDIR}
User=${SERVICE_USER}
Group=${SERVICE_USER}
ExecStart=${DOTNET} ${ROOTDIR}/${APPNAME}.dll
Restart=always
RestartSec=10
SyslogIdentifier=${APPNAME}
Environment=ASPNETCORE_ENVIRONMENT=${ENVIRONMENT}
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false

[Install]
WantedBy=multi-user.target
EOF
sudo mv ${APPNAME}.service ${SYSTEMD_CONF}/${APPNAME}.service

# Application file watcher systemd service configuration
echo "Creating ${APPNAME}-Watcher systemd service configuration"
cat > ${APPNAME}-Watcher.service <<-EOF
[Unit]
Description=${APPNAME} File Watcher
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/systemctl restart ${APPNAME}.service

[Install]
WantedBy=multi-user.target
EOF
sudo mv ${APPNAME}-Watcher.service ${SYSTEMD_CONF}/${APPNAME}-Watcher.service

# Application path systemd service configuration
echo "Creating ${APPNAME}-Watcher systemd path configuration"
cat > ${APPNAME}-Watcher.path <<-EOF
[Path]
PathModified=${ROOTDIR}

[Install]
WantedBy=multi-user.target
EOF
sudo mv ${APPNAME}-Watcher.path ${SYSTEMD_CONF}/${APPNAME}-Watcher.path

if [ "$serviceRestartRequired" == "True" ]; then
    echo "Restarting ${APPNAME} service"
    sudo systemctl restart ${APPNAME}.service
    sudo systemctl restart ${APPNAME}-Watcher.path
else
    echo "Enabling and starting ${APPNAME} service"
    sudo systemctl enable ${APPNAME}.service
    sudo systemctl enable ${APPNAME}-Watcher.path
    sudo systemctl start ${APPNAME}.service
    sudo systemctl start ${APPNAME}-Watcher.path
fi
```

## Deploy

- Create a new release and deploy it to the **Production** environment.

The package will be uploaded to the server and unpacked, and the environment-specific variables replaced in the appropriate config file. The custom post-deployment script will then start the service, passing in the correct environment to ensure the relevant config is loaded.

Assuming you have followed all the previous steps of this guide, you should now be able to make changes to your website, publish directly to Octopus, and have it deploy as many times as you like.

Navigating to the host machine after deploying to the *Production* environment should result in our static AngularJS application being served up, looking something like this:

:::figure
![](/docs/img/deployments/nginx/images/production_deployment_homepage.png)
:::

Navigating to `Fetch data` will call the backend to retrieve the data and should result in a page that looks something like this:

:::figure
![](/docs/img/deployments/nginx/images/production_deployment_fetchdata_page.png)
:::

Navigating to the backend directly (by entering `http://<hostname>/api/SampleData/WeatherForecasts` into the browser address bar) should return something like this:

:::figure
![](/docs/img/deployments/nginx/images/production_deployment_api_result.png)
:::

## Learn more

- Generate an Octopus guide for [NGINX and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=NGINX).

# Keeping deployment targets up to date

Source: https://octopus.com/docs/deployments/patterns/elastic-and-transient-environments/keeping-deployment-targets-up-to-date.md

Octopus Deploy can ensure that deployment targets are kept up to date with the relevant releases.
This can be useful when [deploying to transient targets](/docs/deployments/patterns/elastic-and-transient-environments/deploying-to-transient-targets) or when new deployment targets are added to an environment. ## Triggers {#triggers} Triggers are per-project settings that execute an action in response to an event. For this example we will create an automatic deployment trigger so that machines associated with the **TradingWebServer** [target tag](/docs/infrastructure/deployment-targets/#target-roles) are automatically kept up to date with the latest releases for OctoFX. Triggers can be found by selecting the *Triggers* menu item on the project screen. ## Creating an automatic deployment trigger {#create-automatic-deployment-trigger} 1. Navigate to the project *Triggers* page. 2. Create a new trigger by selecting **Create trigger**: :::figure ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865570.png) ::: 3. Add events to the trigger. - For **Octopus 3.6** and above, select the event group *"Machine becomes available for deployment"*. 4. Select the environments (**Test A**) that this trigger applies to. 5. Select the deployment target tags (**TradingWebServer**) that this trigger applies to. :::figure ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865705.png) ::: Once the trigger has been created, it will ensure that any deployment targets matching the trigger criteria will be kept up to date with the latest release of the project. ## Triggering an automatic deployment {#trigger-automatic-deployment} To test the trigger, we will disable a deployment target, deploy to that target's environment and then re-enable the target. Octopus should automatically deploy the release to the target when it is re-enabled. 1. Disable a target with the target tag **TradingWebServer** in the **Test A** environment: :::figure ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865573.png) ::: 2. 
Create a new release of OctoFX and deploy it to the **Test A** environment. It will skip the steps that have been scoped to the **TradingWebServer** target tag because no deployment targets are associated with that tag:
   :::figure
   ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865574.png)
   :::
3. Enable the deployment target **TAWeb01**. Octopus will automatically determine that it is missing the release we just deployed. The deployment is re-queued and will run only for the **TAWeb01** target, creating a new log section below the original deployment log:
   :::figure
   ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865575.png)
   :::

## Overriding the release used for automatic deployments {#override-release-for-automatic-deployments}

Automatic deployment attempts to calculate the release to use for a project and environment (using the *current* and *successful* release that has been deployed, as shown in your Project Overview dashboard). In some cases the calculated release may not be the release that should be automatically deployed, or Octopus may not be able to find a deployment for an environment (maybe you have a release, but have not yet deployed it anywhere).

It is possible to explicitly set the release that should be automatically deployed by creating an automatic deployment release override. Overrides can be configured using the [Octopus CLI](/docs/octopus-rest-api/octopus-cli/) or through [Octopus.Client](/docs/octopus-rest-api/octopus.client). Overrides define a release for a project when deploying to an environment (this can, for example, be useful for cloud-testing automation when standing up new cloud infrastructure). For multi-tenanted deployments, overrides may be configured for each environment/tenant combination.
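As a hedged sketch of the CLI route, the Octopus CLI's `create-autodeployoverride` command can set an override from a script. The project, environment, version, server URL, and API key below are all placeholders, and the snippet only prints the command when the CLI is not installed.

```bash
# Sketch only: all names, URLs, and keys are placeholders.
CMD=(octo create-autodeployoverride
  --project "OctoFX"
  --environment "Test A"
  --version "2.6.6"
  --server "https://your-octopus-url"
  --apiKey "API-YOUR-KEY")

if command -v octo >/dev/null 2>&1; then
  "${CMD[@]}"
else
  # No Octopus CLI on this machine: show the command that would run.
  printf 'Would run: %s\n' "${CMD[*]}"
fi
```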
**Octopus.Client** ```powershell Add-Type -Path 'Octopus.Client.dll' $octopusURI = 'https://your-octopus-url' $apiKey = 'API-YOUR-KEY' $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURI, $apiKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $project = $repository.Projects.Get("Projects-1") $environment = $repository.Environments.Get("Environments-1") $release = $repository.Releases.Get("Releases-1") $project.AddAutoDeployReleaseOverride($environment, $release) $repository.Projects.Modify($project) ``` Automatic deployment overrides are cleared when a deployment is performed to the same project/environment/tenant combination as the override. For example: if an override is set for version 1.2 of HelloWorld to the Test environment and version 1.3 of HelloWorld is deployed to the Test environment, the 1.2 override will be deleted. Release overrides will be cleared as soon as they have automated an actual deployment. ## Troubleshooting automatic deployments {#troubleshoot-automatic-deployments} Octopus will attempt to automatically deploy the current releases for the environments that are appropriate for a machine. The current release is the one that was most recently *successfully* deployed as shown on the [project dashboard](/docs/projects/project-dashboard). If a release is deployed and it fails, the previous successful release will continue to be automatically deployed. Octopus will not attempt automatic deployments for a project/environment/tenant while a release is being deployed to that project/environment/tenant. Once the deployment finishes, Octopus will deploy to any machines that require the deployment. Troubleshoot automatic deployment by viewing the auto deploy logs from the diagnostics page in the configuration section or viewing the [Audit log](/docs/security/users-and-teams/auditing). 
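The per-machine decision can be sketched as follows — a hedged sketch with hypothetical data shapes, not Octopus source: a machine only needs an automatic deployment when the current release is neither already on it nor pending for it.

```python
# Rough sketch of the auto-deploy check per machine.
# Data shapes are hypothetical; this is not Octopus source code.

def machine_needs_deployment(current_release, machine_releases, pending_releases):
    """True when the machine is missing the current release and it isn't already queued."""
    return (current_release not in machine_releases
            and current_release not in pending_releases)

assert machine_needs_deployment("2.6.6", {"2.6.5"}, set())          # missing -> deploy
assert not machine_needs_deployment("2.6.6", {"2.6.6"}, set())      # already on the machine
assert not machine_needs_deployment("2.6.6", {"2.6.5"}, {"2.6.6"})  # already pending
```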
:::div{.success} **Why isn't my trigger working?** The verbose logs usually contain the reason why a project trigger didn't take any action. For example: `Auto-deploy: Machine 'Local' does not need to run release '2.6.6' for project 'My Project' and tenant because it already exists on the machine or is pending deployment.` ::: ## Next steps {#next-steps} With machines now being kept up to date automatically you may be interested in [cleaning up environments](/docs/deployments/patterns/elastic-and-transient-environments/cleaning-up-environments) to automatically remove machines when they are terminated. ## Learn more - [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1). # Web App reference architecture in Octopus Source: https://octopus.com/docs/getting-started/reference-architectures/webapp-reference-architecture.md ## Azure Web App reference architecture The [Octopus - Web App Reference Architecture](https://library.octopus.com/step-templates/87b2154a-5c8d-4c31-9680-575bb6df9789/actiontemplate-octopus-eks-reference-architecture) step populates an existing Octopus space with deployment projects demonstrating how DevOps teams can deploy applications to the Azure Web App platform. ### Configuring the step Hosted Octopus users should use the `Hosted Ubuntu` worker pool and run the step with the `octopuslabs/terraform-workertools` container image accessed via the `Container Images` feed. On-premises Octopus users need to ensure the step is run on a worker with a recent version of Terraform installed, or can use the `octopuslabs/terraform-workertools` container image on a worker with Docker installed. 
The step exposes a number of options, typically requesting credentials to the various platforms that are configured to support Azure Web App deployments: * `Azure account application ID`, `Azure account subscription ID`, `Azure account tenant ID`, and `Azure account password` require the details associated with a [service principal](https://learn.microsoft.com/en-us/purview/create-service-principal-azure) used to access the Azure platform. * `Docker Hub Username` and `Docker Hub Password` require the credentials of a [Docker Hub user](https://docs.docker.com/docker-id/) that is used to access sample Docker images from public DockerHub repositories. These credentials are also used by a sample GitHub Actions workflow that publishes Docker images. * `GitHub Access Token` requires the [GitHub access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) of a user that is used to create a new GitHub repository holding a sample application. * `Octopus API Key` requires an [API key](https://octopus.com/docs/octopus-rest-api/how-to-create-an-api-key) to the Octopus instance where the reference architecture projects and supporting resources are created. * `Octopus Space ID` requires the space ID where the reference architecture projects and supporting resources are created. Leave the default value to populate the same space as the runbook. * `Octopus Server URL` requires the URL of the Octopus instance where the reference architecture projects and supporting resources are created. Leave the default value to populate the same instance as the runbook. * `Optional Terraform Apply Args` allows custom arguments to be passed to the `terraform apply` command. The Terraform module applied by this step exposes a number of optional variables that can be defined as apply arguments. 
These arguments can be defined by setting this field to a value like `-var=project_template_project_name=renamed -var=infrastructure_project_name=renamed2 -var=frontend_project_name=renamed3 -var=products_project_name=renamed4 -var=audits_project_name=renamed5`: * `infrastructure_project_name` defines the name of the `_ Azure Web App Infrastructure` project * `project_template_project_name` defines the name of the `Docker Project Templates` project * `frontend_project_name` defines the name of the `Azure WebApp Octopub Frontend` project * `products_project_name` defines the name of the `Azure WebApp Octopub Products` project * `audits_project_name` defines the name of the `Azure WebApp Octopub Audits` project * `Optional Terraform Init Args` allows custom arguments to be passed to the `terraform init` command. Leave this field blank unless you have a specific use case. ### Reference projects The step creates a number of reference projects demonstrating how to deploy applications to an Azure web app. The `_ Azure Web App Infrastructure` project contains a runbook called `Create Web App`. This runbook creates an [Azure service plan](https://learn.microsoft.com/en-us/azure/app-service/overview-hosting-plans) and three [Azure web app services](https://azure.microsoft.com/en-au/products/app-service/web) - one for each of the sample microservices deployed by the other projects. The `Azure WebApp Octopub Audits`, `Azure WebApp Octopub Frontend`, and `Azure WebApp Octopub Products` projects deploy the [Octopub](https://github.com/OctopusSolutionsEngineering/Octopub) sample application to an Azure Web App, perform a smoke test, and scan the [SBOM](https://www.cisa.gov/sbom) associated with each image using [Trivy](https://aquasecurity.github.io/trivy/). Each of these projects has a number of supporting runbooks to inspect Kubernetes resources.
The `_ Deploy Azure Web App Octopub Stack` project uses the [Deploy a release](/docs/projects/coordinating-multiple-projects/deploy-release-step) step to orchestrate the deployment of the individual microservices that make up the Octopub sample application. Orchestration projects provide a convenient way of promoting multiple related releases between environments in a predefined order, which may be required when applications are tightly bound or a well-defined set of release versions must be installed as a group. The `Docker Project Templates` project contains a runbook called `Create Template Github Node.js Project` that: 1. Creates a new GitHub repository 2. Adds [GitHub Actions secrets](https://docs.github.com/en/rest/actions/secrets) to allow [workflows](https://docs.github.com/en/actions/using-workflows/about-workflows) to interact with the Octopus server and the DockerHub repository 3. Populates the repo with a sample Node.js web application and a GitHub Actions workflow to build the application, push it to DockerHub, and create a release in Octopus This runbook is an example of platform engineering where DevOps teams can bootstrap sample applications with best practices such as versioning, security scanning, and CI/CD pipelines provided as part of a common base template. # Installing the Tentacle VM extension via the Azure Portal Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-the-azure-portal.md :::div{.problem} The VM extension is deprecated and no longer supported. All customers using the VM extension should migrate to [DSC](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc).
::: After creating a virtual machine on Azure using the management portal, browse to the virtual machine, then click on **Extensions**: :::figure ![Azure VM Properties - Extensions Tab](/docs/img/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/azure-portal-extensions-menu-item.png) ::: Click **Add** to add a new extension. :::figure ![Azure VM Properties - Add extensions button](/docs/img/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/azure-portal-extensions-add.png) ::: Select the **Octopus Deploy Tentacle Agent** extension, and click **Create**. :::figure ![Add Extension - Create Octopus Deploy Tentacle Agent](/docs/img/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/azure-portal-extensions-about-extension.png) ::: Fill in the settings, and click **OK**. :::figure ![ Octopus Deploy Tentacle Agent properties](/docs/img/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/azure-portal-extensions-extension-properties.png) ::: A deployment will be initiated which adds the extension to your virtual machine. ## Settings The settings for the extension are: **Octopus Server URL**: URL to your Octopus Server. You'll need your own Octopus Server (possibly also running on Azure), and you should [consider using HTTPS](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https/). The extension will use the [Octopus REST API](/docs/octopus-rest-api) against this URL to register the machine. **API Key**: [Your API key](/docs/octopus-rest-api/how-to-create-an-api-key/). This key will only be used when registering the machine with the Octopus Server; it isn't used for [subsequent communication](/docs/security/octopus-tentacle-communication). **Environments**: The name of the [environment](/docs/infrastructure/environments) to add the machine to. You can specify more than one by using commas; for example: `UAT1,UAT2`. 
**Roles**: The [target tags](/docs/infrastructure/deployment-targets/target-tags) to give to the machine. Again, separate them using commas for more than one, for example: `web-server,app-server`. **Communication Mode**: How the Tentacle will communicate with the server - it will either use **Polling** to reach out to the server, or **Listening** to wait for connections from the server. **Port**: The port on which the server should contact the Tentacle (if Tentacle is set to Listen), or the port on which the Tentacle should contact the server (if in Polling mode). In Polling mode, the default value is 10943. In Listening mode, the default value is 10933. **Public Hostname Configuration**: When in **Listening** mode, you can specify how the Server should address the Tentacle. You can specify **Public IP** to use the public IP address (as returned from [api.ipify.org](https://api.ipify.org)), **FQDN** to use the fully qualified domain name (useful for Active Directory networks), **ComputerName** to use the local hostname, or **Custom** to specify your own value. **Custom Public Hostname**: When in **Listening** mode, and the **Public Hostname Configuration** is set to **Custom**, you can supply the DNS name/IP address the Server should use. After entering the extension settings, click **OK**, and the extension will be installed. After a few minutes, the machine should appear in the environments tab of your Octopus Server. If it doesn't, please read the [Diagnosing issues](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/diagnosing-issues) section. :::div{.hint} If you need the ability to customize more of the installation, the [CLI](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-the-azure-cli/) and [PowerShell](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-powershell/) methods expose more options than the Azure Portal.
For even more customization, you might want to consider using the [Azure Desired State Configuration (DSC) extension](https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview) in conjunction with the [OctopusDSC](https://www.powershellgallery.com/packages/OctopusDSC) resource. ::: # Built-in Worker Source: https://octopus.com/docs/infrastructure/workers/built-in-worker.md Octopus Server comes with a built-in worker which enables you to conveniently run parts of your deployment process on the Octopus Server without the need to install a Tentacle or other deployment target. This is very convenient when you are getting started with Octopus Deploy, but it does come with several security implications. This page describes how to configure the built-in worker for a variety of scenarios. :::div{.hint} The built-in worker is only available on [self-hosted Octopus](/docs/getting-started#self-hosted-octopus) instances. [Octopus Cloud](/docs/octopus-cloud) customers have access to [dynamic worker pools](/docs/infrastructure/workers/dynamic-worker-pools), which provide a pre-configured worker on demand. ::: ## Built-in Worker When the built-in Worker is executed, the Octopus Server spawns a new process for Calamari. This conveniently allows a default Octopus setup to enable features like running script steps on the server and Azure deployments. However, this convenience comes at a cost: **security**. ## Default configuration By default, Octopus Server runs as the highly privileged `Local System` account on Windows. We typically recommend running Octopus Server as a different account, either a User or Managed Service Account (MSA), so you can grant specific privileges to that account. When you first install Octopus Server, the built-in worker is configured to run using the same user account as the Octopus Server itself. This means your deployment process can do the same things the Octopus Server can do.
## Running tasks on the Octopus Server as a different user You can configure the built-in worker to execute tasks as a different user account. This user account can be a down-level account with very restricted privileges. ``` Octopus.Server.exe builtin-worker --username=OctopusWorker --password=XXXXXXXXXX ``` All tasks which execute using the built-in worker will run as that user account. The only gotcha is that the user account running the Octopus Server must be a member of the `BUILTIN\Administrators` group to launch new processes as the built-in worker user and impersonate the built-in worker user. This same command-line tool can automatically configure the correct user accounts on the local machine, and wire it all up for you. ``` Octopus.Server.exe builtin-worker --auto-configure ``` Which results in something like this: ``` Creating a user account on the local machine called 'OctopusServer' and adding it to the 'BUILTIN\Administrators' group. Granting the 'SeServiceLogonRight' privilege to the 'MACHINE-123\OctopusServer' user account. Configuring the 'OctopusDeploy' Windows Service to start as the 'MACHINE-123\OctopusServer' user account. [SC] ChangeServiceConfig SUCCESS Creating a down-level user account on the local machine called 'OctopusWorker' for the built-in worker. The built-in worker is now configured to execute scripts as MACHINE-123\OctopusWorker. Testing the built-in worker configuration. Built-in worker: SUCCESS The success of this configuration depends on both the source and target user accounts. Current User: MACHINE-123\admin-user Target User: MACHINE-123\OctopusWorker Step 1: Testing credentials... PASSED Step 2: Testing thread impersonation... PASSED Step 3: Testing process impersonation... PASSED Step 4: Check the process impersonation worked as expected... PASSED NOTE: This test succeeded when starting from the user account 'MACHINE-123\admin-user'. If the Octopus Server usually runs as a different user account these results may vary. 
The same test will be run each time the Octopus Server starts to be certain the built-in worker is configured correctly. These changes require a restart of the Octopus Server. ``` ## Switching off the built-in Worker The built-in Worker can be switched off. If it is switched off, then the Octopus Server does not invoke Calamari locally. This will mean deployments containing steps that would have run on the built-in worker (Azure, AWS, Terraform, scripts steps targeted at the server) will fail unless an [external worker](/docs/infrastructure/workers) is provisioned. Toggle the built-in worker on or off from the **Configuration ➜ Features** page. The built-in Worker will also not be used if any workers are added to the [default worker pool](/docs/infrastructure/workers/worker-pools), but, unless it is switched off, Octopus will revert to using the built-in worker if all workers are later removed from the default pool. Note that [some steps](/docs/infrastructure/workers/#Where-steps-run) run inside the Octopus Server process (not using Calamari), don't need a worker and are not affected by this setting. ## Troubleshooting You cannot run the Octopus Server as the `Local System` account and successfully launch the built-in worker as a different user account. Please use the `--auto-configure` option, or create a user account as a member of the `BUILTIN\Administrators` group. 
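The precedence rules described above — external workers in the default pool are used when present, and Octopus reverts to the built-in worker only if the pool empties while the feature is still enabled — can be sketched as a small decision function (illustrative only, not Octopus source):

```python
# Illustrative decision logic for which worker runs a step targeting the
# default worker pool. Not Octopus source code.

def resolve_worker(builtin_enabled, default_pool_workers):
    """Return the worker that handles a step, or None if the step would fail."""
    if default_pool_workers:
        # External workers in the default pool take precedence
        # (actual selection between them is simplified here).
        return default_pool_workers[0]
    if builtin_enabled:
        return "built-in"  # fallback when all workers are removed from the pool
    return None            # switched off and no external worker: the step fails

assert resolve_worker(True, []) == "built-in"
assert resolve_worker(True, ["worker-1"]) == "worker-1"
assert resolve_worker(False, []) is None
```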
## Learn more - [Worker blog posts](https://octopus.com/blog/tag/workers/1) # octopus account aws Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-aws.md Manage AWS accounts in Octopus Deploy ```text Usage: octopus account aws [command] Available Commands: create Create an AWS account help Help about any command list List AWS accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus account aws [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account aws list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Project coordination code samples Source: https://octopus.com/docs/projects/coordinating-multiple-projects/project-coordination-code-samples.md These samples show how to perform various tasks related to project coordination. See the [OctopusDeploy-Api](https://github.com/OctopusDeploy/OctopusDeploy-Api) repository for further API documentation and examples using the [raw REST API](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/REST/PowerShell) or Octopus.Client in [C#](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/Octopus.Client/Csharp), [PowerShell](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/Octopus.Client/PowerShell) or [LINQPad](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/Octopus.Client/LINQPad).
:::div{.success} These examples use the [Octopus.Client](/docs/octopus-rest-api/octopus.client/) library; see the [Loading in an Octopus Step](/docs/octopus-rest-api/octopus.client/using-client-in-octopus/) section of the [Octopus.Client](/docs/octopus-rest-api/octopus.client) documentation for details on how to load the library from inside Octopus using PowerShell or C# Script steps. ::: ## Querying the current state The best way to get the current state for one or more projects is to use the Dashboard API, which is also used by the dashboards in the WebUI: **Octopus.Client** ```csharp var globalDashboard = repository.Dashboards.GetDashboard().Items; var projectDashboard = repository.Dashboards.GetDynamicDashboard(projects, environments).Items; ``` **PowerShell** ```powershell $repository.Dashboards.GetDashboard().Items ``` **Http** ```text http://localhost/api/dashboard ``` ## Viewing recent deployments The following code returns the deployments started in the last 7 days: ```csharp var projects = repository.Projects.FindAll().Select(p => p.Id).ToArray(); var environments = repository.Environments.FindAll().Select(e => e.Id).ToArray(); List<DeploymentResource> recentDeployments = new List<DeploymentResource>(); var after = DateTimeOffset.Now.AddDays(-7); repository.Deployments.Paginate(projects, environments, page => { recentDeployments.AddRange(page.Items.Where(d => d.Created >= after)); // Deployments are returned most recent first return page.Items.All(i => i.Created >= after); } ); ``` ## Promoting a group of projects This example finds all the releases that are in UAT but not Production. It then queues them for deployment to Production and waits for them to complete.
```csharp var environments = repository.Environments.GetAll(); var testEnvId = environments.First(e => e.Name == "UAT").Id; var prodEnvId = environments.First(e => e.Name == "Prod").Id; var current = repository.Dashboards.GetDashboard().Items; var toBePromoted = from d in current where d.EnvironmentId == testEnvId && d.State == TaskState.Success let prod = current.FirstOrDefault(p => p.EnvironmentId == prodEnvId && p.ProjectId == d.ProjectId && p.TenantId == d.TenantId) where prod == null || prod.ReleaseId != d.ReleaseId select new DeploymentResource { ProjectId = d.ProjectId, ReleaseId = d.ReleaseId, ChannelId = d.ChannelId, TenantId = d.TenantId, EnvironmentId = prodEnvId }; var tasks = toBePromoted .Select(d => repository.Deployments.Create(d)) .Select(d => repository.Tasks.Get(d.TaskId)) .ToArray(); repository.Tasks.WaitForCompletion(tasks, timeoutAfterMinutes: 0); var completed = repository.Tasks.Get(tasks.Select(t => t.Id).ToArray()); if(completed.Any(c => c.State != TaskState.Success)) throw new Exception("One or more projects did not complete successfully"); ``` ## Queuing a project to run later This example re-queues the currently executing project at 3am the next day. ```csharp var releaseId = OctopusParameters["Octopus.Web.ReleaseLink"].Split('/').Last(); var tomorrow3amServerTime = new DateTimeOffset(DateTimeOffset.Now.Date, DateTimeOffset.Now.Offset).AddDays(1).AddHours(3); repository.Deployments.Create( new DeploymentResource() { ReleaseId = releaseId, ProjectId = OctopusParameters["Octopus.Project.Id"], ChannelId = OctopusParameters["Octopus.Release.Channel.Id"], EnvironmentId = OctopusParameters["Octopus.Environment.Id"], QueueTime = tomorrow3amServerTime } ); Console.WriteLine($"Queued for {tomorrow3amServerTime}"); ``` ## Failing a deployment if another deployment is running This example uses the dynamic dashboard API to check whether a different project is currently deploying to the same environment. 
Note that Octopus [restricts](/docs/administration/managing-infrastructure/run-multiple-processes-on-a-target-simultaneously) what can run at the same time already. ```csharp var otherProject = repository.Projects.FindByName("Other Project"); var environmentId = OctopusParameters["Octopus.Environment.Id"]; var dash = repository.Dashboards.GetDynamicDashboard(new[] { otherProject.Id }, new[] { environmentId }); if (dash.Items.Any(i => i.State == TaskState.Queued || i.State == TaskState.Executing)) throw new Exception($"{otherProject.Name} is currently queued or executing"); ``` ## Failing a deployment if a dependency is not deployed This example retrieves the last release to the same environment of a different project and fails if it is not the expected release version. ```csharp var requiredVersion = OctopusParameters["OtherProjectRequiredVersion"]; var otherProject = repository.Projects.FindByName("Other Project"); var environmentId = OctopusParameters["Octopus.Environment.Id"]; var dash = repository.Dashboards.GetDynamicDashboard(new[] { otherProject.Id }, new[] { environmentId }); var last = dash.Items.SingleOrDefault(i => i.IsCurrent); if (last == null || last.ReleaseVersion != requiredVersion) throw new Exception($"This project requires version {requiredVersion} of {otherProject.Name} to be deployed to the same environment"); ``` ## Triggering and waiting for another project This example finds the latest release for a different project and deploys it if it is not currently deployed to the environment. 
```csharp var environmentId = OctopusParameters["Octopus.Environment.Id"]; var otherProject = repository.Projects.FindByName("Other Project"); var latestRelease = repository.Projects.GetReleases(otherProject).Items.FirstOrDefault(); var dash = repository.Dashboards.GetDynamicDashboard(new[] { otherProject.Id }, new[] { environmentId }); var last = dash.Items.Single(i => i.IsCurrent); if (latestRelease != null && last.ReleaseId != latestRelease.Id) { var deployment = repository.Deployments.Create( new DeploymentResource() { ReleaseId = latestRelease.Id, ProjectId = latestRelease.ProjectId, ChannelId = latestRelease.ChannelId, EnvironmentId = environmentId, } ); var task = repository.Tasks.Get(deployment.TaskId); repository.Tasks.WaitForCompletion(task); } ``` ## Waiting for another project to reach a certain stage This example builds on the previous one by waiting until a particular step is complete instead of the whole task. ```csharp // instead of the line repository.Tasks.WaitForCompletion(task); ActivityStatus step1Status = ActivityStatus.Pending; do { Thread.Sleep(1000); var details = repository.Tasks.GetDetails(task); var log = details.ActivityLogs.Single(); if (log.Status != ActivityStatus.Pending) step1Status = log.Children.Single(c => c.Name.StartsWith("Step 1:")).Status; step1Status.Dump(); } while (step1Status == ActivityStatus.Pending || step1Status == ActivityStatus.Running); task = repository.Tasks.Refresh(task); ``` # Scheduled deployment triggers Source: https://octopus.com/docs/projects/project-triggers/scheduled-deployment-trigger.md Scheduled deployment triggers allow you to define an unattended behavior for your [projects](/docs/projects) that will cause an automatic deployment of a release based on a defined recurring schedule. ## Schedule Scheduled deployment triggers provide a way to configure your projects to create, deploy, and promote releases on a defined schedule.
This can be useful in different scenarios, for instance: * Run a deployment to clean up your test environments once a day at 9:00pm. * Run a deployment to health check your services every hour. * Run a deployment to provision a new test environment at 6:00am, Monday - Friday. * Run a deployment to promote the latest build from staging to production on the 1st day of the month. * Run a deployment to perform maintenance on the last Saturday of the month. ## Add a scheduled trigger 1. In a project, select **Triggers**, then **Add Trigger ➜ Scheduled**. 2. Give the trigger a name. 3. Set the trigger schedule. The options give you control over how frequently the trigger will run and at what time. You can schedule a trigger based on either days of the week or dates of the month. You can also use a [CRON expression](#cron-expression) to configure when the trigger will run. 4. Select the action the trigger should take when executed. - **Deploy latest release** re-deploys or promotes a release between environments. You need to specify the **source environment** and the **destination environment**. The latest successful release in the source environment will be deployed to the destination environment. - **Deploy new release** creates a new release and deploys it to the environment you specify in the **destination environment**. - **Deploy latest release to an environment** deploys the latest release to an environment in the first phase of the selected channel's lifecycle. You need to specify the **destination environment**. If you are using [channels](/docs/releases/channels) you may also select the channel to use when deploying the release. The latest successful deployment for the specified channel and source environment will be deployed to the same channel and destination environment. If no channel is specified, the latest successful release from any channel and source environment will be selected for deployment.
If you are using [tenants](/docs/tenants) you can select the tenants that will receive a deployment. For each tenant, the latest successful release in the source environment will be deployed to the destination environment. When a tenant is not connected to the source environment, the latest successful release that has been deployed to the source environment and meets the lifecycle requirements for promotion to the destination environment will be deployed. 5. Save the trigger. :::div{.hint} All schedule options run based on CRON expressions. The other options provide a convenient way of setting up the schedule without worrying about the syntax. A custom CRON expression provides you with more fine-grained control over the exact schedule. ::: ### Using CRON expressions {#cron-expression} CRON expressions allow you to configure a trigger that will run according to the specific CRON expression. Example: `0 0 06 * * Mon-Fri` Runs at 06:00 AM, Monday through Friday. :::div{.success} The CRON expression must consist of all 6 fields; there is an optional 7th field for "Year". ::: | Field name | Allowed values | Allowed special characters | Required | | ------------- |:-------------------- |:--------------------------- | :------: | | Seconds | 0-59 | * , - / | Y | | Minutes | 0-59 | * , - / | Y | | Hours | 0-23 | * , - / | Y | | Day of month | 1-31 | * , - / ? L W | Y | | Month | 1-12 or JAN-DEC | * , - / | Y | | Day of week | 0-6 or SUN-SAT | * , - / ? L # | Y | | Year | 0001–9999 | * , - / | N | # Creating a release Source: https://octopus.com/docs/releases/creating-a-release.md ## How to create a release in Octopus Deploy 1. With your deployment process defined, you can create a release on the project's Overview page, by clicking **CREATE RELEASE**. :::figure ![Create release](/docs/img/shared-content/releases/images/create-release.png) ::: 2.
Depending on the type of steps you configured in the deployment process, there could be additional options available. For instance, if you're using a step to deploy a package, there will be a package section where you can specify which version of the package to use in the release. 3. Give the release a version number, add any release notes you'd like to include, and click **SAVE**. You can fully automate your build and deployment pipeline, so that the releases are generally created automatically. For more information on this topic, see our [build server documentation](/docs/packaging-applications/build-servers). ## Releases By navigating to the project's overview page and selecting **Releases**, you can see all the releases that have been created for the project. If you want to deploy a release or [schedule a deployment](#scheduling-a-deployment), click on the release. ## Deploy your releases After creating the release, if the [lifecycle](/docs/releases/lifecycles) associated with the project is configured to deploy automatically to its first environment, the release will start to be deployed as soon as it is created. If the release is not deployed automatically, you can click **DEPLOY TO (Environment)** where *(Environment)* is the first environment in the project's lifecycle. Alternatively, you can click **Deploy to...** to select a specific environment to deploy to. ### Schedule a deployment {#scheduling-a-deployment} 1. Select the release you want to schedule for deployment. 1. Click **DEPLOY TO...** or **DEPLOY TO (Environment)**. 1. If you selected **DEPLOY TO...**, select the environment to be deployed to. 1. Expand the **WHEN** section and select **later**. 1. Specify the time and date you would like the deployment to run. Note: deployments can only be scheduled up to 30 days in advance. 1. Specify a timeout period. If the deployment does not start within the specified timeout period, the deployment will not run. 1. Click **SAVE**.
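The two scheduling constraints above — a 30-day advance limit and a start timeout — can be sketched as follows (hypothetical helper functions, not an Octopus API):

```python
# Sketch of the scheduled-deployment rules described above.
# Helper names are hypothetical; this is not an Octopus API.
from datetime import datetime, timedelta

MAX_ADVANCE = timedelta(days=30)

def can_schedule(now, queue_time):
    """Deployments can only be scheduled up to 30 days in advance."""
    return now <= queue_time <= now + MAX_ADVANCE

def has_expired(queue_time, timeout, now):
    """If the deployment has not started within the timeout period, it will not run."""
    return now > queue_time + timeout

now = datetime(2024, 1, 1, 9, 0)
assert can_schedule(now, now + timedelta(days=7))
assert not can_schedule(now, now + timedelta(days=45))  # beyond the 30-day limit
assert has_expired(now, timedelta(hours=1), now + timedelta(hours=2))
```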
Deployments scheduled for the future can be viewed on the Project Overview page, on the **Dashboard**, and in the **Tasks** section of the Octopus Web Portal.

### Schedule deployments with the Octopus CLI

If you are using the [Octopus CLI](/docs/octopus-rest-api/cli), you can schedule a deployment with the `--deploy-at` option:

```powershell
octopus release deploy --deploy-at "2014-07-12 17:54:00 +11:00" --project HelloWorld --version 1.0.0 --environment Production
```

### Exclude steps from releases

1. Select the release you want to deploy.
1. Click **DEPLOY TO...** or **DEPLOY TO (Environment)**.
1. If you selected **DEPLOY TO...**, select the environment to be deployed to.
1. Expand the **Excluded steps** section and use the checkboxes to select the steps to exclude from the deployment.
1. Click **SAVE**.

### Modify the guided failure mode

Guided failure mode asks users to intervene when a deployment encounters an error. Learn more about [guided failures](/docs/releases/guided-failures).

1. Select the release you want to deploy.
1. Click **DEPLOY TO...** or **DEPLOY TO (Environment)**.
1. If you selected **DEPLOY TO...**, select the environment to be deployed to.
1. Expand the **Failure mode** section, and select the mode you want to use.
1. Click **SAVE**.

### Deploy to a specific subset of deployment targets

You can deploy releases to a specific subset of deployment targets.

1. Select the release you want to deploy.
1. Click **DEPLOY TO (Environment)**.
1. Expand the **Preview and customize** section.
1. Expand the **Deployment Targets** section.
1. Select your target selection method:
   - **Include all applicable deployment targets** (default)
   - **Include specific deployment targets**: Choose individual targets to include
   - **Exclude specific deployment targets**: Choose individual targets to exclude
   - **Include specific target tags**: Include targets with selected tags
   - **Exclude specific target tags**: Exclude targets with selected tags
1. Click **DEPLOY**.
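In case the `--deploy-at` format shown earlier isn't obvious, it's a local date and time followed by an explicit UTC offset. As an illustration only (this Python snippet simply parses the same string; it is not part of the Octopus CLI):

```python
from datetime import datetime, timedelta

# The same timestamp passed to --deploy-at above: local time plus a UTC offset
deploy_at = datetime.strptime("2014-07-12 17:54:00 +11:00", "%Y-%m-%d %H:%M:%S %z")
print(deploy_at.isoformat())  # 2014-07-12T17:54:00+11:00
print(deploy_at.utcoffset() == timedelta(hours=11))  # True
```

Including the offset means the scheduled time is unambiguous regardless of the timezone of the machine running the CLI or the Octopus Server.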
### Variable snapshot For each release you create, a snapshot is taken of the project variables. You can review the variables for a release from within a project: 1. Using the project side menu, navigate to **Deployments ➜ Releases** 1. Select the release that you wish to view the variable snapshot for 1. On the release page scroll to the **Variable Snapshot** section 1. Click **SHOW SNAPSHOT** This lets you see the variables as they existed when the release was created. :::figure ![The Octopus release screen with variable snapshots highlighted](/docs/img/releases/images/release-variable-snapshot-section.png) ::: You can update the variables by clicking **UPDATE VARIABLES**. This can be useful when: - The release has not been deployed yet, but the variables have changed since the release was created. - The release needs to be **redeployed** and the variables have changed since the release was created. - The release failed to deploy due to a problem with the variables and you need to update the variables and redeploy the release. After you've updated the variables, the release will use the updated variables when it is deployed. #### Variable snapshot for Git projects The variable snapshot for Git projects is a combination of the variables on the selected branch and the sensitive variables stored in the database. When updating the variable snapshot, the new snapshot is taken from the current tip of the Git reference that was used to create the release. If this reference no longer exists, the variable snapshot cannot be updated. :::figure ![Screenshot of Octopus Release page showing process snapshot with Git reference main and commit 047cb76 and variable snapshot with reference main and commit 617aa79](/docs/img/releases/git-variables-release-snapshot.png) ::: Updating the variable snapshot *only* updates the variables (and not the deployment process). After updating, the commit for the process snapshot and variables snapshot will be different. 
## Custom fields

Custom fields can be added to releases when they are created. Custom fields are a set of key/value pairs of data that can be used:

- As part of the naming of ephemeral environments.
- During deployments within scripts and other steps.

:::div{.hint}
Support for custom fields on releases is available from version `2025.4`.
:::

:::div{.hint}
Support for custom fields on releases is available from v2.19.0 of the Octopus CLI.
:::

### Required custom fields

Channels can define which custom fields are required when creating a release within the channel. The Octopus Web Portal will prompt you to provide the value for any required custom fields when creating a release. Learn more about [configuring custom fields in channels](/docs/releases/channels).

:::figure
![Screenshot of Octopus release page showing entering the value of a custom field for a Pull Request Number](/docs/img/releases/images/create-release-custom-fields.png)
:::

### Using custom fields in scripts and steps

Custom fields can be used within scripts and steps with the variable `#{Octopus.Release.CustomFields[_name_]}`.

### Restrictions

The following restrictions apply to custom fields on releases:

- A maximum of 10 custom fields can be added to each release.
- The maximum length of the key and value of each custom field is 150 characters.

## Create a release based on a previous release

Sometimes you may need to create a new release based on a previous release, for example, when a defect discovered during testing requires changing a package used by the release before you publish a new version. This can be done via the release summary page.

:::figure
![Screenshot of Octopus release page showing an existing release and its selected packages](/docs/img/releases/images/octopus-existing-release-selected-packages.png)
:::

From your existing release, select the **Copy** option. You will be redirected to the create release page.
:::figure
![Screenshot of Octopus release page showing new Copy option in the overflow menu](/docs/img/releases/images/octopus-releases-copy-release.png)
:::

The following release properties will be pre-populated:

- Channel of the source release
- Version of the source release
- Git reference from the source release, if it's a version-controlled project
  - *If the source release was created from the `features/dark-mode` branch, the new release will use the same branch (although the head commit may have changed). If the source release uses a specific commit, the new release will use the same commit.*
- All package versions from the source release
- All Git resources from the source release
- Release notes from the source release
- Custom fields from the source release

:::figure
![Screenshot of Octopus release page showing copied release with all existing package versions selected](/docs/img/releases/images/octopus-copied-release-with-package-versions-pre-selected.png)
:::

Before saving the new release, you will need to update the release version. Attempting to save the release with a version that already exists will result in a validation error.

:::figure
![Screenshot of Octopus release page showing copied release with updated version for a single package](/docs/img/releases/images/octopus-copied-release-with-package-versions-updated.png)
:::

## Older versions

- In versions earlier than **2026.1.4812**, there is no option to create a release from a previous release.

# Lifecycles

Source: https://octopus.com/docs/releases/lifecycles.md

[Getting Started - Lifecycles](https://www.youtube.com/watch?v=ofc-u61ukRA)

Lifecycles give you control over the way releases of your software are promoted between your environments. You can also use them to automate deployments and set retention policies.
Lifecycles are managed from the library page by navigating to **Deploy ➜ Lifecycles**: :::figure ![The lifecycles area of the Octopus Web Portal](/docs/img/shared-content/releases/images/lifecycles.png) ::: Octopus automatically creates a [default lifecycle](/docs/releases/lifecycles/#default-lifecycle) for you that contains a phase for each environment that you've created in Octopus Deploy. When you deploy your software it passes through the phases of the lifecycle in order. Lifecycles enable a number of advanced deployment workflow features: - **Control the order of promotion**: for example, to prevent a release from being deployed to *production* if it hasn't been deployed to *staging*. - **Automate deployment to specific environments**: for example, automatically deploy to *test* as soon as a release is created. - **Retention policies**: specify the number of releases to keep depending on how far they have progressed through the lifecycle. :::div{.hint} Lifecycles don't apply to [Runbooks](/docs/runbooks/). Learn more about the [differences between Runbooks and Deployments](/docs/runbooks/runbooks-vs-deployments). ::: ## Phases A phase represents a stage in your deployment lifecycle. You deploy to phases in order, and you can choose how releases move between phases. For example, you can configure a lifecycle so that there must be a successful deployment to the Development phase before you can proceed to the Testing phase. You can also have a completely optional phase. This allows you to release to an environment in the next phase without being required to deploy to any in the optional phase. Phases can also include multiple environments. This can be useful where you have more than one Testing environment. ### No phases When no phases are defined in a lifecycle, Octopus will use a default convention to control which environments may be deployed to, and in which order. 
The default convention forces releases to be deployed to each environment in the order that they are defined on the environments page.

:::div{.warning}
When you add a new environment to Octopus, it will automatically be included in the list of environments available to the default convention. To prevent Octopus from applying the default convention, define your own phases or restrict your lifecycle to specific environments.
:::

### Phases with environments

It's possible to add one or multiple environments to a phase. When a phase has environments added to it, this defines which ones can be deployed to during this phase of the lifecycle.

**Tenants and automatic environments**

When adding an environment to a phase, you can choose whether you want deployments to begin automatically once the release enters the phase, or if you want users to manually queue the deployment.

For manual deployments, users can choose the desired tenants for deployment before the release begins (i.e. untenanted deployment, deployment to all tenants, or deployment to specific tenants). However, for environments set to deploy automatically, deployments begin as soon as the release enters the phase, so users cannot choose which tenants to deploy to.

The behavior for a deployment to an automatic environment is as follows:

1. If tenanted deployments are allowed, attempt to enqueue a new deployment for each tenant connected to the automatic environment(s), taking the following into consideration:
   1. Filter the tenants by any tenant filter defined on the channel for the release being considered for deployment.
   2. Further filter the tenants based on promotion rules (e.g. deploy to UAT before Production for this tenant).
2. If untenanted deployments are allowed, attempt to enqueue the untenanted deployment to the automatic environment(s).
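The tenant selection described above can be sketched as follows. This is a minimal Python illustration of the decision flow only, not Octopus's actual implementation; all names and data are invented for the example:

```python
def tenants_to_deploy(automatic_environments, connected_tenants,
                      channel_allows, meets_promotion_rules):
    """Sketch: which (environment, tenant) deployments get enqueued."""
    deployments = []
    for env in automatic_environments:
        # 1a. Filter by any tenant filter defined on the release's channel
        tenants = [t for t in connected_tenants.get(env, []) if channel_allows(t)]
        # 1b. Further filter by promotion rules (e.g. UAT before Production)
        tenants = [t for t in tenants if meets_promotion_rules(t, env)]
        deployments.extend((env, t) for t in tenants)
    return deployments

# Example: tenant-c is excluded by the channel, tenant-b hasn't passed UAT yet
result = tenants_to_deploy(
    ["Production"],
    {"Production": ["tenant-a", "tenant-b", "tenant-c"]},
    channel_allows=lambda t: t != "tenant-c",
    meets_promotion_rules=lambda t, env: t != "tenant-b",
)
print(result)  # [('Production', 'tenant-a')]
```

The key point is that both filters are applied before anything is enqueued, so a tenant excluded by either the channel filter or a promotion rule is silently skipped rather than failing the deployment.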
### Phases without environments

When a phase is defined without any environments added to it, this phase of the lifecycle will deploy to all the environments that haven't been *explicitly added* to the lifecycle in previous phases.

:::div{.hint}
Any future environments you define will also be deployed to as part of this phase of the Lifecycle.
:::

### Phases with priority

When a phase is defined as a priority, deployments to the phase will be created as priority tasks unless otherwise specified. When creating a deployment via the UI, the priority checkbox will be selected by default.

:::div{.info}
From version `2025.2.7584`, **Priority Lifecycle Phase** and **Deployment with Priority** will require an Enterprise license.
:::

## Create a new lifecycle

1. From the Lifecycle page, click on the **ADD LIFECYCLE** button.
2. Give the Lifecycle a name and add a description.
3. Define the Retention Policy. Retention policies define how long releases are kept for, and how long extracted packages and files are kept on Tentacles. The default for both is to keep all. Learn more about [Retention Policies](/docs/administration/retention-policies).
4. Click **ADD PHASE** to define the phases of the lifecycle.
5. Give the phase a name.
6. Click **ADD ENVIRONMENT** to define which environments can be deployed to during this phase of the lifecycle. You can add one or multiple environments, or leave the default **Any Environments** option selected. Note that if you choose to use **Any Environments**, this phase of the Lifecycle will deploy to all the environments that haven't been explicitly added to the Lifecycle in previous phases. Any future environments you define will also be deployed to as part of this phase of the Lifecycle.
7. By default, users must manually queue the deployment to the environment. If you would like the deployment to occur automatically as soon as the release enters the phase, select *Deploy automatically...*.
If you have a project set up with a [built-in package repository trigger](/docs/projects/project-triggers/built-in-package-repository-triggers) (formerly Automatic release creation) and set your first phase and environment to automatically deploy, pushing a package to the internal library will trigger both a release and a deployment to that environment.

8. Set the *Required to progress* option. This determines how many environments must be deployed to before the next phase can be activated. The options are:
   - **All must complete**.
   - **A minimum of x must complete**. If you choose this option and, for example, have 5 environments in the phase and choose **2**, then 2 of the 5 environments must be deployed to before the next phase can be activated.
   - **Optional**. This lets you skip a phase when it is reached in the Lifecycle. This allows you to release to environments in the next phase without being required to deploy to _any_ in the optional phase. The standard lifecycle progression rules still determine when an optional phase can be deployed to. Optional phases may be useful for scenarios such as providing a `Testing` phase that can optionally be deployed to, but isn't crucial to progressing on to `Production`.

:::div{.warning}
**Automatic deployments not evaluated for Optional phases**

Optional phases do not execute automatic deployments. If you want to deploy releases automatically to any environments in a phase, use one of the other *Required to progress* options.
:::

![Optional Phase](/docs/img/releases/lifecycles/images/optional-phase.png)

If you want to be able to deploy to any environment at any time, create a single phase that has **Required to progress** set to `All must complete` and includes all your environments.

9. Each phase of the Lifecycle can have its own retention policy defined. Set the retention policy for the phase if you don't want it to inherit the retention policy defined for the entire Lifecycle.
10.
Add as many additional phases as you need.

11. Click **SAVE**.

After you have defined your lifecycles, they become available to your projects. Projects can be deployed to any environment in their lifecycle.

:::figure
![](/docs/img/releases/lifecycles/images/lifecycle-deployment-process.png)
:::

## Default lifecycle \{#default-lifecycle}

Octopus creates a default lifecycle for you. To view it, navigate to **Deploy ➜ Manage ➜ Lifecycles**, and it will be in the list named **Default Lifecycle**:

:::figure
![Default Lifecycle Library view](/docs/img/releases/lifecycles/images/default-lifecycle.png)
:::

The phases shown are created implicitly by the default lifecycle. By convention, the default lifecycle will create one phase per environment. They appear in the same order the environments are listed on the environments page. To view the default conventions applied, click on the lifecycle, and the information appears in the **Phases** section:

:::figure
![Default Lifecycle Library view](/docs/img/releases/lifecycles/images/default-lifecycle-default-conventions.png)
:::

### Update the default lifecycle

The default lifecycle handles most cases for small or straightforward configurations. When you create a new environment, it's automatically included in the default lifecycle. This also means that if you reorder the environments, the order of the phases will also change to match. These conventions can be helpful, but can sometimes lead to performance problems.

:::div{.hint}
Try to keep the number of environments in Octopus under ten. Having fewer environments keeps the number of phases in the default lifecycle low.
:::

We recommend updating the default lifecycle to define the phases you need. This makes configuring and maintaining your Octopus Server easier. In the next section, we look at configuring the default lifecycle to add your own phases.

### Adding a phase to the default lifecycle

You can define your own phases for the default lifecycle.
This helps prevent too many phases from being added automatically. To add a new phase, click **ADD PHASE** in the default lifecycle. Here, we are creating a phase named **Development** and adding the Dev environment to the phase:

:::figure
![Add Dev lifecycle phase](/docs/img/releases/lifecycles/images/default-lifecycle-add-dev-phase.png)
:::

This phase keeps the default option where deployments to the environment are queued manually. The Required to progress and Retention policy settings are also left at their default values.

:::div{.hint}
Phase names usually match the environments they contain. While this is a good practice, it's not a rule.
:::

You can repeat this process to create extra phases. In this example, we create phases for Testing, Staging, and Production.

:::figure
![Default lifecycle phases added](/docs/img/releases/lifecycles/images/default-lifecycle-phases-added.png)
:::

This allows you to explicitly configure the default lifecycle for deploying your software.

## Examples \{#lifecycle-examples}

In this section, we cover some lifecycle examples and their included phases.

### Hotfix lifecycle

A hotfix lifecycle is useful when you have a critical bug fix that needs to be deployed quickly. In this scenario, lower environments such as Development and Testing are skipped. It's recommended to follow good deployment practices and validate any changes before pushing them to production. To match this, a hotfix lifecycle usually has just two phases: Staging and Production. Software with the bug fix is validated in Staging and then promoted to Production. Your lifecycle may be different to reflect how you decide to handle hotfixes.

:::figure
![Hotfix lifecycle](/docs/img/releases/lifecycles/images/hotfix-lifecycle.png)
:::

### Maintenance lifecycle

:::div{.success}
**Octopus 2019.10** introduced [Runbooks](/docs/runbooks) as an alternative to having a maintenance lifecycle. They allow you to automate routine maintenance and emergency operations tasks.
:::

A Maintenance lifecycle can be used for projects that run maintenance tasks such as backups or software upgrades. This lifecycle can be used for any tasks that you want to run regularly with the same benefits that Octopus provides for your application deployments.

It typically consists of just one phase and one environment, also called Maintenance. You can add this environment to all the deployment targets you want to run these tasks against. You can also split the tasks across the Development, Testing, Staging, and Production environments if you want to run them against targets in those environments at different times.

:::figure
![Maintenance lifecycle](/docs/img/releases/lifecycles/images/maintenance-lifecycle.png)
:::

## Removing lifecycles

For projects using Config as Code, you need to take care not to delete any lifecycles required by your deployments. See our [core design decisions](/docs/projects/version-control/unsupported-config-as-code-scenarios#core-design-decision) for more information.

## Recommendations \{#lifecycle-recommendations}

When configuring your lifecycles, here are some tips to consider:

- Update the default lifecycle to define the phases you need. This makes configuring and maintaining your Octopus Server easier.
- Keep the number of environments under ten to keep the phases added by the default lifecycle low.
- Create a lifecycle for any projects that need a different promotion flow between environments. Remember to define phases for the lifecycle.
- Set specific retention policies for your lifecycles. This will prevent keeping releases and files forever, reducing disk and database usage.

# Configuring Active Directory Federation Services

Source: https://octopus.com/docs/security/authentication/oidc-authentication/configuring-adfs.md

You can use Active Directory Federation Services (AD FS) to authenticate when logging in to Octopus Server. To use AD FS authentication with Octopus, you will need to do the following:

1.
Configure AD FS to trust your Octopus Deploy instance by setting it up as a client application in your AD FS configuration tool. 2. Optionally, configure the claims that will be sent back from AD FS to Octopus Deploy (for creating or locating the Octopus user). 3. Configure your Octopus Deploy instance to trust and use AD FS as an Identity Provider. ## Configure AD FS You need to configure AD FS to trust your instance of Octopus Deploy by configuring a client application in the AD FS configuration tool. :::div{.info} This guide assumes that you have Active Directory Federation Services installed and running on a server that you have administrative access to. For more information, see the [Microsoft documentation on AD FS](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/ad-fs-overview). ::: ### Registering a new client application 1. Log in to your AD FS server. Open the Start menu and search for "AD FS Management". :::figure ![AD FS Management](/docs/img/security/authentication/oidc-authentication/images/adfs-start-menu.png) ::: 2. Expand "AD FS", select "Application Groups", and choose "Add Application Group". Choose "Server application" and give the group a useful name (e.g., "Octopus Deploy"). :::figure ![Add an Application Group](/docs/img/security/authentication/oidc-authentication/images/adfs-add-application-group.png) ::: 3. Click "Next". This page configures the first application in the group. Please make a note of the **Client ID**; you will need it later. Give the application a useful name and description, then add `https://your-octopus-url/api/users/authenticatedToken/GenericOidc` as a **Redirect URI**. :::figure ![Add an Application](/docs/img/security/authentication/oidc-authentication/images/adfs-add-application.png) ::: 4. Click "Next". This page specifies the credentials that Octopus Deploy will use to authenticate with AD FS. Select "Generate a shared secret". Securely retain this **Client Secret**; you will need it later. 
:::figure ![Generate a shared secret](/docs/img/security/authentication/oidc-authentication/images/adfs-generate-shared-secret.png) ::: 5. Click "Next", confirm the information is correct, and click "Next" one last time. You should see a message indicating that the Application Group has been successfully created. ### Configuring claims By default, AD FS returns only the `userinfo` profile unless the `resource` indicator is provided in the authorization request. The ID token will contain limited information about the user, such as their email address, their `DOMAIN\username`, their SID, and a unique identifier. If you require more, you will need to customize the claims in the ID token by: 1. Creating a "Relying Party Trust" with a "Claims Issuance Policy" 2. Authorizing the client application that represents Octopus Deploy for the relying party trust, and 3. Specifying the identifier of the relying party trust as the **Resource** in the OpenID Connect configuration in Octopus Deploy #### Creating a Relying Party Trust :::div{.info} Microsoft defines ["Relying Party Trust" here](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/technical-reference/understanding-key-ad-fs-concepts). ::: 1. In "AD FS Management", expand "AD FS", select "Relying Party Trusts", right-click, and select "Add Relying Party Trust". 2. Choose "Claims aware" and click "Start". 3. Choose "Enter data about the relying party manually" and click "Next". 4. Specify a display name and click "Next". 5. Click "Next" to skip providing a token encryption certificate. 6. Click "Next" to skip providing optional protocol URLs. 7. Enter `https://your-octopus-url` as the "Relying party trust identifier". This must be an absolute URL, and should resolve to your Octopus Deploy instance. Click "Next". 8. Select a suitable access control policy for the users in Active Directory that will be allowed to sign in to Octopus Deploy, and click "Next" 9. 
Review the settings you have selected and click "Next" if they are correct. The relying party trust is now created. To customize the claims that will be sent to Octopus Deploy, you can leave "Configure claims issuance policy for this application" checked when you press close. #### Creating a Claims Issuance Policy This example will show how to include the user's full name in the `name` claim so that Octopus Deploy can set it as the user's display name. If you need to send other claims to Octopus Deploy, refer to the [AD FS documentation on configuring claims](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-claim-rules). 1. In "AD FS Management", expand "AD FS", and select "Relying Party Trusts". Right-click on the appropriate relying party trust and select "Edit Claim Issuance Policy". 2. Click "Add Rule..." 3. Choose "Send Claims Using a Custom Rule" and click "Next" 4. Name the rule "Send user display name" and paste the following text into the "Custom rule" box: ```text c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"] => issue(store = "Active Directory", types = ("name"), query = ";displayName;{0}", param = c.Value); ``` :::div{.info} The above text is AD FS [claim rule language](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/technical-reference/the-role-of-the-claim-rule-language). ::: 5. Click "Finish" ### Authorizing the client application for the relying party trust For Octopus Deploy to specify the `resource` indicator of the relying party trust and receive the customized claims, the client application representing Octopus Deploy needs to be permitted to use the relying party trust with the scope `allatclaims`. 
On the AD FS server, run the following PowerShell command: ```powershell Grant-AdfsApplicationPermission -ClientRoleIdentifier ClientId -ServerRoleIdentifier "https://your-octopus-url" -ScopeNames @("openid", "profile", "email", "allatclaims") ``` - `ServerRoleIdentifier` is the "Relying party trust identifier", and will be used for the **Resource** in the Octopus Deploy configuration. - `ClientRoleIdentifier` is the **Client ID** of the client application. You can double-check that it was applied correctly by running: ```powershell Get-AdfsApplicationPermission -ServerRoleIdentifier "https://your-octopus-url" ``` ## Configure Octopus Server 1. Navigate to **Configuration ➜ Settings ➜ OpenID Connect** and populate the following fields: - **Enabled** should be set to `Yes`. - **Role Claim Type** is optional. If you have configured a [claims issuance policy](#creating-a-claims-issuance-policy) that returns group information, set this to be the name of the claim containing the group information. - **Username Claim Type** is optional. If you leave it unset, the `sub` claim will be used for the username, which is a unique identifier of the user. Some other options: - To use the user's email, set `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` - To use the user's `DOMAIN\username`, set `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` - To use the user's SID, set `sid` - **Resource** should be `https://your-octopus-url`. This is the relying party trust identifier that you assigned to the relying party trust. - **Scopes** should be left as the default of `openid profile email`. - **Display Name** can be used to customize the appearance of the button on the Octopus Deploy login screen. Use a name that your users will recognize for this identity provider. - **Issuer** should be a URL like `https://adfs.your-domain.com/adfs` - **Client ID** and **Client Secret** should be the values you noted when creating the client application. 
**Client Secret** can't be retrieved from AD FS; if you lose it, you will need to assign a new one.

:::div{.hint}
Note that the value of **Client Secret** cannot be retrieved once set; it can only be changed or deleted.
:::

- **Allow Auto User Creation** determines if Octopus Deploy should automatically create user accounts, or only allow authentication for users that already exist in Octopus Deploy.

2. Click **Save** to apply the changes.
3. If you sign out of Octopus Deploy, you should now see a new button on the login screen to authenticate with the OIDC provider.

### Getting permissions

If you are installing a clean instance of Octopus Deploy, you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command:

```powershell
Octopus.Server.exe admin --username USERNAME --email EMAIL
```

The most important part of this command is the email, as usernames are not necessarily included in the claims from the external providers. When the user logs in, the matching logic must be able to match their user record using the email from the external provider, or they will not be granted permissions.

## Troubleshooting

If you are having difficulty configuring Octopus to authenticate with AD FS, check your [server logs](/docs/support/log-files) for warnings.

### Triple-check your configuration

Security-related configuration is sensitive to small differences. Make sure:

- You don't have any typos or copy-paste errors.
- Values match exactly; they are case-sensitive.
- Trailing slash characters match exactly (remove or add them as needed).

### Check the OpenID Connect metadata is working

You can see the OpenID Connect metadata by going to the Issuer address in your browser and adding `/.well-known/openid-configuration` to the end. This would be something like `https://adfs.your-domain.com/adfs/.well-known/openid-configuration`.
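A detail that often causes login failures is that the `issuer` value inside that metadata document must match the **Issuer** configured in Octopus exactly, including any trailing slash. A small Python sketch of the check (the URLs are placeholders for your AD FS host, not real endpoints):

```python
import json
from urllib.request import urlopen

def issuer_matches(metadata, configured_issuer):
    # OIDC issuer comparison is an exact string match; trailing slashes matter
    return metadata.get("issuer") == configured_issuer

# Fetch the metadata from your AD FS server (placeholder URL):
# metadata = json.load(urlopen("https://adfs.your-domain.com/adfs/.well-known/openid-configuration"))
# issuer_matches(metadata, "https://adfs.your-domain.com/adfs")

print(issuer_matches({"issuer": "https://adfs.example.com/adfs"}, "https://adfs.example.com/adfs"))   # True
print(issuer_matches({"issuer": "https://adfs.example.com/adfs/"}, "https://adfs.example.com/adfs"))  # False
```

If the two strings differ, correct the **Issuer** in Octopus rather than normalizing it away, since the OpenID Connect validation performed at sign-in uses the exact value.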
### Contact Octopus Support

If you aren't able to resolve the authentication problems yourself using these troubleshooting tips, please reach out to our [support team](https://octopus.com/support) with:

1. The contents of your OpenID Connect Metadata or the link to download it (see above).
2. A copy of the decoded payload for some security tokens (see above), ideally including one that works as expected and one that does not.
3. A screenshot of the Octopus User Accounts, including their username, email address, and name.

# Username and Password

Source: https://octopus.com/docs/security/authentication/username-password.md

:::div{.hint}
Username and Password authentication can only be configured for Octopus Server. For [Octopus Cloud](/docs/octopus-cloud/), authentication using this provider is supported through [Octopus ID](/docs/security/authentication/octopusid-authentication/). See our [authentication provider compatibility](/docs/security/authentication/auth-provider-compatibility) section for further information.
:::

Octopus provides a Username and Password authentication provider that allows you to create user accounts in Octopus manually, without requiring an external authentication provider.

When Username and Password authentication is enabled, the sign-in page for the Octopus Web Portal will present users with the option to sign in with an Octopus account:

:::figure
![Username and Password login screen](/docs/img/security/authentication/images/username-password-login.png)
:::

## Security considerations

Username and Password authentication has limited built-in security controls compared to [IdP-based](/docs/security/authentication#identity-provider-based-idp-authentication) or [directory-based](/docs/security/authentication#directory-based-authentication) authentication:

- Passwords do not expire and there is no password history enforcement, so previously used passwords can be reused.
- Accounts are locked for 10 minutes after 9 failed login attempts for the same username from the same IP address. The threshold and duration are not configurable and no administrative unlock is available. For environments that require password expiry, password history, or configurable lockout policies, we recommend using IdP or directory-based authentication instead. ## Enable username and password authentication via UI You can enable Username and Password authentication from the Octopus Web Portal by navigating to **Configuration ➜ Settings ➜ Username / Password**. From there you can click the **Is Enabled** checkbox to enable or disable the Username and Password provider. ![Username and Password settings](/docs/img/security/authentication/images/enable-username-password-1.png) ![Enable Username and Password checkbox](/docs/img/security/authentication/images/enable-username-password-2.png) The Username and Password provider will now be activated and available for Octopus users. ## Configuring username and password login Octopus Server can be configured to enable or disable username and password authentication via the command line, as follows: ```powershell Octopus.Server.exe configure --instance=[your_instance_name] --usernamePasswordIsEnabled=true ``` ## Managing user permissions When a new Octopus user is created, they are automatically added to the **Everyone** team. You can manage Octopus users by navigating to **Configuration ➜ Users**.
:::figure ![Managing users](/docs/img/security/authentication/images/username-password-managing-users.png) ::: With any Octopus user, you can [assign user accounts to different teams](/docs/security/users-and-teams) to give them permissions to view projects or environments, or any additional permissions they may need: ![User permissions](/docs/img/security/authentication/images/username-password-user-permissions.png) # How to regenerate certificates with Octopus Server and Tentacle Source: https://octopus.com/docs/security/octopus-tentacle-communication/regenerate-certificates-with-octopus-server-and-tentacle.md :::div{.warning} By default, Octopus will use a 100-year, self-signed certificate for Octopus - Tentacle communication. It is unlikely you will need to follow this process outside of extenuating circumstances. If you are looking to update your SSL certificate for the Octopus web portal, please see [Updating the SSL Certificate of an existing web portal binding](https://octopus.com/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https#updating-the-ssl-certificate-of-an-existing-web-portal-binding). If in doubt, please reach out to our [support team](https://octopus.com/support). ::: Octopus uses self-signed certificates to securely communicate between Tentacles and the Octopus Server. Prior to Octopus 3.14, the certificates were SHA1, and following the [shattered](https://octopus.com/blog/shattered) exploit, the certificate generation was upgraded to SHA256. This guide walks through the process of regenerating certificates to use the new algorithm. This is useful for upgrading from SHA1 to SHA256 and for rotating certificates. For more information on why Octopus uses self-signed certificates, please see the blog post [Why Octopus uses self-signed certificates](https://octopusdeploy.com/blog/why-self-signed-certificates). The algorithm used by the Server certificate can be viewed on the **Configuration ➜ Thumbprint** page.
If the algorithm contains `sha1`, we recommend regenerating your certificate. :::div{.warning} **Updating an existing Octopus Server or Tentacle** It's important to consider the impact of updating an existing Octopus Server or Tentacle as changes are required to ensure each component trusts the other. If there is a mismatch between the certificate and the expected thumbprint, communication between the components will not be possible and must be resolved manually. Read the information below carefully. ::: ## Configuring Octopus Server to use a new certificate {#ConfiguringOctopusServerToUseANewCertificate} At a high level, changing the certificate on an Octopus Server involves the following steps: * Backup your existing certificate. * Generate a new certificate to a file. * Make the Tentacles trust the new certificate. * Replace the certificate on the Octopus Server. * Remove the old trusted certificate from the Tentacles. :::div{.hint} At present, this process is more manual than we would prefer, but we aim to improve it over time. ::: 1. Backup your existing certificate by executing the following statement at an elevated command line on the server, from the directory where Octopus Deploy is installed (`C:\Program Files\Octopus Deploy\Octopus` by default):
Windows ```powershell Octopus.Server.exe export-certificate --instance OctopusServer --export-pfx="C:\PathToCertificate\oldcert.pfx" --pfx-password MySecretPassword ```
Linux ```bash ./Octopus.Server export-certificate --instance OctopusServer --export-pfx="/tmp/oldcert.pfx" --pfx-password MySecretPassword ```
This will display output similar to the following: ``` Checking the Octopus Master Key has been configured. ... Exporting certificate... The certificate has been written to C:\PathToCertificate\oldcert.pfx. ``` Save this certificate and the specified password somewhere secure. :::div{.hint} If you see a warning message about `The X509 certificate CN=Octopus Portal was loaded but the private key was not loaded.`, you are most likely not running with elevated permissions. ::: 2. Execute the following statement at the command line on the same server:
Windows ```powershell Octopus.Server.exe new-certificate --instance OctopusServer --export-pfx="C:\PathToCertificate\newcert.pfx" --pfx-password MySecretPassword ```
Linux ```bash ./Octopus.Server new-certificate --instance OctopusServer --export-pfx="/tmp/newcert.pfx" --pfx-password MySecretPassword ```
This will display output similar to the following: ``` Checking the Octopus Master Key has been configured. ... Generating certificate... The Octopus Server currently uses a certificate with thumbprint: 1111111111111111111111111111111111111111 A new certificate has been generated with thumbprint: 1234567890123456789012345678901234567890 The new certificate has been written to C:\PathToCertificate\newcert.pfx. ``` Take note of the new certificate's thumbprint (`1234567890123456789012345678901234567890` in the output above). We will use this thumbprint when we update the Tentacles to trust the new certificate. 3. The next step is to update all Tentacles to trust the new certificate. At present, this functionality is not exposed in the UI; it has to be done via the command line. On each Tentacle machine, execute the following command to trust the thumbprint of the newly-created certificate in the directory that the Tentacle agent is installed (Defaults: `C:\Program Files\OctopusDeploy\Tentacle\` and `/opt/octopus/tentacle/`):
Windows - Listening Tentacles ```powershell Tentacle.exe configure --trust="1234567890123456789012345678901234567890" ```
Windows - Polling Tentacles ```powershell Tentacle.exe update-trust --oldThumbprint "1111111111111111111111111111111111111111" --newThumbprint "1234567890123456789012345678901234567890" ``` This will display output similar to the following: ``` Updating Octopus servers thumbprint from 1111111111111111111111111111111111111111 to 1234567890123456789012345678901234567890 Finding existing Octopus Server registrations trusting the thumbprint 1111111111111111111111111111111111111111 and updating them to trust the thumbprint 1234567890123456789012345678901234567890: Updating polling tentacle https://your.octopus.app:10943/ 1111111111111111111111111111111111111111 - changing to trust 1234567890123456789012345678901234567890 These changes require a restart of the Tentacle. ```
Linux - Listening Tentacles ```bash ./Tentacle configure --trust="1234567890123456789012345678901234567890" ``` This will display output similar to the following: ``` Adding 1 trusted Octopus Servers These changes require a restart of the Tentacle. ```
Linux - Polling Tentacles ```bash ./Tentacle update-trust --oldThumbprint "1111111111111111111111111111111111111111" --newThumbprint "1234567890123456789012345678901234567890" ``` This will display output similar to the following: ``` Updating Octopus servers thumbprint from 1111111111111111111111111111111111111111 to 1234567890123456789012345678901234567890 Finding existing Octopus Server registrations trusting the thumbprint 1111111111111111111111111111111111111111 and updating them to trust the thumbprint 1234567890123456789012345678901234567890: Updating polling tentacle https://your.octopus.app:10943/ 1111111111111111111111111111111111111111 - changing to trust 1234567890123456789012345678901234567890 These changes require a restart of the Tentacle. ```
You will need to restart each Tentacle at this point:
Windows ```powershell tentacle.exe service --restart ```
Linux ```bash ./Tentacle service --restart ```
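If you script these trust updates across a fleet of Tentacles, it is worth validating thumbprints before passing them to the CLI, since a mistyped thumbprint breaks communication until fixed manually. A small hedged helper (the function name is illustrative; an Octopus certificate thumbprint is 40 hexadecimal characters, the SHA-1 hash of the certificate, regardless of the certificate's signature algorithm):

```python
import re

def is_valid_thumbprint(value: str) -> bool:
    # A certificate thumbprint is 40 hex characters; reject anything else
    # before handing it to `Tentacle configure --trust` or `update-trust`.
    return re.fullmatch(r"[0-9A-Fa-f]{40}", value) is not None

assert is_valid_thumbprint("1234567890123456789012345678901234567890")
assert not is_valid_thumbprint("1234")  # too short to be a thumbprint
```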
4. Now that the Tentacles all trust the new certificate, we can update the Octopus Server certificate to the new one we created earlier. In the command prompt on the Octopus Server run:
Windows ```powershell Octopus.Server.exe import-certificate --instance OctopusServer --from-file="C:\PathToCertificate\newcert.pfx" --pfx-password MySecretPassword Octopus.Server.exe service --instance OctopusServer --restart ```
Linux ```bash ./Octopus.Server import-certificate --instance OctopusServer --from-file="/tmp/newcert.pfx" --pfx-password MySecretPassword ./Octopus.Server service --instance OctopusServer --restart ```
This will display something like the following: ``` Importing the certificate stored in PFX file in C:\PathToCertificate\newcert.pfx using the provided password... Checking the Octopus Master Key has been configured. ... The certificate CN=Octopus Portal was updated; old thumbprint = 1111111111111111111111111111111111111111, new thumbprint = 1234567890123456789012345678901234567890 Certificate imported successfully. These changes require a restart of the Octopus Server. ``` 5. Run a health check on the associated Tentacles and confirm they are all healthy. 6. Now that the Octopus Server is using the new certificate, we can stop the Listening Tentacles trusting the old one. On each of the Listening Tentacle machines run:
Windows ```powershell C:\Program Files\OctopusDeploy\Tentacle\Tentacle.exe configure --instance Tentacle --remove-trust="1111111111111111111111111111111111111111" C:\Program Files\OctopusDeploy\Tentacle\Tentacle.exe service --instance Tentacle --restart ```
Linux ```bash ./Tentacle configure --instance Tentacle --remove-trust="1111111111111111111111111111111111111111" ./Tentacle service --instance Tentacle --restart ```
7. Run a health check, and confirm all Tentacles are healthy. 8. Confirm on the **Configuration ➜ Thumbprint** page that the new certificate is using the `sha256` algorithm. ## Configuring a Tentacle to use a new certificate {#ConfiguringATentacleToUseANewCertificate} 1. To update the certificate that is used by a Tentacle, run the following commands on the Tentacle machine:
Windows ```powershell C:\Program Files\OctopusDeploy\Tentacle\Tentacle.exe new-certificate C:\Program Files\OctopusDeploy\Tentacle\Tentacle.exe service --restart ```
Linux ```bash ./Tentacle new-certificate ./Tentacle service --restart ```
2. After this is generated on the Tentacle, it can be updated on the Octopus Server. Navigate to **Infrastructure ➜ Deployment Targets** and update the Thumbprint for the updated target. # Service accounts Source: https://octopus.com/docs/security/users-and-teams/service-accounts.md When using Octopus Deploy it is common to have other automated services control certain aspects of your deployments. Some examples: - You might configure your [build server](/docs/octopus-rest-api) to push deployment packages to the built-in package feed, create releases, and deploy them to your test environment after each successful build. - You might be deploying to an [elastic environment](https://octopus.com/blog/rfc-cloud-and-infrastructure-automation-support) and want to add/remove deployment targets dynamically via the [Octopus API](/docs/octopus-rest-api). - You might have your own dashboard solution and want to get data directly from the [Octopus API](/docs/octopus-rest-api). It is best to create **Service accounts** for this purpose to provide each service with the least privileges required for the tasks each service will perform. :::div{.hint} **Service accounts** are **API-only accounts** that can be assigned permissions in the same way you do for normal user accounts, but are prevented from using the Octopus Web Portal. Service accounts authenticate with the Octopus API using [OpenID Connect](/docs/octopus-rest-api/openid-connect) or an [Octopus API Key](/docs/octopus-rest-api/how-to-create-an-api-key). ::: ## Creating a service account {#ServiceAccounts-CreatingAServiceAccount} [Getting Started - Service Accounts](https://www.youtube.com/watch?v=SMsZMpUwCZc) Creating a new Service account is very similar to creating a new User account: 1. Go to **Configuration ➜ Users** and click **Create user**. 2. Check **The user is a service account** to indicate this will be a Service account. 3. 
Enter a unique **Username** and **Display name** so you can distinguish this Service account. 4. Save the user to create the Service account. :::figure ![Create service account](/docs/img/security/users-and-teams/images/create-service-account.png) ::: :::div{.hint} This Service account is not very useful until it [belongs to one or more teams](/docs/security/users-and-teams/), and has one or more [OpenID Connect Identities](/docs/octopus-rest-api/openid-connect) or [Octopus API keys](/docs/octopus-rest-api/how-to-create-an-api-key) associated with it. ::: ## OpenID Connect (OIDC) You can use [OpenID Connect (OIDC)](/docs/octopus-rest-api/openid-connect) to automate Octopus with another service without needing to provision or manage API Keys. To do this you configure a specific *OIDC Identity* for the service which allows it to connect to Octopus securely. The service then exchanges an ID token with Octopus for a short-lived access token which it can then use for API requests. ## API Keys :::figure ![Service account API Key](/docs/img/security/users-and-teams/images/service-account-apikey.png) ::: Once you have created an [Octopus API key](/docs/octopus-rest-api/how-to-create-an-api-key/) and [added this Service account to a team](/docs/security/users-and-teams), you can start using this Service account to automate Octopus with another service. ## Logins If you are using Active Directory there is also the option of using an Active Directory account's group membership to determine the service account's Team membership. To use this option, all you need to do is add the Active Directory account as an external login entry for the service account. ![Add Active Directory login](/docs/img/security/users-and-teams/images/add-adlogin.png) # Copy the working directory Source: https://octopus.com/docs/support/copy-working-directory.md It can be frustrating when a deployment step isn't working as expected. Often the working directory is deleted before it can be inspected.
A handy way to debug is by using the project variable `Octopus.Calamari.CopyWorkingDirectoryIncludingKeyTo`, which, if set to a file path, will cause the [Calamari](/docs/octopus-rest-api/calamari) working directory to be copied to the configured location. The file path is local to the deployment target, so setting the value to `c:\temp` or `#{Octopus.Agent.ProgramDirectoryPath}/#{Octopus.Release.Number}` will copy the working directory to these folders on each of the targets. :::div{.warning} The copied directory will include a file which contains the secret one-time key passed to Calamari to decrypt the sensitive variables used in the deployment. This directory (or at least the `Variable.secret` file) should be deleted once no longer required. ::: This variable was created primarily for use by Octopus staff during development. We have documented it publicly as it has proven useful to our customers on occasion. Please use it only for debugging purposes. Do not rely on this behavior as part of your deployment process, as there is no guarantee it will not be removed in the future. # Upgrading major releases of Octopus Deploy Source: https://octopus.com/docs/administration/upgrading/guide/upgrading-major-releases.md A major release of Octopus Deploy is when the first number in the version is incremented. For example, 2020.x.x to 2021.x.x. Major releases require a bit more planning than minor or patch releases. This guide will help minimize risk. ## System Integrity Check Before performing any upgrade steps, we highly recommend performing a [System Integrity Check](/docs/administration/managing-infrastructure/diagnostics) on your live instance database. This is so we can check that the Database Schema is in the expected condition for the upgrade. If the integrity check passes, you are good to start the upgrade process.
If it fails, please contact [support](https://octopus.com/support) with the [raw output of the task](/docs/support/get-the-raw-output-from-a-task), and we can get that fixed for you. ## Mitigate risk with a test instance Major release upgrades typically carry a major change, making rollbacks tricky. For example: - **2019.1.0**: Introduced spaces, with several API changes and a new team model. - **2020.1.0**: Deprecated support for SQL Server 2008 R2 and 2012. Except in rare cases, a standard in-place upgrade will work. However, there are other considerations, such as integrations, API scripts, and so on to consider. A test instance, be it a full clone of your main instance, or only a subset of your main instance, will allow you to test the upgrade itself, along with testing deployments and other integrations before upgrading your main instance. If anything goes wrong, you have time to fix it. In general, the process looks like this: 1. Create a test instance using the same version as your main instance. 1. Upgrade the test instance to the latest version of Octopus Deploy. 1. Verify the test instance works as expected along with testing integrations. 1. Upgrade the main instance. Learn more about [creating a test instance](/docs/administration/upgrading/guide/creating-test-instance). ## Prep work Before starting the upgrade, it is critical to back up the master key and license key. If anything goes wrong, you might need these keys to do a restore. It is better to have the backup and not need it than need the backup and not have it. The master key doesn't change, while your license key changes, at most, once a year. Back them up once to a secure location and move onto the standard upgrade process. 1. Backup the Master Key. 1. Backup the License Key. ### Backup the Octopus Master Key Octopus Deploy uses the Master Key to encrypt and decrypt sensitive values in the Octopus Deploy database. The Master Key is securely stored on the server, not in the database. 
If the VM hosting Octopus Deploy is somehow destroyed or deleted, the Master Key goes with it. To view the Master Key, you will need login permissions on the server hosting Octopus Deploy. Once logged in, open up the Octopus Manager and click the view master key button on the left menu. :::figure ![](/docs/img/shared-content/upgrade/images/view-master-key.png) ::: Save the Master Key to a secure location, such as a password manager or a secret manager. An alternative means of accessing the Master Key is to run the `Octopus.Server.exe show-master-key` from the command line. Please note: you will need to be running as an administrator to do that. :::figure ![](/docs/img/shared-content/upgrade/images/master-key-command-prompt.png) ::: ### Backup the License Key Like the Master Key, the License Key is necessary to restore an existing Octopus Deploy instance. You can access the License Key by going to **Configuration ➜ License**. If you cannot access your License Key, please contact our [support team](https://octopus.com/support) and they can help you recover it. ## Standard upgrade process The standard upgrade process is an in-place upgrade. In-place upgrades update the binaries in the install directory and update the database. The guide below includes additional steps to backup key components to make it easier to rollback in the unlikely event of a failure. ### Overview The steps for this are: 1. Download the latest version of Octopus Deploy. 1. Enable maintenance mode. 1. Backup the database. 1. Do an in-place upgrade. 1. Test the upgraded instance. 1. Disable maintenance mode. ### Downloading the latest version of Octopus Deploy The [downloads page](https://octopus.com/downloads) will always have the latest version of Octopus Deploy. If company policy dictates you install an older version, for example, the latest version is 2020.4.11, but you can only download 2020.3.x, then visit the [previous downloads page](https://octopus.com/downloads/previous). 
### Maintenance mode Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode go to **Configuration ➜ Maintenance** and click the button `Enable Maintenance Mode`. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`. ### Backup the SQL Server database Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share. ```sql BACKUP DATABASE [OctopusDeploy] TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak' WITH FORMAT; ``` The `BACKUP DATABASE` T-SQL command has dozens of options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use. ### Octopus Deploy components Before performing an in-place upgrade, it is essential to note the various components of Octopus Deploy. Most in-place upgrades will only change the install location and the SQL Server database. Very rarely will an in-place upgrade change the home folder or server folders. The Windows Service is split across multiple folders to make upgrading easy and low risk. - **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`. The install location contains the binaries for Octopus Deploy and is updated by the MSI. - **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location. The `Octopus Manager` invokes those database scripts automatically. - **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance.
The home folder is separate from the install location to make it easier to upgrade, downgrade, uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process. - **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The `Octopus Manager` stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process. - **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade. - **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until _after_ the Octopus Deploy server is upgraded. Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient. - **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target. ### Install the newer version of Octopus Deploy Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`.
### Validation checks Octopus Deploy will perform validation checks before upgrading the database. These validation checks include (but are not limited to): - Verify the current license will work with the upgraded version. - Verify the current version of SQL Server is supported. If the validation checks fail, don't worry; install the [previously installed version of Octopus Deploy](https://octopus.com/downloads/previous), and you will be back up and running quickly. ### Database upgrades Each release of Octopus Deploy contains 0 to N database scripts to upgrade the database. The scripts are run in a transaction; when an error occurs, the transaction is rolled back. If a rollback does happen, gather the logs and send them to our [support team](https://octopus.com/support) for troubleshooting. You can install the previous version to get your CI/CD pipeline back up and running. If you use PaaS to host your Octopus Deploy database, consider scaling up the database prior to the upgrade, especially if the upgrade spans a large version range and will therefore have an increased number of database scripts to run. ### Testing the upgraded instance It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should: - Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments. - Check previous deployments, and ensure all the logs and artifacts appear. - Ensure all the project and tenant images appear. - Run any custom API scripts to ensure they still work. - Verify a handful of users can log in, and that their permissions are similar to before. - Test build server integration; ensure all existing build servers can push to the upgraded server. We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration.
If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation. ### Upgrade High Availability In general, upgrading a highly available instance of Octopus Deploy follows the same steps as a typical in-place upgrade. Download the latest MSI and install that. The key difference is to upgrade only one node first, as this will upgrade the database, then upgrade all the remaining nodes. :::div{.warning} Attempting to upgrade all nodes at the same time will most likely lead to deadlocks in the database. ::: The process should look something like this: 1. Download the latest version of Octopus Deploy. 1. Enable maintenance mode. 1. Stop all the nodes. 1. Backup the database. 1. Select one node to upgrade, and wait until it has finished. 1. Upgrade all remaining nodes. 1. Start all remaining stopped nodes. 1. Test the upgraded instance. 1. Disable maintenance mode. :::div{.warning} As of **2023.2.9755**, a database upgrade will abort if Octopus detects there are nodes still running. Ensure all nodes are properly shut down and try again. ::: :::div{.warning} A small outage window will occur when upgrading a highly available Octopus Deploy instance. The outage window will happen between when you shut down all the nodes and upgrade the first node. The window duration depends on the number of database changes, the size of the database, and compute resources. It is highly recommended to [automate your upgrade process](/docs/administration/upgrading/guide/automate-upgrades) to reduce that outage window. ::: ## Rollback failed upgrade While unlikely, an upgrade may fail. It could fail on a database upgrade script, a SQL Server version that is no longer supported, license check validation, or plain old bad luck. Depending on what failed, you have a decision to make. If the cloned instance upgrade failed, it might make sense to start all over again.
Or, it might make sense to roll back to a previous version. In either case, if you decide to roll back, the process will be: 1. Restore the database backup. 1. Restore the folders. 1. Download and install the previously installed version of Octopus Deploy. 1. Do some sanity checks. 1. If maintenance mode is enabled, disable it. ### Restore backup of database Use SQL Server Management Studio's (SSMS) built-in restore backup functionality. SSMS provides a wizard to make this process as pain-free as possible. Be sure to consult a DBA or read up on [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-database-to-a-new-location-sql-server?view=sql-server-ver15). ### Restore Octopus folders Octopus Deploy expects the artifacts, packages, tasklog, and event export folders to be in a specific format. The best chance of success is to: 1. Copy the existing folders to a safe location. 2. Delete the contents of the existing folders. 3. Copy the contents of the existing folders from the backup. 4. Once the rollback is complete, delete the copy from the first step. ### Find and download the previous version of Octopus Deploy Octopus Deploy stores the installation history in the database. Run this query on your Octopus Deploy database if unsure as to which version to download: ```sql SELECT TOP 5 [Version] FROM [dbo].[OctopusServerInstallationHistory] ORDER BY Installed desc ``` When you know the version to install, go to the [previous downloads page](https://octopus.com/downloads/previous). ### Installing the previous version The key configuration items, such as connection string, files, instance information, etc., are not stored in the install directory of Octopus Deploy. To install the previous version, first, uninstall Octopus Deploy. Uninstalling will only delete items from the install directory, or `C:\Program Files\Octopus Deploy\Octopus`. Then run the MSI to install the previous version. 
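This guide defines a major release as an increment of the first version number (for example, 2020.x.x to 2021.x.x). When automating upgrade planning, a small helper can classify a version jump under that definition; a sketch (the function name is illustrative, not part of the Octopus tooling):

```python
def is_major_upgrade(current: str, target: str) -> bool:
    # A major release of Octopus Deploy increments the first version component,
    # e.g. 2020.4.11 -> 2021.1.0; minor and patch releases leave it unchanged.
    return int(target.split(".")[0]) > int(current.split(".")[0])

assert is_major_upgrade("2020.4.11", "2021.1.0")      # major: plan carefully
assert not is_major_upgrade("2020.3.0", "2020.4.11")  # minor: standard upgrade
```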
# Upgrading from Octopus 2.6.5 to 2018.10 LTS Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts.md :::div{.success} Please read our guide for [upgrading older versions of Octopus](/docs/administration/upgrading/legacy) before continuing. ::: This guide will walk you through the steps to upgrade from Octopus **2.6.5** to **2018.10 LTS**. This is the only supported upgrade path from Octopus 2.x and requires careful attention to detail. That being said, the vast majority of our customers have already upgraded using this battle-hardened guide, so it should be a smooth experience if you plan your upgrade and follow the steps carefully. ## Planning your upgrade There are two main parts to the upgrade: 1. **Install the Octopus Server and migrate your data** - we changed our data persistence to use Microsoft SQL Server so we need to migrate the data from your existing Octopus Server to the new database. 1. **Upgrade all your Tentacles** - we changed the communications protocol significantly, but we've provided an easy way for you to upgrade all your Tentacles using your existing Octopus Server. See [below](#tentacles). We recommend choosing from two different approaches for upgrading from **Octopus 2.6.5**: - Create a new Octopus Server and migrate to it. We recommend this approach. - Install over the top of your existing Octopus Server. ### Approach 1: Install the new version of Octopus on a new server, and migrate to it (recommended) If you are able to provision a new Octopus Server, this is the safest option. That way, if something goes wrong in the upgrade, it will be easy to discard the new server and start the process again. And when it works, you can decommission the old Octopus Server.
Read the full guide: [Upgrade with a new Server instance](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/upgrade-with-a-new-server-instance) ### Approach 2: In-place (Over the Top) upgrade of an existing server It is possible to install newer versions of Octopus over the top of an **Octopus 2.6** instance. You'll upgrade the Tentacles, then upgrade the Octopus Server. Read the full guide: [In place (over the top) upgrade](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/in-place-upgrade-install-over-2.6.5) ## Upgrading your existing Tentacles {#tentacles} We have significantly changed the communications protocol used by Tentacle. This means your 2.6 Tentacles won't be able to communicate with your new Octopus Server. Likewise, new Tentacles won't be able to communicate with your old Octopus Server. Once you upgrade, going back can be difficult. Please take time to plan your upgrade carefully using this guide. ### Small number of Tentacles > "I have an Octopus Server and a handful of Tentacles. I don't mind manually running the new Tentacle MSIs on each of my Tentacle machines." If you only have a small number of Tentacles, it's easiest to just download the new Octopus and Tentacle MSIs and install them manually. Read the full guide: [Manual upgrades for smaller instances](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/manual-upgrade) ### Lots of Tentacles > "I have lots of Tentacles; there's no way I'm manually updating them all!" Don't worry, we've got you covered! We built a tool called **Hydra** to help you upgrade all your Tentacles during the upgrade process. :::div{.warning} Please pay careful attention to the instructions in these guides; if you skip ahead and do the upgrade in the wrong order, you might be stuck upgrading all Tentacles manually!
::: # Migrating data from Octopus 2.6.5 to 2018.10 LTS Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/migrating-data-from-octopus-2.6.5-2018.10lts.md When upgrading from **Octopus 2.6** to **Octopus 2018.10 LTS** you can migrate your data. There are some points worth noting about the data migration process: - The data migration tool has been designed to perform a **one-time** migration from **Octopus 2.6** to **Octopus 2018.10 LTS** for each backup file. * Re-running the data migration will overwrite matching data. See [Importing](/docs/administration/data/data-migration) in the Data Migration page for more details on how data is imported. * Data is matched on name. Names are unique in Octopus. This allows backups from multiple Octopus Server instances to be combined into one **Octopus 2018.10 LTS** instance. You can import multiple backup files into an **Octopus 2018.10 LTS** instance, but where names match, the backup currently being imported is treated as the source of truth. - The built-in Octopus NuGet package repository is not migrated automatically - see [below](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/migrating-data-from-octopus-2.6.5-2018.10lts) for more details. - You can optionally limit the days of historical data to migrate - see [below](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/migrating-data-from-octopus-2.6.5-2018.10lts) for more details. :::div{.hint} **The migrator can take a long time** Please see our [tips for minimizing the migration duration](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/minimize-migration-time). ::: ## Importing your 2.6 backup into 2018.10 LTS To import your 2.6 Raven data into a 2018.10 LTS installation (generally this is run after a side-by-side upgrade) you need to select import from the Octopus Manager.
:::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3964992.png) ::: This will open up the importer. From here you select that you want to import from a 2.6 backup file. :::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3964993.png) ::: You need to select your most recent 2.6 backup file, and provide the Master Key associated with the backup you are importing. The next step lets you preview your import. :::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3964994.png) ::: When you deselect ***Preview only***, your import will run against the database. This cannot be reversed. The backup is treated as the truth, so any changes that have been made to the database (if this is not your first import) will be overwritten with the backup. :::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3964995.png) ::: If you need to use any of the options below to manage the data being imported, you need to use the Show Script feature to run the migration via console. :::figure ![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3964996.png) ::: ### Migrating the built-in Octopus NuGet package repository If you use the built-in [Octopus NuGet repository](/docs/packaging-applications/package-repositories) you will need to move the files from your 2.6 server to your 2018.10 LTS server. The package files are not included as part of the backup. In a standard **Octopus 2.6** install the files can be found under `C:\Octopus\OctopusServer\Repository\Packages`. You will need to transfer them to `C:\Octopus\Packages` on the new server. Once the files have been copied, go to **Library ➜ Packages ➜ Package Indexing** and click the `RE-INDEX NOW` button.
This process runs in the background, so if you have a lot of packages it could take a while (5-20 mins) to show in the UI or be usable for deployments. ### Migrating historical data By default, we migrate everything from your backup including all historical data. Learn about [minimizing migration time](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/minimize-migration-time). # Upgrading from Octopus 2.x to 2.6.5 Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-2.x-2.6.5.md :::div{.success} Please read our guide for [upgrading older versions of Octopus](/docs/administration/upgrading/legacy) before continuing. ::: Upgrading **Octopus 2.0** involves two major steps. - Upgrading the Octopus Server. - Upgrading Tentacles. Additional information on troubleshooting upgrades is below. ## Upgrading the Octopus Server To upgrade the Octopus Server, you will need to follow these steps: 1. Ensure you have a recent [database backup](/docs/administration/data/backup-and-restore) that you can restore in case anything goes wrong. 2. Download the [Octopus Deploy 2.6.5 Installer](https://octopus.com/downloads/2.6.5). 3. Run the installer and follow the prompts. :::div{.problem} **Changing installation paths** If you change the Octopus Server installation path (e.g. *C:\Program Files\Octopus Deploy\Server*) between upgrades, you will need to reconfigure the Windows service after the installer completes. In the Octopus Server Manager, choose the "Reinstall" button to the right of the service status. ::: When the installer finishes, Octopus Manager will appear. Make sure the Octopus service is running by clicking **Start**. :::figure ![](/docs/img/administration/upgrading/legacy/images/3277991.png) ::: ## Upgrading Tentacles After upgrading the Octopus Server, browse to the **Environments** tab in the Octopus Web Portal. You may need to press the "Check health" button to refresh the status of your Tentacles. 
If any of the Tentacle agents need to be updated, a message will appear: :::figure ![](/docs/img/administration/upgrading/legacy/images/3277990.png) ::: Click on the **Upgrade machines** button to have Octopus send the new Tentacle package to all machines. ## Troubleshooting When **Octopus 2.0** was first released, the MSI was set as a "per user" install. This means that if Joe installed Octopus, Mary would not see the start menu entries. For **Octopus 2.1**, we fixed the MSI and made it a "per machine" installation. However, this created one problem: when you install a new version of Octopus, we normally uninstall the old version. But a "per machine" installation cannot automatically uninstall a "per user" MSI. Instead, we added a check in **Octopus 2.1.3**: if a per-user installation already exists, the installation is blocked. The error message reads: > A previous version of **Octopus 2.0** is currently installed. This version cannot be automatically upgraded. You will need to uninstall this version before upgrading. Please view this page for details: [https://oc.to/UninstallFirst](https://oc.to/UninstallFirst) :::figure ![](/docs/img/administration/upgrading/legacy/images/3278002.png) ::: ### Uninstall Octopus 2.0 :::div{.success} **Your data is safe** Uninstalling the old Octopus MSI only removes the program files from disk and stops the Windows Service; your configuration files and the Octopus database will not be touched. When you install the new version, it will continue to work. When upgrading from one version of Octopus to another, we perform an uninstall of the old version and then install the new version; the only difference in this case is that due to limitations in Windows Installer/WiX, we can't easily locate the per-user installation.
::: You can uninstall the old version of the Octopus Deploy MSI by locating the entry in **Programs and Features** in the Windows Control Panel: :::figure ![](/docs/img/administration/upgrading/legacy/images/3278003.png) ::: After you have uninstalled the old version of Octopus, you can install the new version. ### If you are still getting this error After uninstalling the old version of Octopus and restarting, if you still receive this error, please navigate to the following registry keys:

```
HKEY_LOCAL_MACHINE\Software\Octopus\OctopusServer
HKEY_LOCAL_MACHINE\Software\Octopus\Tentacle
```

And delete the `InstallLocation` value. Depending on whether you are running the 32-bit registry editor or had previously installed 32-bit versions of Octopus on a 64-bit machine, you should also check:

```
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Octopus\OctopusServer
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Octopus\Tentacle
```

A quick PowerShell way to do this:

```powershell
$RegServer = 'HKLM:\Software\Octopus\OctopusServer'
$RegTentacle = 'HKLM:\Software\Octopus\Tentacle'
$RegServer64 = 'HKLM:\SOFTWARE\Wow6432Node\Octopus\OctopusServer'
$RegTentacle64 = 'HKLM:\SOFTWARE\Wow6432Node\Octopus\Tentacle'
$Entries = @($RegServer, $RegTentacle, $RegServer64, $RegTentacle64)

# Remove the InstallLocation value from whichever of the keys exist
foreach ($reg in $Entries) {
    if (Test-Path $reg) {
        Remove-ItemProperty -Path $reg -Name InstallLocation
    }
}
```

# Managing Octopus with code Source: https://octopus.com/docs/best-practices/platform-engineering/levels-of-responsibility.md There are three levels of responsibility that platform engineering teams can implement when managing downstream projects and spaces: * Customer responsibility (eventual inconsistency) * Shared responsibility (eventual consistency) * Centralized responsibility (enforced consistency) ## Customer responsibility model The customer responsibility model allows platform engineering teams to provision a space or project, but once those resources are created, the
customer assumes full control. The customer is free to edit these resources as they see fit, but the platform team will not push any further updates. This responsibility model is like providing a template PowerPoint presentation. People can copy the template and build their own presentations, but any updates to the original template are not propagated to the copies. This is also called the eventual inconsistency model because the upstream and downstream projects and spaces are expected to drift over time. ![Customer Responsibility model](/docs/img/platform-engineering/customer-responsibility-model.png) ## Shared responsibility model The shared responsibility model relies on Git-based workflows to merge changes between forked Git repositories backing Config-as-Code (CaC) projects. Because the two CaC repos are forks of each other, they share the same Git history, and processes like Git merges can be used to synchronize changes between these repositories over time. This is also called the eventual consistency model because the upstream and downstream artifacts are expected to drift but have the option to incorporate any important changes. ![Shared Responsibility model](/docs/img/platform-engineering/shared-responsibility-model.png) ## Centralized responsibility model The centralized responsibility model provides mostly read-only projects and spaces to customers. Customers can create and deploy releases, but are restricted from editing any settings. This model makes it easy to push out new changes because the platform team knows the state of all the downstream resources. This is also called the enforced consistency model because customers have little ability to edit projects or spaces.
![Centralized Responsibility model](/docs/img/platform-engineering/central-responsibility-model.png) ## Further reading The chapter "Platform Engineering Responsibility Models" from the book [DevEx as a Service with Platform Engineering](https://github.com/OctopusSolutionsEngineering/PlatformEngineeringBook/) discusses the responsibility models in greater detail, with recommendations on when to use one model over another, and the advantages and disadvantages of each. # Java error messages and troubleshooting Source: https://octopus.com/docs/deployments/java/error-messages.md The Java deployment steps include a number of unique error codes that may be displayed in the output if there was an error. Below is a list of the errors, along with any additional troubleshooting steps that can be taken to rectify them. ## WILDFLY-DEPLOY-ERROR-0001 There was an error taking a snapshot of the current configuration. ## WILDFLY-DEPLOY-ERROR-0002 There was an error deploying the artifact. ## WILDFLY-DEPLOY-ERROR-0003 There was an error reading the existing deployments. ## WILDFLY-DEPLOY-ERROR-0004 There was an error adding the package to the server group. ## WILDFLY-DEPLOY-ERROR-0005 There was an error deploying the package to the server group. This may be due to duplicate context paths. Check that the context path is not already assigned to an existing application. See [Defining Context Paths](#context_path) for more information on how context paths are assigned in WildFly. This may also occur if invalid server group names were supplied when deploying to a domain controller.
Look for entries like this in the verbose log output:

```
INFO: Result as JSON: {
    "outcome" : "failed",
    "failure-description" : "WFLYCTL0216: Management resource '[(\"server-group\" => \"invalid-server-group-name\")]' not found",
    "rolled-back" : true
}
```

## WILDFLY-DEPLOY-ERROR-0006 There was an error undeploying the package from the server group. ## WILDFLY-DEPLOY-ERROR-0007 There was an error deploying the package to the standalone server. This may be due to duplicate context paths. Check that the context path is not already assigned to an existing application. See [Defining Context Paths](#context_path) for more information on how context paths are assigned in WildFly. This may also be caused by an error that prevents the application being deployed from starting up. Check the application server logs for more information. ## WILDFLY-DEPLOY-ERROR-0008 There was an error enabling the package in the standalone server. ## WILDFLY-DEPLOY-ERROR-0009 There was an error logging into the management API. Ensure the server has started and that the IP/hostname and port details are correct. Make sure the credentials are correct. ## WILDFLY-DEPLOY-ERROR-0010 There was an error logging out of the management API. ## WILDFLY-DEPLOY-ERROR-0011 There was an error terminating the CLI object. ## WILDFLY-DEPLOY-ERROR-0012 There was an error changing the deployed state of the application. Make sure the application name is correct. ## WILDFLY-DEPLOY-ERROR-0013 The login was not completed in a reasonable amount of time. This can happen if no credentials were supplied with the step, and silent authentication failed. Either supply credentials to be used, or ensure that the user performing the deployment (the Tentacle service user in Windows or the SSH user in Linux and MacOS) has access to the application server `$JBOSS_HOME/standalone/tmp/auth` or `$JBOSS_HOME/domain/tmp/auth` directory. Also ensure that the hostname and port are correct.
You should be able to open the admin console using these details. :::figure ![Wildfly Admin Console](/docs/img/deployments/java/wildfly-admin-console.png) ::: ## WILDFLY-DEPLOY-ERROR-0014 An exception was thrown during the deployment. ## WILDFLY-DEPLOY-ERROR-0015 Failed to deploy the package to the WildFly/EAP standalone instance. ## WILDFLY-DEPLOY-ERROR-0016 Failed to deploy the package to the WildFly/EAP domain. ## WILDFLY-DEPLOY-ERROR-0017 There was a mismatch between the server type defined in the Octopus Deploy step and the server that was being deployed to. For example, the Octopus Deploy step defined the server as `Standalone` in the `Standalone or Domain Server` field, but the server was actually a domain controller. This error won't stop the deployment, but likely means that the Octopus step does not have the correct fields configured. ## TOMCAT-DEPLOY-ERROR-0001 There was an error deploying the package to Tomcat. ## TOMCAT-DEPLOY-ERROR-0002 There was an error deploying a tagged package to Tomcat. ## TOMCAT-DEPLOY-ERROR-0003 There was an error undeploying a package from Tomcat. ## TOMCAT-DEPLOY-ERROR-0004 There was an error enabling or disabling a package in Tomcat. ## TOMCAT-DEPLOY-ERROR-0005 This is a catch-all error message for unexpected errors during a Tomcat deployment. Ensure that: * The manager URL is correct. Ensure the URL includes the context of the manager application, and that the port and hostname/IP address are correct. Also ensure that the hostname/IP address can be resolved from the target machine hosting the Tentacle. A common example of a correct manager URL is `http://localhost:8080/manager`. * The Tomcat credentials are correct, and that the Tomcat user has been granted the `manager-script` role. * The firewall allows connection to the Tomcat server. * Tomcat is started and running. If you see errors such as:

```
23:22:33 Error | TOMCAT-DEPLOY-ERROR-0005: An exception was thrown during the deployment.
https://oc.to/JavaAppDeploy#tomcat-deploy-error-0005
23:22:33 Error | org.apache.http.conn.HttpHostConnectException: Connect to tomcat-server:8080 [tomcat-server/127.0.1.1] failed: Connection refused
```

Then ensure that the IP address of the Tomcat server (`127.0.1.1` in this example, as found in the list `[tomcat-server/127.0.1.1]`) is valid. If not, there may be a DNS issue. It may also be the case that the Tomcat process does not have the necessary permissions to modify the filesystem. Ensure that the process is running with the correct privileges. ## TOMCAT-DEPLOY-ERROR-0006 An HTTP return code indicated that the login failed due to bad credentials. Make sure the username and password are correct. ## TOMCAT-DEPLOY-ERROR-0007 An HTTP return code indicated that the login failed due to invalid group membership. Make sure the user is part of the `manager-script` group in the `tomcat-users.xml` file. See the [Tomcat documentation](https://tomcat.apache.org/tomcat-7.0-doc/manager-howto.html#Configuring_Manager_Application_Access) for more details on the groups used by the manager application. ## TOMCAT-DEPLOY-ERROR-0008 The application was not successfully started or stopped. This can happen if the application failed to initialize. Check the Tomcat logs for information on why the application could not be started. Also confirm that the context path and version match a deployed application. This is treated as a warning during deployment, but an error if encountered during the Tomcat start/stop step. ## JAVA-DEPLOY-ERROR-0001 The `Deploy a package` step was used with an unsupported package. This step does not support specialized file formats, like those used with Java packages. You may want to use a step like `Deploy Java Archive` instead. ## TOMCAT-HTTPS-ERROR-0001 ## TOMCAT-HTTPS-ERROR-0002 ## TOMCAT-HTTPS-ERROR-0003 You have attempted to deploy a certificate using a protocol that is not supported by the installed version of Tomcat.
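The `manager-script` role referenced in TOMCAT-DEPLOY-ERROR-0005 and TOMCAT-DEPLOY-ERROR-0007 above is granted in Tomcat's `conf/tomcat-users.xml`. A minimal sketch (the username and password here are placeholders; use the account configured in the Octopus step):

```xml
<tomcat-users>
  <role rolename="manager-script"/>
  <!-- placeholder credentials: substitute the real deployment account -->
  <user username="octopus" password="change-me" roles="manager-script"/>
</tomcat-users>
```

Depending on the configured realm, Tomcat may need a restart to pick up changes to this file.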
## TOMCAT-HTTPS-ERROR-0004 ## TOMCAT-HTTPS-ERROR-0005 ## TOMCAT-HTTPS-ERROR-0006 You have attempted to add an additional certificate to an existing `<Connector>` configuration in Tomcat 8.5 and above, or overwrite an existing `<Connector>` configuration, where the new protocol does not match the existing protocol. For example, the configuration already defines a `<Connector>` with the NIO protocol, and you are attempting to add a certificate with the APR protocol. This is not supported as changing the protocol may leave existing configurations in an invalid state. This error may also be thrown if a certificate is being added to an existing `<Connector>` that does not define the `protocol` attribute. Tomcat will auto-switch between APR and NIO if the `protocol` attribute is not set, but Octopus requires a fixed implementation to be defined before it can deploy a certificate. To solve this problem, either deploy the certificate using the same protocol that is already configured in the `<Connector>`, manually remove the existing `<Connector>` and redeploy the certificate via Octopus, or manually configure the `<Connector>` to use the new protocol and then deploy the certificate into it with Octopus. ## TOMCAT-HTTPS-ERROR-0007 Tomcat 8.5 and above do not support the BIO protocol. ## TOMCAT-HTTPS-ERROR-0008 If we have an existing configuration like this (an illustrative example; attribute values are placeholders):

```xml
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" defaultSSLHostConfigName="myHostName">
  <SSLHostConfig>
    <Certificate certificateKeystoreFile="conf/myHostName.jks" />
  </SSLHostConfig>
</Connector>
```

then this certificate configuration is assumed to have the hostName of `myHostName`, because it is derived from the `defaultSSLHostConfigName` attribute. At this point trying to add another default `<SSLHostConfig>` element will fail. For example, this is not a valid configuration, because both `<SSLHostConfig>` elements resolve to the default host name:

```xml
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" defaultSSLHostConfigName="myHostName">
  <SSLHostConfig>
    <Certificate certificateKeystoreFile="conf/myHostName.jks" />
  </SSLHostConfig>
  <SSLHostConfig hostName="myHostName">
    <Certificate certificateKeystoreFile="conf/other.jks" />
  </SSLHostConfig>
</Connector>
```

The above will throw an error about having duplicate default configurations. The error `TOMCAT-HTTPS-ERROR-0008` means Octopus prevented a certificate deployment that would lead to this invalid configuration.
You can fix this error by not deploying the new certificate as the default, or by manually moving the certificate configuration from the `<Connector>` element into a `<SSLHostConfig>` element before deploying another certificate with Octopus. ## TOMCAT-HTTPS-ERROR-0009 Tomcat 7.0 does not support the Non-Blocking IO 2 Connector. ## TOMCAT-HTTPS-ERROR-0010 The `server.xml` file could not be found. When the `CATALINA_BASE` location is defined, `server.xml` is expected to be found at `$CATALINA_BASE/conf/server.xml`. When the `CATALINA_BASE` location is not defined, `server.xml` is expected to be found at `$CATALINA_HOME/conf/server.xml`. Ensure that the `CATALINA_BASE` directory is valid (if it is defined) and that the user account performing the deployment (i.e. the Tentacle service or the SSH user) has permissions to access the `server.xml` file. ## TOMCAT-HTTPS-ERROR-0011 Failed to extract the version number from the information supplied. ## TOMCAT-HTTPS-ERROR-0012 Failed to generate a unique file. ## TOMCAT-HTTPS-ERROR-0013 The `server.xml` file was not valid XML, or was not accessible. Check to make sure that the user running the Octopus Tentacle in Windows or the SSH user in Linux/MacOS has permissions to read the `server.xml` file. ## TOMCAT-HTTPS-ERROR-0014 Failed to save the `server.xml` file. Check to make sure that the user running the Octopus Tentacle in Windows or the SSH user in Linux/MacOS has permissions to write to the `server.xml` file. ## TOMCAT-HTTPS-ERROR-0016 The private key could not be created. Check to make sure that the user running the Octopus Tentacle in Windows or the SSH user in Linux/MacOS has permissions to create files in the Tomcat `conf` directory. ## TOMCAT-HTTPS-ERROR-0017 The public key could not be created. Check to make sure that the user running the Octopus Tentacle in Windows or the SSH user in Linux/MacOS has permissions to create files in the Tomcat `conf` directory. ## TOMCAT-HTTPS-ERROR-0018 Failed to find the `lib/catalina.jar` file in the Tomcat directory.
Make sure the Tomcat installation path is correct. Also check to make sure that the user running the Octopus Tentacle in Windows or the SSH user in Linux/MacOS has permissions to list the Tomcat `lib` directory, and has read access to the `lib/catalina.jar` file. ## TOMCAT-HTTPS-ERROR-0019 The path defined to hold the keys does not exist. ## TOMCAT-HTTPS-ERROR-0020 The keystore, private key or public key filename must be an absolute path if it is specified. ## WILDFLY-HTTPS-ERROR-0001 An exception was thrown during the HTTPS configuration. ## WILDFLY-HTTPS-ERROR-0004 There was an error configuring the Elytron server SSL context. ## WILDFLY-HTTPS-ERROR-0005 There was an error removing the legacy security realm, or an error creating the keystore file. Check for an error like `java.io.FileNotFoundException: /opt/wildfly/standalone/configuration/Internet_Widgets_Pty_Ltd1.keystore (Permission denied)`, and ensure the Tentacle user account has the correct permissions to create the keystore file. ## WILDFLY-HTTPS-ERROR-0006 There was an error adding the Elytron security context. ## WILDFLY-HTTPS-ERROR-0007 There was an error with the batched operation to remove the legacy security realm and add the Elytron security context. ## WILDFLY-HTTPS-ERROR-0008 There was an error reloading the server. ## WILDFLY-HTTPS-ERROR-0009 There was an error adding the Elytron key store. ## WILDFLY-HTTPS-ERROR-0010 There was an error configuring the Elytron key store. ## WILDFLY-HTTPS-ERROR-0011 There was an error adding the Elytron key manager. ## WILDFLY-HTTPS-ERROR-0012 There was an error configuring the Elytron key manager. ## WILDFLY-HTTPS-ERROR-0013 There was an error adding the Elytron server SSL context. ## WILDFLY-HTTPS-ERROR-0014 There was an error configuring the Elytron server SSL context. ## WILDFLY-HTTPS-ERROR-0015 There was an error reading the app server config path. ## WILDFLY-HTTPS-ERROR-0017 Configuring a keystore requires that the keystore name be defined.
## WILDFLY-HTTPS-ERROR-0018 A required property was not defined. ## WILDFLY-HTTPS-ERROR-0019 The server being configured did not match the type of server (either standalone or domain) defined in the step. ## WILDFLY-HTTPS-ERROR-0020 There was an error adding the security realm. ## WILDFLY-HTTPS-ERROR-0021 There was an error adding the keystore to the security realm. ## WILDFLY-HTTPS-ERROR-0022 There was an error configuring the existing keystore information in the security realm. ## WILDFLY-HTTPS-ERROR-0023 There was an error getting the undertow servers. ## WILDFLY-HTTPS-ERROR-0024 There was an error adding a new https listener in undertow. This can happen if the application server fails to start an existing https listener. Check the log files for messages like: ``` No SSL Context available from security realm 'realmname'. Either the realm is not configured for SSL, or the server has not been reloaded since the SSL config was added. ``` ## WILDFLY-HTTPS-ERROR-0025 There was an error configuring the existing https listener. ## WILDFLY-HTTPS-ERROR-0026 Failed to get the default interface for socket group. ## WILDFLY-HTTPS-ERROR-0027 Failed to get the https socket binding. ## WILDFLY-HTTPS-ERROR-0028 Failed to get socket binding for standalone. ## WILDFLY-HTTPS-ERROR-0029 There was an error adding a new https connector in the web subsystem. ## WILDFLY-HTTPS-ERROR-0030 There was an error configuring the existing https connector. ## WILDFLY-HTTPS-ERROR-0031 Failed to get socket binding for host. ## WILDFLY-HTTPS-ERROR-0032 Failed to get slave hosts. ## WILDFLY-HTTPS-ERROR-0033 Failed to get master hosts. ## WILDFLY-HTTPS-ERROR-0034 Failed to get master hosts. ## WILDFLY-HTTPS-ERROR-0035 Failed to get servers for host. ## WILDFLY-HTTPS-ERROR-0036 Failed to save legacy web subsystem https connector as a batch operation. ## WILDFLY-HTTPS-ERROR-0037 A supplied profile did not exist in the domain. ## WILDFLY-HTTPS-ERROR-0038 The server is not in a running state. 
## WILDFLY-HTTPS-ERROR-0039 Failed to find either web or undertow subsystems. This means that Calamari has tried to find either the web or undertow subsystem to determine how the certificate is to be configured, and neither could be found. This probably means the server is still starting up and is not responding to the read-resource queries. ## WILDFLY-HTTPS-ERROR-0040 Failed to load any extensions. ## WILDFLY-HTTPS-ERROR-0041 The keystore filename must be an absolute path if it is specified. For example, you may have entered a value like `my.store` as the `Keystore Filename`. This value is required to be a path like `C:\my.store` or `/opt/my.store`. ## WILDFLY-HTTPS-ERROR-0042 When the keystore is not relative to a path, it must be absolute. ## WILDFLY-HTTPS-ERROR-0043 When the keystore is relative to a path, it must not be absolute. ## WILDFLY-ERROR-0001 There was an error entering batch mode. ## WILDFLY-ERROR-0002 There was an error running the batch. ## JAVA-HTTPS-ERROR-0001 Certificate file does not contain any certificates. This is probably because the input certificate file is invalid. ## JAVA-HTTPS-ERROR-0002 Could not find a private key. This is probably because the input key file is invalid. ## JAVA-HTTPS-ERROR-0003 The path supplied as the location of a unique file was not a directory. ## JAVA-HTTPS-ERROR-0004 The path supplied as the location of a unique file does not exist. ## JAVA-HTTPS-ERROR-0005 Failed to create the keystore file. Ensure that the user running the Tentacle service for a Windows target or the SSH account Octopus uses to connect to the Linux or MacOS target has permissions to create a new file, or overwrite the existing file, at the configured path. ## KEYSTORE-ERROR-0001 An exception was thrown during the deployment of the Java keystore. ## KEYSTORE-ERROR-0002 The keystoreName and defaultCertificateLocation cannot both be blank. ## KEYSTORE-ERROR-0003 The keystore filename must be an absolute path if it is specified.
For example, you may have entered a value like `my.store` as the `Keystore Filename`. This value is required to be a path like `C:\my.store` or `/opt/my.store`. ## KEYSTORE-ERROR-0004 The keystore filename must be supplied. # Cleaning up Environments Source: https://octopus.com/docs/deployments/patterns/elastic-and-transient-environments/cleaning-up-environments.md Octopus can automatically remove unwanted machines from environments based on their health status. This is useful when an environment is scaled down and orphaned deployment targets remain in Octopus. Automatic environment clean up can be configured through machine policies. ## Machine policies Machine policies are machine-related settings that can be applied per-machine. They can be accessed at **Infrastructure ➜ Machine policies**. In this example we will create a machine policy to automatically delete machines when they become unavailable. ## Creating a machine policy for environment cleanup 1. Navigate to the *Machine policies* screen. 2. Create a new machine policy by selecting **Add machine policy**: :::figure ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/creating-machine-policy.png) ::: 3. Name the machine policy "Clean up machines". 4. Change the setting "Clean up unavailable machines" to "Automatically delete unavailable machines". By selecting this option and setting the time to 0, any machines that fail a health check and become unavailable will be deleted: :::figure ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/cleanup-setting.png) ::: 5. Save the machine policy. 6. Assign the machine policy to a machine by selecting a machine and using the *Policy* drop-down to select the machine policy: :::figure ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/assign-to-machine.png) ::: 7. Turn the machine off and run a health check. Machine deletion happens as part of health checks.
:::div{.hint}
Read more about [machine policies](/docs/infrastructure/deployment-targets/machine-policies).
:::

## Troubleshooting automatic environment clean up

Machine clean-up happens as part of health checks, and separate clean-up logs are not stored; clean-up logging is written to the log of the health check task that performed the deletion. Audit events recording the automatic clean-up of machines can be accessed via the **Configuration ➜ Diagnostics** page by selecting **Machine clean up events**, which redirects to the audit log of automatic machine removals.

:::figure
![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/deletion-audit.png)
:::

## Learn more

- [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns).

# Installing the Tentacle VM extension via the classic Azure Portal

Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-the-classic-azure-portal.md

:::div{.problem}
The VM extension is deprecated and no longer supported. All customers using the VM extension should migrate to [DSC](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc).
:::

The Azure VM Extension cannot be installed via the Classic Azure Portal, as it lacks support for adding extensions. We recommend either using the new Azure Portal, or using the [CLI](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-the-azure-cli/) or [PowerShell](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-powershell) methods. For further information, please see the [Microsoft documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/classic/manage-extensions?toc=%2fazure%2fvirtual-machines%2fwindows%2fclassic%2ftoc.json).
# Getting started

Source: https://octopus.com/docs/octopus-ai/assistant/getting-started.md

## Prerequisites

- An Octopus instance (cloud or on-premises with [public accessibility](#using-with-on-premises-instances-or-cloud-instance-allow-lists))
- A Chromium-based browser (Chrome, Brave, Edge, etc.) that supports Chrome Web Store extensions

## Installation

1. Install the [Octopus AI Assistant Chrome extension](https://oc.to/install-ai-assistant)
2. Navigate to your Octopus Deploy instance. You will see a new icon in the bottom right corner of your Chrome browser
3. Click the AI Assistant icon in your browser to start using the assistant

## Using with on-premises instances or cloud instance allow lists

For on-premises Octopus instances, ensure your server accepts HTTPS requests from IP address `51.8.40.170` to enable AI Assistant functionality. The DNS entry of your Octopus Server will also need to be resolvable over the Internet for the IP address to be able to communicate with it. Cloud instances with the `IP address allow list` feature activated will need to add `51.8.40.170` to the allow list to enable AI Assistant functionality:

![Control Centre](/docs/img/octopus-ai/assistant/cloud-portal.png)

:::div{.warning}
It is not possible to integrate Octopus AI Assistant with an on-premises Octopus instance that cannot accept HTTPS requests from this public IP address.
:::

## Restricting all outbound network access

A subset of features provided by the Chrome Extension, such as some community dashboards, can operate without any external network access. To prevent the Chrome Extension from making any outbound requests, set the `Site access` option to `On specific sites` and add the address of your Octopus instance:

![Chrome extension settings](/docs/img/octopus-ai/assistant/restrict-access.png)

:::div{.warning}
Restricting network access will prevent most features of the AI Assistant from working correctly.
This setting is intended to be used for organizations that wish to use community dashboards and must prevent external network access.
:::

# Octopus Recovery Agent

Source: https://octopus.com/docs/octopus-ai/recovery-agent.md

One of the most important aspects of Continuous Delivery is being able to rapidly recover when something goes wrong. There are many tools you can use to boost recovery, like feature toggles, blue/green deployments, canary deployments, and rollbacks. A short mean time to recovery is the hallmark of an elite software delivery engine.

The Recovery Agent is designed to help you recover from failure, fast. In the current release the agent uses AI to analyze and diagnose the cause of deployment failures within the deployments you manage, and suggest potential steps to recover. In the future, the Agent will be able to execute those remediation steps for you. You will be in the loop and in control, but the Recovery Agent will take care of the heavy lifting in the heat of the moment.

The Recovery Agent is now available to all customers starting with version *2026.1*. We will continue to evolve and refine the Recovery Agent alongside customer feedback.

:::figure
![A screenshot of Octopus Deploy's Recovery Agent showing suggestions on how to fix a deployment failure](/docs/img/octopus-ai-assistant/recovery-agent.gif)
:::

## FAQ

### Is my data being used to train AI?

No. Octopus Agents leverage pre-trained foundation AI models from providers like OpenAI. We host these models in Azure, as we do with all of our Octopus Cloud resources. Interactions with foundation AI models are entirely stateless. Your data is not used to train the models. Your data is not stored by the models, and they do not retain a "history" of interactions.

### How is my data being processed?

Our foundation models are hosted within Azure, which provides strong guarantees about how your data is processed and stored.
In short, your prompts (inputs) and completions (outputs) are not available to other customers and are not used to train the models. To ensure data residency requirements are met, we use geo-based routing that automatically assigns requests from the EU to a foundation model hosted in Sweden, ensuring all data transmission and processing happens within an EU member country. Octopus complies with the European Union's General Data Protection Regulation ([GDPR](https://octopus.com/legal/gdpr)).

It is important to note that Octopus never prints out or sends secrets to external services as long as they are clearly [identified as sensitive](https://octopus.com/docs/projects/variables/sensitive-variables). For a deeper understanding of the technical aspects of data processing, please consult [Data, privacy, and security for Azure Direct Models in Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/responsible-ai/openai/data-privacy?tabs=azure-portal).

### Will you charge me for this feature?

We are not charging customers for this feature. This may change in the future; however, we'll clearly inform customers before introducing pricing.

### Is there any setup or configuration for this feature?

No! Once enabled, this feature will appear in your UI as shown in the gif above.

### Can I opt out of having the Recovery Agent in my instance?

For Octopus Cloud instances, please get in touch with our [Support team](https://octopus.com/support), who can opt you out. For Octopus Server instances, this is managed by adding an OS system environment variable to the machines hosting your Octopus nodes. An example of this using Windows is below. You will need to restart your Octopus Server service for this to take effect. If operating a multi-node Octopus instance (High Availability), you will need to add the variable on each node and restart the Octopus Server service on each node.
:::figure
![A screenshot of a Windows OS System Environment Variable](/docs/img/octopus-ai-assistant/recovery-agent-optout-envvar.png)
:::

# octopus account aws create

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-aws-create.md

Create an AWS account in Octopus Deploy

```text
Usage:
  octopus account aws create [flags]

Aliases:
  create, new

Flags:
      --access-key string         The AWS access key to use when authenticating against Amazon Web Services.
  -d, --description string        A summary explaining the use of the account to other users.
  -D, --description-file file     Read the description from file
  -e, --environment stringArray   The environments that are allowed to use this account
  -n, --name string               A short, memorable, unique name for this account.
      --region string             The AWS region to use for this account.
      --secret-key string         The AWS secret key to use when authenticating against Amazon Web Services.

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus account aws create
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Configuring Authentik

Source: https://octopus.com/docs/security/authentication/oidc-authentication/configuring-authentik.md

Authentication using [Authentik](https://goauthentik.io), a self-hosted identity management service. To use Authentik authentication with Octopus you will need to:

1.
Configure Authentik to trust your Octopus Deploy instance (by setting it up as an app in Authentik).
2. Configure your Octopus Deploy instance to trust and use Authentik as an Identity Provider.

## Configure Authentik

1. [Install Authentik](https://docs.goauthentik.io/install-config/) using one of the many supported installation methods and complete the initial setup steps.
1. Open the **Authentik Console** in your web browser, and make sure that you're in the **Admin interface** (check the button in the top right beside your profile picture).
1. Navigate to **Applications** and click **Create with Provider** to create a new client that represents Octopus Deploy:
   - **Name**: `Octopus Deploy` or the domain name of the Octopus Deploy server is a good option
   - **Slug**: can be anything; the domain name of the Octopus Deploy server is a good option
   - **UI Settings** -> **Launch URL**: the URL of your Octopus Deploy server, so that users of Authentik can click through to Octopus from the user interface

   ![New Application](/docs/img/security/authentication/authentik/new-application.png)
1. Click **Next**, then choose **OAuth2/OpenID Provider** as the provider type, then click **Next**.

   ![New Application Provider](/docs/img/security/authentication/authentik/new-application-provider.png)
1. Enter the following details:
   - **Authentication Flow**: `default-provider-authorization-implicit-consent`
   - **Client type**: `Confidential`
   - **Redirect URIs/Origins** should have a single `Strict` entry of `https://your-octopus-url/api/users/authenticatedToken/GenericOidc` (replacing `https://your-octopus-url` with the URL of your Octopus Server).

   ![New Application Configure Provider](/docs/img/security/authentication/authentik/new-application-configure-provider.png)
1. Note down both the **Client ID** and **Client Secret** to configure in Octopus Deploy later.
1.
Choose an existing **Signing Key** to be used for signing the JSON Web Tokens (JWTs), but leave **Encryption Key** empty, as Octopus Deploy does not support encrypted tokens. Click **Next**.
1. If you want to bind a policy, group, or user to control who is allowed to authenticate, you can configure this here; otherwise leave the page empty and click **Next**.
1. Review the application configuration, then click **Submit**.
1. Navigate to **Applications**, then **Providers**, and open the settings for the new provider. Note down the **OpenID Configuration Issuer** URL; this will be the **Issuer** value in Octopus Deploy.

## Configure Octopus Server

1. Navigate to **Configuration ➜ Settings ➜ OpenID Connect** and populate the following fields:
   - **Enabled** should be set to `Yes`.
   - **Role Claim Type** should be `groups`, to pull group membership from Authentik.
   - **Username Claim Type** should be set to `preferred_username`.
   - **Resource** should be left unset.
   - **Scopes** should be set to `openid profile email groups`, by adding `groups` to the list.
   - **Display Name** can be used to customize the appearance of the button on the Octopus Deploy login screen. Use a name that your users will recognize for this identity provider.
   - **Issuer**, **Client ID**, and **Client Secret** should be the values you noted when creating the application.
   - **Allow Auto User Creation** determines if Octopus Deploy should automatically create user accounts, or only allow authentication for users that already exist in Octopus Deploy.
1. Click **Save** to apply the changes.
1. If you sign out of Octopus Deploy, you should now see a new button on the login screen to authenticate with the OIDC provider.

### Assign external groups to Octopus teams (optional)

If you want to use groups in Authentik to manage permissions in Octopus Deploy, you can assign those groups to **Teams** in the Octopus Portal.

1. Open the Octopus Portal and select **Configuration ➜ Teams**.
1.
Either create a new **Team** or choose an existing one.
1. Under the **Members** section, select the option **Add External Group/Role**.

   ![Adding Octopus Teams from external providers](/docs/img/security/authentication/images/add-octopus-teams-external.png)
1. Enter the name of the Authentik group as the **Group/Role ID** and then choose the name that should be displayed in Octopus, then click **Add**. In this example, we're adding an existing Authentik group called `octopusTesters`.

   ![Add Octopus Teams Dialog](/docs/img/security/authentication/images/add-octopus-teams-external-dialog.png)
1. Save your changes by clicking the **Save** button.

### Octopus user accounts are still required

Octopus still requires a [user account](/docs/security/users-and-teams/) so you can assign those people to Octopus teams and subsequently grant permissions to Octopus resources. Octopus will automatically create a [user account](/docs/security/users-and-teams) based on the profile information returned in the security token, which includes an **Identifier**, **Name**, and **Email Address**.

:::div{.hint}
**How Octopus matches external identities to user accounts**

When the security token is returned from the external identity provider, Octopus looks for a user account with a matching **Identifier**. If there is no match, Octopus looks for a user account with a matching **Email Address**. If a user account is found, the External Identifier will be added to the user account for next time. If a user account is not found, Octopus will create one using the profile information in the security token.
:::

:::div{.success}
**Already have Octopus user accounts?**

If you already have Octopus user accounts and you want to enable external authentication, simply make sure the Email Address matches in both Octopus and the external identity provider. This means your existing users will be able to sign in using an external identity provider and still belong to the same teams in Octopus.
:::

### Getting permissions

If you are installing a clean instance of Octopus Deploy you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command:

```powershell
Octopus.Server.exe admin --username USERNAME --email EMAIL
```

The most important part of this command is the email, as usernames are not necessarily included in the claims from the external providers. When the user logs in, the matching logic must be able to align their user record based on the email from the external provider, or they will not be granted permissions.

## Troubleshooting

If you are having difficulty configuring Octopus to authenticate with Authentik, check your [server logs](/docs/support/log-files) for warnings, and check the Authentik logs. You can access logs in Authentik by navigating to the **Admin interface**, then selecting **Events** and then **Logs** from the sidebar.

### Double- and triple-check your configuration

Unfortunately, security-related configuration is sensitive to even small mistakes. Make sure:

- You don't have any typos or copy-paste errors.
- Remember things are case-sensitive.
- Remember to remove or add slash characters - they matter too!

### Check OpenID Connect metadata is working

If you open the provider settings in the Authentik console, the **OpenID Configuration URL** specifies where the configuration metadata is hosted. Make sure that you can open this URL in a web browser and that this URL is reachable from your Octopus Deploy server.

### Contact Octopus Support

If you aren't able to resolve the authentication problems yourself using these troubleshooting tips, please reach out to our [support team](https://octopus.com/support) with:

1. The contents of your OpenID Connect Metadata or the link to download it (see above).
2. A screenshot of the Octopus User Accounts, including their username, email address, and name.
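The metadata check described in the troubleshooting section above can also be done from a terminal. As a sketch (the Authentik host and application slug below are placeholders; substitute the Issuer value you noted from your provider), the discovery document is served at the standard OpenID Connect well-known path under the issuer:

```shell
# Placeholder issuer; substitute the Issuer value you noted from Authentik.
issuer="https://authentik.example.com/application/o/octopus-deploy/"

# Per OpenID Connect Discovery, metadata lives at a well-known path under the issuer.
metadata_url="${issuer%/}/.well-known/openid-configuration"
echo "$metadata_url"

# Verify the document is reachable from the Octopus Server host, e.g.:
# curl -fsS "$metadata_url"
```

Run the `curl` check from the Octopus Server itself, since that is the host that must be able to reach the metadata endpoint.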
# Minimum TLS Requirements

Source: https://octopus.com/docs/security/octopus-tentacle-communication/minimum-tls-requirements.md

Octopus Server and Tentacle use **RSA-based X.509 certificates** to establish mutual trust. TLS configuration, cipher selection, and protocol enforcement are handled by the underlying operating system - **Schannel** on Windows and **OpenSSL** on Linux. If your environment enforces custom cipher or TLS hardening policies, it must still meet the **minimum requirements** below to ensure connectivity between the Octopus Server and Tentacle agents. As TLS negotiation depends on both peers, each host must support at least one compatible protocol, cipher suite, and signature algorithm to complete the handshake.

:::div{.note}
This document defines the *minimum configuration required* for Octopus to communicate securely. You may apply any additional TLS 1.2 / 1.3 hardening, cipher ordering, or protocol restrictions that meet your organization’s security standards, provided the required items below remain enabled. Octopus does **not** prescribe a complete enterprise security baseline - only the interoperability boundary required for successful Server <-> Tentacle trust.
:::

## Protocols

- TLS 1.2 - Minimum recommended
- TLS 1.3 - Optional

:::div{.warning}
While self-hosted Octopus Server instances are compatible with TLS 1.0 and TLS 1.1 (if configured), these protocols are generally considered insecure and are **not recommended** for use.
:::

## Cipher Suites

| Function | Windows / Schannel name | OpenSSL name |
| :----------------------------------------- | :------------------------------------ | :--------------------------- |
| ECDHE with RSA, AES-128, GCM, SHA-256 | TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 | ECDHE-RSA-AES128-GCM-SHA256 |
| ECDHE with RSA, AES-256, GCM, SHA-384 | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 | ECDHE-RSA-AES256-GCM-SHA384 |
| ECDHE with RSA, ChaCha20-Poly1305, SHA-256 | TLS_CHACHA20_POLY1305_SHA256 | TLS_CHACHA20_POLY1305_SHA256 |

These suites provide forward secrecy (**ECDHE**) and authenticated encryption (**AES-GCM**) for Octopus’s RSA certificates.

## Signatures and Hashes

- `SHA-256`
- For **TLS 1.3**, ensure your configuration supports at least one of the following RSA signature schemes:
  - `rsa_pkcs1_sha256`
  - `rsa_pss_rsae_sha256`

Many off-the-shelf TLS 1.3 hardening templates disable **PKCS#1 v1.5** padding entirely. Octopus uses an RSA certificate signed with PKCS#1 v1.5, and removing this support will break TLS 1.3 handshakes. Ensure that either **RSA-PSS** or **RSA-PKCS#1 v1.5** remains available.

## Key Exchanges and Curves

- `ECDHE`
- `secp256r1 (P-256)`

Octopus communication uses **ECDHE** key exchange for forward secrecy. Ensure that at least one elliptic curve remains enabled - we recommend keeping `secp256r1` (NIST P-256).

# External groups and roles

Source: https://octopus.com/docs/security/users-and-teams/external-groups-and-roles.md

Some of the authentication providers allow external groups or roles to be added as Members of Teams in Octopus. This section outlines how to add external groups/roles to Teams. Note: Not all authentication providers include support for external groups and roles. See [Authentication provider compatibility](/docs/security/authentication/auth-provider-compatibility) for more information.

---

Adding external groups or roles to Octopus Teams can be helpful in controlling the permissions of users.
In the case of Active Directory, when a user logs in to Octopus, we check the user's groups. If the user is in one of the groups assigned to an Octopus Team, they are considered part of that Team and will have the permissions set for that Team. Depending on which authentication providers you have enabled, the following buttons may appear on the Team page.

:::figure
![](/docs/img/security/users-and-teams/images/members-buttons.png)
:::

## Add Active Directory group {#ExternalGroupsandRoles-AddActiveDirectorygroup}

This button appears if you have the Active Directory authentication provider enabled, and when activated you will see the following dialog:

:::figure
![](/docs/img/security/users-and-teams/images/add-ad-group.png)
:::

The search on this dialog will locate any groups in the domain that start with the text you provide.

### Trusted domains {#ExternalGroupsandRoles-TrustedDomainsTrustedDomains}

If your environment has trusted domains, you can search for groups in the trusted domain by prefixing the search text with "**domain**" (where domain is the name of the Trusted Domain).

:::figure
![](/docs/img/security/users-and-teams/images/add-ad-group-trusted-domains.png)
:::

:::div{.hint}
Domain trust is the only constraint when Active Directory users authenticate with Octopus. If the user does not exist in Octopus, but is able to authenticate with Active Directory, a new Octopus user will automatically be created for the user. This new user will be placed in the Everyone Team, which by default has limited permissions, so they won't be able to do anything until they are moved to a Team with additional permissions assigned to it.
:::

## Add external role {#ExternalGroupsandRoles-AddExternalRole}

This button appears if you have an external authentication provider enabled (e.g.
Microsoft Entra ID), and when activated you will see the following dialog:

:::figure
![](/docs/img/security/users-and-teams/images/add-external-role.png)
:::

The Role Id corresponds to the role id from the external provider (learn more about [roles for Microsoft Entra ID](/docs/security/authentication/azure-ad-authentication)); Display Name is purely for display in the Team page.

# How to automate Octopus Deploy upgrades

Source: https://octopus.com/docs/administration/upgrading/guide/automate-upgrades.md

Automating the Octopus Deploy upgrade ensures all essential steps are executed during an upgrade and can reduce the outage window to around 5-10 minutes. This guide provides the steps necessary to automate the upgrade process.

## Overview

This guide was written for upgrading Octopus Deploy on Windows.

## System Integrity Check

Before performing any upgrade steps, we highly recommend performing a [System Integrity Check](/docs/administration/managing-infrastructure/diagnostics) on your live instance database. This is so we can check that the Database Schema is in the expected condition for the upgrade. If the integrity check passes, you are good to start the upgrade process. If it fails, please contact [support](https://octopus.com/support) with the [raw output of the task](/docs/support/get-the-raw-output-from-a-task), and we can get that fixed for you.

## Prep work

Before going down the automation path, it is critical to back up the master key and license key. If anything goes wrong, you might need these keys to do a restore. It is better to have the backup and not need it than need the backup and not have it. The master key doesn't change, while your license key changes, at most, once a year. Back them up once to a secure location and move on.

1. Backup the Master Key.
1. Backup the License Key.

### Backup the Octopus Master Key

Octopus Deploy uses the Master Key to encrypt and decrypt sensitive values in the Octopus Deploy database.
The Master Key is securely stored on the server, not in the database. If the VM hosting Octopus Deploy is somehow destroyed or deleted, the Master Key goes with it. To view the Master Key, you will need login permissions on the server hosting Octopus Deploy. Once logged in, open up the Octopus Manager and click the view master key button on the left menu.

:::figure
![](/docs/img/shared-content/upgrade/images/view-master-key.png)
:::

Save the Master Key to a secure location, such as a password manager or a secret manager. An alternative means of accessing the Master Key is to run `Octopus.Server.exe show-master-key` from the command line. Please note: you will need to be running as an administrator to do that.

:::figure
![](/docs/img/shared-content/upgrade/images/master-key-command-prompt.png)
:::

### Backup the License Key

Like the Master Key, the License Key is necessary to restore an existing Octopus Deploy instance. You can access the License Key by going to **Configuration ➜ License**. If you cannot access your License Key, please contact our [support team](https://octopus.com/support) and they can help you recover it.

## Upgrading single node Octopus Deploy instances

A single node Octopus Deploy instance is an instance not configured for [high availability](/docs/administration/high-availability). The instance is running on a single Windows Server, and as such, you can run this script to:

1. Check for a new version (exit if the current version is the newest version).
1. Enable [maintenance mode](/docs/administration/managing-infrastructure/maintenance-mode).
1. Backup all the files if it is a major version change.
1. Stop the instance.
1. Backup the database.
1. Download and install the MSI.
1. Upgrade the database.
1. Start the instance back up.

Depending on the duration of the upgrade, your Octopus Server may still be starting up when the script completes.
It will still be in maintenance mode, giving you a chance to log in to the Octopus Web Portal and verify things are working as expected.

:::div{.hint}
Replace the variable values at the start of the script with ones applicable to your installation.
:::

```powershell
$url = 'https://samples.octopus.app'
$apiKey = "API-YOUR-KEY"
$octopusDeployDatabaseName = "OctopusDeploy"
$sqlBackupFolderLocation = "\\ServerStorage\Share\DatabaseBackup"
$fileBackupLocation = "\\ServerStorage\Share\FileBackup"
$downloadDirectory = "${env:Temp}"

# This is the default install location, but yours could be different
$installPath = "${env:ProgramFiles}\Octopus Deploy\Octopus"
$serverExe = "$installPath\Octopus.Server.Exe"

# Get the latest minor/patch version
$currentVersion = (Invoke-RestMethod "$Url/api").Version
$versionInfo = Invoke-RestMethod "https://octopus.com/downloads/latest/WindowsX64/OctopusServer/version"
$upgradeVersion = $versionInfo.Version

if ($upgradeVersion -eq $currentVersion) {
    Write-Host "No new versions found. Quitting..."
    exit
}

# Download the installer
$msiFilename = "Octopus.$upgradeVersion-x64.msi"
Write-Host "Downloading $msiFilename"
Start-BitsTransfer -Source $versionInfo.DownloadUrl -Destination "$downloadDirectory\$msiFilename"

# Place Octopus into maintenance mode
if (-not (Invoke-RestMethod -Uri "$url/api/maintenanceconfiguration" -Headers @{'X-Octopus-ApiKey' = $apiKey}).IsInMaintenanceMode) {
    Invoke-RestMethod `
        -Method Put `
        -Uri "$url/api/maintenanceconfiguration" `
        -Headers @{'X-Octopus-ApiKey' = $apiKey} `
        -Body (@{ Id = "maintenance"; IsInMaintenanceMode = $true } | ConvertTo-Json)
}

$versionSplit = $currentVersion -Split "\."
$upgradeSplit = $upgradeVersion -Split "\."
if ($versionSplit[0] -ne $upgradeSplit[0]) {
    Write-Host "Major version upgrade has been detected, backing up all the folders"

    $serverFolders = Invoke-RestMethod -Uri "$url/api/configuration/server-folders/values" -Headers @{'X-Octopus-ApiKey' = $apiKey}

    $msiExitCode = (Start-Process -FilePath "robocopy" -ArgumentList "$($serverFolders.LogsDirectory) $fileBackupLocation\TaskLogs /mir" -Wait -PassThru).ExitCode
    if ($msiExitCode -ge 8) {
        Throw "Unable to copy files to $fileBackupLocation\TaskLogs"
    }

    $msiExitCode = (Start-Process -FilePath "robocopy" -ArgumentList "$($serverFolders.ArtifactsDirectory) $fileBackupLocation\Artifacts /mir" -Wait -PassThru).ExitCode
    if ($msiExitCode -ge 8) {
        Throw "Unable to copy files to $fileBackupLocation\Artifacts"
    }

    $msiExitCode = (Start-Process -FilePath "robocopy" -ArgumentList "$($serverFolders.PackagesDirectory) $fileBackupLocation\Packages /mir" -Wait -PassThru).ExitCode
    if ($msiExitCode -ge 8) {
        Throw "Unable to copy files to $fileBackupLocation\Packages"
    }

    $msiExitCode = (Start-Process -FilePath "robocopy" -ArgumentList "$($serverFolders.EventExportsDirectory) $fileBackupLocation\EventExports /mir" -Wait -PassThru).ExitCode
    if ($msiExitCode -ge 8) {
        Throw "Unable to copy files to $fileBackupLocation\EventExports"
    }
}

# Finish any remaining tasks and stop the service
& $serverExe node --instance="OctopusServer" --drain=true --wait=0
& $serverExe service --instance="OctopusServer" --stop

try {
    # Backup database
    # ${octopusDeployDatabaseName} is wrapped in braces so the trailing
    # underscore is not parsed as part of the variable name.
    $backupFileName = "${octopusDeployDatabaseName}_" + (Get-Date -Format FileDateTime) + '.bak'
    $backupFileFullPath = "$sqlBackupFolderLocation\$backupFileName"

    # OctopusServer is the default instance name. If you have multiple instances, or are not using the default instance name, change the --instance parameter.
    $instanceConfig = (& $serverExe show-configuration --instance="OctopusServer" --format="JSON") | Out-String | ConvertFrom-Json

    $sqlConnection = New-Object System.Data.SqlClient.SqlConnection
    $sqlConnection.ConnectionString = $instanceConfig.'Octopus.Storage.ExternalDatabaseConnectionString'

    $command = $sqlConnection.CreateCommand()
    $command.CommandType = [System.Data.CommandType]'Text'
    $command.CommandTimeout = 0

    Write-Host "Opening the connection"
    $sqlConnection.Open()

    $command.CommandText = "BACKUP DATABASE [$octopusDeployDatabaseName] TO DISK = '$backupFileFullPath' WITH FORMAT"
    $command.ExecuteNonQuery()
    Write-Host "Successfully backed up the database $octopusDeployDatabaseName"

    Write-Host "Closing the connection"
    $sqlConnection.Close()
}
catch {
    Write-Error $_.Exception
    exit 1
}

# Running the installer
$msiToInstall = "$downloadDirectory\$msiFilename"
Write-Host "Installing $msiToInstall"
$msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i $msiToInstall /quiet" -Wait -PassThru).ExitCode
Write-Output "Server MSI installer returned exit code $msiExitCode"

# Upgrade database and restart service
# OctopusServer is the default instance name. If you have multiple instances, or are not using the default instance name, change the --instance parameter.
& $serverExe database --instance="OctopusServer" --upgrade
& $serverExe service --instance="OctopusServer" --start
& $serverExe node --instance="OctopusServer" --drain=false

Remove-Item "$downloadDirectory\$msiFilename"
```

## Upgrading High Availability Octopus Deploy instances

Automating the upgrade of a Highly Available Octopus Deploy instance requires more than a single script. A degree of coordination is required to update all the nodes. In addition, specific actions (backing up the database, upgrading the database, and enabling/disabling maintenance mode) should only happen once. The recommendation is to use an Octopus Deploy runbook on another instance to upgrade the High Availability instance.
You can get started with a [free Octopus Cloud account](https://octopus.com/free-signup) to do this.

:::figure
![](/docs/img/administration/upgrading/guide/images/upgrade-diagram.png)
:::

Each node will need a Tentacle installed on it. You will need two roles for this to work.

- `HAServer`: All Tentacles will be assigned to this.
- `HAServer-Primary`: This is the server which does the majority of the work (checking for new versions, upgrading the database, etc.).

The same sample script from above will be used, but it will be broken up into steps. For example:

:::figure
![](/docs/img/administration/upgrading/guide/images/automated-upgrade-runbook-process.png)
:::

### Variables

As this is a runbook, you'll want to create variables to share across the various steps.

- `Upgrade.Octopus.Url` - the URL of your instance, for example `https://samples.octopus.app`.
- `Upgrade.Octopus.ApiKey` - the API key of a service account with permissions to turn on and off maintenance mode. Please be sure to make this a sensitive variable!
- `Upgrade.Download.Folder` - The network location of the shared folder to download the MSI to, for example `\\YOUR-NAS\ShareName`
- `Upgrade.Database.Backup.Folder` - The network location of the shared folder to backup the database to, for example `\\YOUR-NAS\DatabaseBackupShare`
- `Upgrade.Database.Name` - The name of your Octopus Deploy database. This is used for the backup script.
- `Upgrade.File.Backup.Folder` - The folder name to store the file backups in. For example `\\YOUR-NAS\FileBackupShare`
- `Upgrade.Octopus.HasNewVersion` - Output variable set in the first step. All other steps will use this variable as a run condition. Example value: `#{unless Octopus.Deployment.Error}#{Octopus.Action[Check for new Octopus Version to Download].Output.UpgradeFound}#{/unless}`
- `Upgrade.Download.Msi` - Output variable storing the MSI name that is being downloaded.
Example value: `#{Octopus.Action[Check for new Octopus Version to Download].Output.MSIToDownload}`

### Process

The upgrade process itself is very similar to upgrading a single node instance. The key difference is the script is broken up into multiple steps.

:::div{.hint}
Aside from step 1, all steps should set a run condition to look at the variable `Upgrade.Octopus.HasNewVersion`

![](/docs/img/administration/upgrading/guide/images/automate-upgrade-variable-run-condition.png)
:::

**1. Check for a new version (HAServer-Primary).**

```powershell
# Get the latest minor/patch version
$url = $OctopusParameters["Upgrade.Octopus.Url"]
$downloadFolder = $OctopusParameters["Upgrade.Download.Folder"]

$upgradeMsiList = Get-ChildItem -Path "$downloadFolder\*" -Include *.msi
if ($upgradeMsiList.Count -gt 0) {
    Set-OctopusVariable -Name "UpgradeFound" -Value $true
    Write-Host "MSIs already exist in the download directory, exiting so they can be installed"
    exit
}

$currentVersion = (Invoke-RestMethod "$Url/api").Version
$versionInfo = Invoke-RestMethod "https://octopus.com/downloads/latest/WindowsX64/OctopusServer/version"
$upgradeVersion = $versionInfo.Version

if ($upgradeVersion -eq $currentVersion) {
    Set-OctopusVariable -Name "UpgradeFound" -Value $false
    Write-Host "No new versions found. Quitting..."
    exit
}

# Download the installer
$msiFilename = "Octopus.$upgradeVersion-x64.msi"
Set-OctopusVariable -Name "MSIToDownload" -Value $msiFilename
$outFile = "$downloadFolder\$msiFilename"

if (Test-Path $outFile) {
    Set-OctopusVariable -Name "UpgradeFound" -Value $true
    Write-Host "The latest version has already been downloaded and is waiting to be installed"
    exit
}

Write-Host "Downloading $msiFilename"
Start-BitsTransfer -Source $versionInfo.DownloadUrl -Destination $outFile
Set-OctopusVariable -Name "UpgradeFound" -Value $true
```

**2. Put the server into maintenance mode (HAServer-Primary).**

```powershell
$url = $OctopusParameters["Upgrade.Octopus.Url"]
$apiKey = $OctopusParameters["Upgrade.Octopus.ApiKey"]

# This is the default install location, but yours could be different
$installPath = "${env:ProgramFiles}\Octopus Deploy\Octopus"

# Check to see if this is a re-run; if all the nodes are stopped then this will fail
try {
    $octopusApi = Invoke-RestMethod -Uri "$url/api"
}
catch {
    Write-Host "Error calling api endpoint for $url, exiting"
    exit
}

# Place Octopus into maintenance mode
if (-not (Invoke-RestMethod -Uri "$url/api/maintenanceconfiguration" -Headers @{'X-Octopus-ApiKey' = $apiKey}).IsInMaintenanceMode) {
    Invoke-RestMethod `
        -Method Put `
        -Uri "$url/api/maintenanceconfiguration" `
        -Headers @{'X-Octopus-ApiKey' = $apiKey} `
        -Body (@{ Id = "maintenance"; IsInMaintenanceMode = $true } | ConvertTo-Json)
}
```

**3. Backup files if major version change (HAServer-Primary).**

```powershell
$url = $OctopusParameters["Upgrade.Octopus.Url"]
$apiKey = $OctopusParameters["Upgrade.Octopus.ApiKey"]
$fileBackupFolder = $OctopusParameters["Upgrade.File.Backup.Folder"]
$msiToDownload = $OctopusParameters["Upgrade.Download.Msi"]

$currentVersion = (Invoke-RestMethod "$Url/api").Version
$versionSplit = $currentVersion -Split "\."
# Splitting the MSI filename puts the major version at index 1, because the name starts with "Octopus." (e.g. Octopus.2023.2.9755-x64.msi)
$upgradeSplit = $msiToDownload -Split "\."
if ($versionSplit[0] -ne $upgradeSplit[1]) {
    Write-Host "Major version upgrade has been detected, backing up all the folders"

    $serverFolders = Invoke-RestMethod -Uri "$url/api/configuration/server-folders/values" -Headers @{'X-Octopus-ApiKey' = $apiKey}

    $msiExitCode = (Start-Process -FilePath "robocopy" -ArgumentList "$($serverFolders.LogsDirectory) $fileBackUpFolder\TaskLogs /mir" -Wait -PassThru).ExitCode
    if ($msiExitCode -ge 8) {
        Throw "Unable to copy files to $fileBackUpFolder\TaskLogs"
    }

    $msiExitCode = (Start-Process -FilePath "robocopy" -ArgumentList "$($serverFolders.ArtifactsDirectory) $fileBackUpFolder\Artifacts /mir" -Wait -PassThru).ExitCode
    if ($msiExitCode -ge 8) {
        Throw "Unable to copy files to $fileBackUpFolder\Artifacts"
    }

    $msiExitCode = (Start-Process -FilePath "robocopy" -ArgumentList "$($serverFolders.PackagesDirectory) $fileBackUpFolder\Packages /mir" -Wait -PassThru).ExitCode
    if ($msiExitCode -ge 8) {
        Throw "Unable to copy files to $fileBackUpFolder\Packages"
    }

    $msiExitCode = (Start-Process -FilePath "robocopy" -ArgumentList "$($serverFolders.EventExportsDirectory) $fileBackUpFolder\EventExports /mir" -Wait -PassThru).ExitCode
    if ($msiExitCode -ge 8) {
        Throw "Unable to copy files to $fileBackUpFolder\EventExports"
    }
}
```

**4. Stop all nodes (HAServer).**

```powershell
# Upgrading the MSI on a single server updates all instances, shut them all down first
$installPath = "${env:ProgramFiles}\Octopus Deploy\Octopus"
$serverExe = "$installPath\Octopus.Server.exe"

$instanceList = (& $serverExe list-instances --format="JSON") | Out-String | ConvertFrom-Json
Write-Host "Found $($instanceList.length) Octopus instances"

foreach ($instance in $instanceList) {
    Write-Host "Stopping $($instance.InstanceName)"

    # Finish any remaining tasks and stop the service
    & $serverExe node --instance="$($instance.InstanceName)" --drain=true --wait=0
    & $serverExe service --instance="$($instance.InstanceName)" --stop
}
```

**5. Backup Database (HAServer-Primary).**

```powershell
$BackupFolderLocation = $OctopusParameters["Upgrade.Database.Backup.Folder"]
$OctopusDatabaseName = $OctopusParameters["Upgrade.Database.Name"]

$installPath = "${env:ProgramFiles}\Octopus Deploy\Octopus"
$serverExe = "$installPath\Octopus.Server.exe"

# Use ${} so PowerShell does not treat the trailing underscore as part of the variable name
$backupFileName = "${OctopusDatabaseName}_" + (Get-Date -Format FileDateTime) + '.bak'
$backupFileFullPath = "$BackupFolderLocation\$backupFileName"

# OctopusServer is the default instance name. If you have multiple instances, or are not using the default instance name, change the --instance parameter.
$instanceConfig = (& $serverExe show-configuration --instance="OctopusServer" --format="JSON") | Out-String | ConvertFrom-Json

$sqlConnection = New-Object System.Data.SqlClient.SqlConnection
$sqlConnection.ConnectionString = $instanceConfig.'Octopus.Storage.ExternalDatabaseConnectionString'

$command = $sqlConnection.CreateCommand()
$command.CommandType = [System.Data.CommandType]'Text'
$command.CommandTimeout = 0

Write-Host "Opening the connection"
$sqlConnection.Open()

Write-Host "Backing up the database"
$command.CommandText = "BACKUP DATABASE [$OctopusDatabaseName] TO DISK = '$backupFileFullPath' WITH FORMAT"
$command.ExecuteNonQuery()
Write-Host "Successfully backed up the database $OctopusDatabaseName"

Write-Host "Closing the connection"
$sqlConnection.Close()
```

**6. Install the MSI (HAServer).**

```powershell
$downloadFolder = $OctopusParameters["Upgrade.Download.Folder"]

$upgradeMsiList = Get-ChildItem -Path "$downloadFolder\*" -Include *.msi
if ($upgradeMsiList.Count -le 0) {
    Write-Host "No MSIs found, exiting step"
    exit
}

$msiToInstall = $upgradeMsiList[0]

# Running the installer
Write-Host "Installing $msiToInstall"
$msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i $msiToInstall /quiet" -Wait -PassThru).ExitCode
Write-Output "Server MSI installer returned exit code $msiExitCode"
```

**7.
Upgrade the database (HAServer-Primary).**

```powershell
$installPath = "${env:ProgramFiles}\Octopus Deploy\Octopus"
$serverExe = "$installPath\Octopus.Server.exe"

# OctopusServer is the default instance name. If you have multiple instances, or are not using the default instance name, change the --instance parameter.
& $serverExe database --instance="OctopusServer" --upgrade
```

As of **2023.2.9755**, Octopus will terminate a database upgrade if running nodes are detected. Sometimes the nodes simply haven't exited cleanly, and it might take a few seconds for Octopus to recognize this. It's possible to use a simple retry loop to handle this automatically.

```powershell
$installPath = "${env:ProgramFiles}\Octopus Deploy\Octopus"
$serverExe = "$installPath\Octopus.Server.exe"

$maxRetry = 3
$retryInterval = 10
$attempts = 0

While ($true) {
    $attempts++
    if ($attempts -gt $maxRetry) {
        Write-Error "Upgrade failed after $maxRetry retries"
        exit 1
    }

    # OctopusServer is the default instance name. If you have multiple instances, or are not using the default instance name, change the --instance parameter.
    & $serverExe database --instance="OctopusServer" --upgrade

    # Stop retrying once the upgrade succeeds
    if ($LastExitCode -eq 0) {
        break
    }

    Write-Warning "Upgrade failed. Retrying in $retryInterval seconds..."
    Start-Sleep -Seconds $retryInterval
}
```

**8. Restart all nodes (HAServer).**

```powershell
# A server could have multiple instances; as we shut them all down earlier, start them all back up
$installPath = "${env:ProgramFiles}\Octopus Deploy\Octopus"
$serverExe = "$installPath\Octopus.Server.exe"

$instanceList = (& $serverExe list-instances --format="JSON") | Out-String | ConvertFrom-Json
Write-Host "Found $($instanceList.length) Octopus instances"

foreach ($instance in $instanceList) {
    Write-Host "Starting $($instance.InstanceName)"

    # Start the service and allow the node to accept tasks again
    & $serverExe service --instance="$($instance.InstanceName)" --start
    & $serverExe node --instance="$($instance.InstanceName)" --drain=false
}
```

### Triggers and notifications

Now that the upgrade process is in a runbook, you can create a trigger to check once a day or week. In addition, you can set up notifications to notify you when a new version is found and when you need to disable maintenance mode. How you use the runbook is up to you.

# Troubleshooting

Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/troubleshooting.md

If your upgrade from **Octopus 2.6** to **Octopus 2018.10 LTS** doesn't go smoothly, this page will help you find a solution. If this page doesn't help, contact support.

## Rolling back {#Troubleshooting-RollingBack}

The **Octopus 2.6** to **Octopus 2018.10 LTS** upgrade is lossless, meaning you shouldn't lose any data as a result of installing the new MSI. Your Raven database and configuration settings are not deleted. If your number one priority is to get up and running again, you can simply run the **Octopus 2.6** MSI again, and the previous version will install over the top of **2018.10 LTS**, allowing you to diagnose the issue at your leisure.

## Hydra log files {#Troubleshooting-HydraLogFiles}

Hydra writes to two log files during its deployment. The first is located in the folder that the Hydra package is unpacked to on deployment.
It's named `upgradelog.log` and will write details about what Hydra is doing during an upgrade. The second log is also in the Hydra package directory. It has a random filename and is used purely as an output for the MSI installation process. Hydra will delete this file if it detects a successful install. If the file is present, this means the installation has probably failed. You can also refer to the Windows Event Log as well as Scheduled Tasks for more information on the installation process. Note that the Scheduled Task will expire after 5 minutes, and the results may no longer be available.

## Common issues {#Troubleshooting-CommonIssues}

This section describes some common upgrade issues and ways to resolve them.

### Tentacle does not upgrade properly

#### Symptoms #1 {#Troubleshooting-Symptoms#1}

The **Octopus 2018.10 LTS** server cannot communicate with one or more Tentacles. You may see an error similar to the following in the Server logs:

```powershell
Halibut.Transport.Protocol.ConnectionInitializationFailedException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
   --- End of inner exception stack trace ---
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
   at System.Net.Security._SslStream.StartFrameBody(Int32 readBytes, Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslStream.Read(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.StreamReader.ReadBuffer()
   at System.IO.StreamReader.ReadLine()
   at Halibut.Transport.Protocol.MessageExchangeStream.ReadRemoteIdentity() in y:\work\7ab39c94136bc5c6\source\Halibut\Transport\Protocol\MessageExchangeStream.cs:line 124
   at Halibut.Transport.Protocol.MessageExchangeStream.ExpectServerIdentity() in y:\work\7ab39c94136bc5c6\source\Halibut\Transport\Protocol\MessageExchangeStream.cs:line 187
   at Halibut.Transport.Protocol.MessageExchangeProtocol.PrepareExchangeAsClient() in y:\work\7ab39c94136bc5c6\source\Halibut\Transport\Protocol\MessageExchangeProtocol.cs:line 41
   --- End of inner exception stack trace ---
   at Halibut.Transport.Protocol.MessageExchangeProtocol.PrepareExchangeAsClient() in y:\work\7ab39c94136bc5c6\source\Halibut\Transport\Protocol\MessageExchangeProtocol.cs:line 51
   at Halibut.HalibutRuntime.<>c__DisplayClass6.b__5(MessageExchangeProtocol protocol) in y:\work\7ab39c94136bc5c6\source\Halibut\HalibutRuntime.cs:line 115
   at Halibut.Transport.SecureClient.ExecuteTransaction(Action`1 protocolHandler) in y:\work\7ab39c94136bc5c6\source\Halibut\Transport\SecureClient.cs:line 60
```

And an error such as the following in the Tentacle logs:

```powershell
2015-07-20 12:04:52.2324 7 ERROR Invalid request System.Net.ProtocolViolationException: Request line should have three parts
   at Pipefish.Transport.SecureTcp.ProtocolParser.ParseRequest(Stream clientStream, Method& method, Uri& uri, RequestHeaders& headers, String& protocol) in y:\work\3cbe05672d69a231\source\Pipefish.Transport.SecureTcp\ProtocolParser.cs:line 50
   at Pipefish.Transport.SecureTcp.Server.SecureTcpServer.ApplyProtocol(AuthorizationResult authorizationResult, EndPoint clientEndPoint, String clientThumbprint, Stream clientStream) in y:\work\3cbe05672d69a231\source\Pipefish.Transport.SecureTcp\Server\SecureTcpServer.cs:line 141
```

#### Solution #1 {#Troubleshooting-Solution#1}

If you see a reference to `Halibut` in the server log and `Pipefish` in the client log, that's an indication that the Tentacle is still using 2.x binaries. The easiest way to fix this is to RDP into the Tentacle machine and click the Reinstall button. This will reset the Tentacle service to make sure it points to the new binaries. Be aware that this will reset the Tentacle service account to run as Local System. If you are using a custom service account, you will have to reconfigure it.

:::figure
![](/docs/img/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/images/3278286.png)
:::

#### Symptoms #2 {#Troubleshooting-Symptoms#2}

The **Octopus 2018.10 LTS** Server cannot communicate with the Tentacle. When investigated, the Windows Service for the Tentacle is pointing at a 2.6 instance of the Octopus Tentacle.

#### Solution #2 {#Troubleshooting-Solution#2}

There are a few potential reasons for this:

- The MSI upgrade failed.
If this is the case, you will be able to find a log file with a random filename in the Hydra package directory on the Tentacle server. It should show reasons for the MSI failure.
- The MSI upgraded the Tentacle, but the Windows Service is pointing at an old version.

If an upgrade succeeded but the Windows Service is still running the 2.6 instance, you will have to click the Reinstall link as per Solution #1 above. If the upgrade itself failed, this can be due to a previous installation of a 2.0 version of the Octopus Tentacle (which was fixed in 2.1). Originally, the MSI installed itself on a per-user basis rather than per-machine. This means that Hydra is unable to uninstall the previous version prior to installing the latest version of Tentacle. In this case, you will have to **log onto your Tentacle machine as the user who first installed the 2.0 version of the Tentacle**. You can then either run `Hydra.exe` directly, or manually uninstall the previous Tentacle and install the latest version of Tentacle.

### I've lost all my NuGet packages

#### Symptoms {#Troubleshooting-Symptoms}

After migration, none of the NuGet packages that were present in the internal feed are available.

#### Solution {#Troubleshooting-Solution}

NuGet packages are not included in the Raven database backup, so they will not be automatically moved to your new server and to the correct location. To move your NuGet packages, follow the [instructions in the Upgrade documentation](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/upgrade-with-a-new-server-instance/#migrate-data-265-2018-10-lts). After moving the files and restarting the service, your packages should be reindexed and available.

# Upgrading from Octopus 3.x to the latest version

Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-3.x-to-modern.md

You should be safe doing an in-place upgrade of 3.x to the latest version of Octopus Deploy.
With that said, the last version of 3.x, 3.17.14, was released on November 12th, 2017. Since then, Octopus has become almost an entirely different product. We did our best to maintain backward compatibility, but there is a risk a hyper-specific scenario was missed or a breaking change was introduced. Here are examples of changes made to Octopus Deploy since 3.17.14 was released.

- The majority of endpoints in the API can accept a `Space-Id`, for example `/api/Spaces-1/projects?skip=0&take=100` whereas before it was `/api/projects?skip=0&take=100`. If a `Space-Id` isn't supplied, the default space is used.
- Teams can be assigned to multiple roles and spaces. Before, a team could be assigned to only one role.
- Unique internal package feed per space. Each space has a subfolder in the `Packages` directory to keep them segregated on the file system. Before, a package would be located at `C:\Octopus\packages\MyPackage.2020.1.1.zip`. Now it is `C:\Octopus\packages\Spaces-1\MyPackage.2020.1.1.zip`
- Almost every table in the database had a `Space-Id` column added to it.
- Workers were introduced.
- Azure Management APIs were deprecated.
- Support for Kubernetes was introduced.
- Terraform support was introduced.
- Raised the [minimum requirements for hosting and using Octopus Server](https://octopus.com/blog/raising-minimum-requirements-for-octopus-server) (both Windows and SQL Server).
- Execution containers running on Docker on workers were introduced.

## Prep work

Before starting the upgrade, it is critical to back up the master key and license key. If anything goes wrong, you might need these keys to do a restore. It is better to have the backup and not need it than need the backup and not have it. The master key doesn't change, while your license key changes, at most, once a year. Back them up once to a secure location and move on to the next steps.

1. Backup the Master Key.
1. Backup the License Key.
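The HA runbook earlier in this guide reads the running version from the root API document and compares the leading version component to decide whether extra backups are needed. The same checks can be sketched in Python; this is a hedged sketch, not official tooling, and it assumes only the `Version` field of the `/api` root document that the PowerShell scripts above already read.

```python
import json
from urllib import request

def server_version(base_url: str) -> str:
    """Read the running server version from the root API document."""
    with request.urlopen(f"{base_url}/api") as resp:  # e.g. https://samples.octopus.app
        return json.load(resp)["Version"]

def is_major_upgrade(current: str, target: str) -> bool:
    """True when the leading version component changes (e.g. 3.x -> 2018.x)."""
    return current.split(".")[0] != target.split(".")[0]
```

A major jump, such as `3.17.14` to a modern release, is exactly the case where the file and database backups described below matter most.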
### Backup the Octopus Master Key

Octopus Deploy uses the Master Key to encrypt and decrypt sensitive values in the Octopus Deploy database. The Master Key is securely stored on the server, not in the database. If the VM hosting Octopus Deploy is somehow destroyed or deleted, the Master Key goes with it.

To view the Master Key, you will need login permissions on the server hosting Octopus Deploy. Once logged in, open up the Octopus Manager and click the view master key button on the left menu.

:::figure
![](/docs/img/shared-content/upgrade/images/view-master-key.png)
:::

Save the Master Key to a secure location, such as a password manager or a secret manager. An alternative means of accessing the Master Key is to run `Octopus.Server.exe show-master-key` from the command line. Please note: you will need to be running as an administrator to do that.

:::figure
![](/docs/img/shared-content/upgrade/images/master-key-command-prompt.png)
:::

### Backup the License Key

Like the Master Key, the License Key is necessary to restore an existing Octopus Deploy instance. You can access the License Key by going to **Configuration ➜ License**. If you cannot access your License Key, please contact our [support team](https://octopus.com/support) and they can help you recover it.

## Standard upgrade process

The standard upgrade process is an in-place upgrade. In-place upgrades update the binaries in the install directory and update the database. The guide below includes additional steps to back up key components to make it easier to roll back in the unlikely event of a failure.

:::div{.problem}
While an in-place upgrade will work, it involves risk as you are upgrading from a version released back in 2017 (or earlier). Please see the risk mitigation sections below for steps on how to mitigate that risk.
:::

### Overview

The steps for this are:

1. Download the latest version of Octopus Deploy.
1. Enable maintenance mode.
1. Backup the database.
1. Do an in-place upgrade.
1. Test the upgraded instance.
1. Disable maintenance mode.

### Downloading the latest version of Octopus Deploy

The [downloads page](https://octopus.com/downloads) will always have the latest version of Octopus Deploy. If company policy dictates you install an older version, for example, the latest version is 2020.4.11, but you can only download 2020.3.x, then visit the [previous downloads page](https://octopus.com/downloads/previous).

### Maintenance mode

Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode go to **Configuration ➜ Maintenance** and click the button `Enable Maintenance Mode`. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`.

### Backup the SQL Server database

Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share.

```
BACKUP DATABASE [OctopusDeploy]
TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
   WITH FORMAT;
```

The `BACKUP DATABASE` T-SQL command has dozens of various options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use.

### Octopus Deploy components

Before performing an in-place upgrade, it is essential to note the various components of Octopus Deploy. Most in-place upgrades will only change the install location and the SQL Server database. Very rarely will an in-place upgrade change the home folder or server folders. The Windows Service is split across multiple folders to make upgrading easy and low risk.

- **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`.
The install location contains the binaries for Octopus Deploy and is updated by the MSI.
- **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location. The `Octopus Manager` invokes those database scripts automatically.
- **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance. The home folder is separate from the install location to make it easier to upgrade, downgrade, or uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The `Octopus Manager` stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade.
- **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until _after_ the Octopus Deploy server is upgraded. Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient.
- **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target.

### Install the newer version of Octopus Deploy

Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`.

### Validation checks

Octopus Deploy will perform validation checks before upgrading the database. These validation checks include (but are not limited to):

- Verify the current license will work with the upgraded version.
- Verify the current version of SQL Server is supported.

If the validation checks fail, don't worry; install the [previously installed version of Octopus Deploy](https://octopus.com/downloads/previous), and you will be back up and running quickly.

### Database upgrades

Each release of Octopus Deploy contains 0 to N database scripts to upgrade the database. The scripts are run in a transaction; when an error occurs, the transaction is rolled back. If a rollback does happen, gather the logs and send them to our [support team](https://octopus.com/support) for troubleshooting. You can install the previous version to get your CI/CD pipeline back up and running.

If you use PaaS to host your Octopus Deploy database, it is recommended to consider scaling up the database prior to the upgrade, especially if the upgrade spans a large version range and will therefore have an increased number of database scripts to run.

### Testing the upgraded instance

It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should:

- Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments.
- Check previous deployments, ensure all the logs and artifacts appear.
- Ensure all the project and tenant images appear.
- Run any custom API scripts to ensure they still work.
- Verify a handful of users can log in, and that their permissions are similar to before.
- Build server integration; ensure all existing build servers can push to the upgraded server.

We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration. If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation.

### Upgrade High Availability

In general, upgrading a highly available instance of Octopus Deploy follows the same steps as a typical in-place upgrade. Download the latest MSI and install that. The key difference is to upgrade only one node first, as this will upgrade the database, then upgrade all the remaining nodes.

:::div{.warning}
Attempting to upgrade all nodes at the same time will most likely lead to deadlocks in the database.
:::

The process should look something like this:

1. Download the latest version of Octopus Deploy.
1. Enable maintenance mode.
1. Stop all the nodes.
1. Backup the database.
1. Select one node to upgrade, wait until finished.
1. Upgrade all remaining nodes.
1. Start all remaining stopped nodes.
1. Test upgraded instance.
1. Disable maintenance mode.

:::div{.warning}
As of **2023.2.9755**, a database upgrade will abort if Octopus detects there are nodes still running. Ensure all nodes are properly shut down and try again.
:::

:::div{.warning}
A small outage window will occur when upgrading a highly available Octopus Deploy instance. The outage window will happen between when you shut down all the nodes and upgrade the first node. The window duration depends on the number of database changes, the size of the database, and compute resources. It is highly recommended to [automate your upgrade process](/docs/administration/upgrading/guide/automate-upgrades) to reduce that outage window.
:::

## Risk mitigation recommended approach - create a cloned instance

The recommended approach to risk mitigation is to create a cloned instance, upgrade that instance, and test out the new functionality with any integrations. From there, you can migrate over to the cloned instance or do an in-place upgrade of your existing instance and use the cloned instance to test future upgrades. This provides the means to test an upgrade without affecting your CI/CD pipeline.

### Overview

Creating a clone of an existing instance involves:

1. Enable maintenance mode on the main instance.
1. Backup the database of the main instance.
1. Disable maintenance mode on the main instance.
1. Restore the backup of the main instance's database as a new database on the desired SQL Server.
1. Download the same version of Octopus Deploy as your main instance.
1. Install that version on a new server and configure it to point to the **cloned/restored** database.
1. Copy all the files from the backed up folders from the source instance.
1. Optional: Disable all deployment targets.
1. Upgrade the cloned instance.
1. Test the cloned instance. Verify all API scripts, CI integrations, and deployments work.
1. If migrating, then migrate over. Otherwise, leave the test instance alone, backup the folders and database, and upgrade the main instance.

### Maintenance mode

Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode go to **Configuration ➜ Maintenance** and click the button `Enable Maintenance Mode`. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`.

### Backup the SQL Server database

Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share.
```
BACKUP DATABASE [OctopusDeploy]
TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
   WITH FORMAT;
```

The `BACKUP DATABASE` T-SQL command has dozens of various options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use.

### Restore backup of database

Use SQL Server Management Studio's (SSMS) built-in restore backup functionality. SSMS provides a wizard to make this process as pain-free as possible. Be sure to consult a DBA or read up on [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-database-to-a-new-location-sql-server?view=sql-server-ver15).

### Downloading the same version of Octopus Deploy

Migrating data from Octopus to a test instance requires both the main instance and test instance to be on the same version. You can find the version you are running by clicking on your name in the top right corner of your Octopus Deploy instance.

:::figure
![](/docs/img/shared-content/upgrade/images/find-current-version.png)
:::

You can find all the previous versions on the [previous versions download page](https://octopus.com/downloads/previous).

### Installing Octopus Deploy

Run the MSI you downloaded to install Octopus Deploy. Once the MSI is finished, the **Octopus Manager** will automatically launch. Follow the wizard, and on the section where you configure the database, select the pre-existing database.

:::figure
![](/docs/img/shared-content/upgrade/images/select-existing-database.png)
:::

Selecting an existing database will ask you to enter the Master Key.

:::figure
![](/docs/img/shared-content/upgrade/images/enter-master-key.png)
:::

Enter the Master Key you backed up earlier, and the manager will verify the connection works. Finish the wizard, keeping an eye on each setting to ensure you match your main instance.
For example, if your main instance uses Active Directory, your cloned instance should also be configured to use Active Directory. After the wizard is finished and the instance is configured, log in to the cloned instance to ensure your credentials still work.

### Copy all the files from the main instance

After the instance has been created, copy all the contents from the following folders.

- _Artifacts_, the default is `C:\Octopus\Artifacts`
- _Packages_, the default is `C:\Octopus\Packages`
- _Tasklogs_, the default is `C:\Octopus\Tasklogs`
- _EventExports_, the default is `C:\Octopus\EventExports`

Failure to copy over files will result in:

- Empty deployment screens
- Missing packages on the internal package feed
- Missing project or tenant images
- Missing archived events
- And more

### Disabling All Targets/Workers/Triggers/Subscriptions - Optional

Cloning an instance includes cloning all certificates. Assuming you are not using polling Tentacles, all the deployments will "just work." That is by design: if the VM hosting Octopus Deploy is lost and you have to restore Octopus Deploy from a backup, everything keeps working. Just working does have a downside, as you might have triggers and other items configured. These items could potentially perform deployments. You can run this SQL script on your cloned instance to disable everything.

```sql
USE [OctopusDeploy]
GO

DELETE FROM OctopusServerNode

IF EXISTS (SELECT null FROM sys.tables WHERE name = 'OctopusServerNodeStatus')
    DELETE FROM OctopusServerNodeStatus

UPDATE Subscription SET IsDisabled = 1
UPDATE Machine SET IsDisabled = 1

IF EXISTS (SELECT null FROM sys.tables WHERE name = 'Worker')
    UPDATE Worker SET IsDisabled = 1

DELETE FROM ExtensionConfiguration WHERE Id in ('authentication-octopusid', 'jira-integration')
```

:::div{.hint}
Remember to replace `OctopusDeploy` with the name of your database.
:::

### Octopus Deploy components

Before performing an in-place upgrade, it is essential to note the various components of Octopus Deploy.
Most in-place upgrades will only change the install location and the SQL Server database. Very rarely will an in-place upgrade change the home folder or server folders. The Windows Service is split across multiple folders to make upgrading easy and low risk.

- **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`. The install location contains the binaries for Octopus Deploy and is updated by the MSI.
- **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location. The `Octopus Manager` invokes those database scripts automatically.
- **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance. The home folder is separate from the install location to make it easier to upgrade, downgrade, or uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The `Octopus Manager` stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade.
- **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until _after_ the Octopus Deploy server is upgraded.
Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient.
- **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target.

### Install the newer version of Octopus Deploy

Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`.

### Validation checks

Octopus Deploy will perform validation checks before upgrading the database. These validation checks include (but are not limited to):

- Verify the current license will work with the upgraded version.
- Verify the current version of SQL Server is supported.

If the validation checks fail, don't worry; install the [previously installed version of Octopus Deploy](https://octopus.com/downloads/previous), and you will be back up and running quickly.

### Database upgrades

Each release of Octopus Deploy contains 0 to N database scripts to upgrade the database. The scripts are run in a transaction; when an error occurs, the transaction is rolled back. If a rollback does happen, gather the logs and send them to our [support team](https://octopus.com/support) for troubleshooting. You can install the previous version to get your CI/CD pipeline back up and running.

If you use PaaS to host your Octopus Deploy database, consider scaling up the database prior to the upgrade, especially if the upgrade spans a large version range and therefore has more database scripts to run.
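If you want to confirm what the database scripts have brought you to, the installation history table (the same table used in the rollback section) records each version installed against the database. A quick sanity check, assuming the default `OctopusDeploy` database name:

```sql
-- List the five most recent Octopus Server versions recorded in this database
USE [OctopusDeploy]
SELECT TOP 5 [Version], [Installed]
FROM [dbo].[OctopusServerInstallationHistory]
ORDER BY [Installed] DESC
```

The newest row should match the version you just installed.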
### Testing the upgraded instance

It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should:

- Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments.
- Check previous deployments, and ensure all the logs and artifacts appear.
- Ensure all the project and tenant images appear.
- Run any custom API scripts to ensure they still work.
- Verify a handful of users can log in, and that their permissions are similar to before.
- Build server integration; ensure all existing build servers can push to the upgraded server.

We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration. If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation.

### Migrating to a new instance

It is possible to run both the old and cloned instances side by side. Both of them can deploy to the same targets (assuming you are not using polling Tentacles). But there are a few items to keep in mind.

- The Octopus Server is tightly coupled with Calamari. Deploying to the same target from both servers will result in Calamari getting upgraded/downgraded a lot.
- The newer Octopus Server will prompt you to upgrade the Tentacles. While running both instances side by side, you will want to avoid this.
- Unless the cloned instance has the same domain name, polling Tentacles will not connect to the cloned instance. A clone of the polling Tentacles might need to be created.
- The thumbprints for certificates and other sensitive items are stored in the Octopus Deploy database. Cloning the database clones those values.
- **You must update the Installation ID on the cloned instance.** Cloning copies the Installation ID from the original, which means both instances will report [telemetry](/docs/security/outbound-requests/telemetry) under the same identifier. This corrupts usage data. See [Creating a test instance](/docs/administration/upgrading/guide/creating-test-instance) for the SQL script to generate a new Installation ID.

### Considerations

As you migrate your instance, here are a few items to consider.

1. Will the new instance's domain name be the same, or will it change? For example, will it change from `https://octopusdeploy.mydomain.com` to `https://octopus.mydomain.com`? If it changes and you are using polling Tentacles, you will need to create new Tentacle instances for the new Octopus Deploy instance.
2. What CI, or build servers, integrate with Octopus Deploy? Do those plug-ins need to be updated? You can find several of the plug-ins on the [downloads page](https://octopus.com/downloads).
3. Do you have any internally developed tools or scripts that invoke the Octopus API? We've done our best to maintain backward compatibility, but there might be some changes.
4. What components do you use the most? What does a testing plan look like?
5. Chances are there are new features and functionality you haven't been exposed to. How will you train people on the new functionality?

If unsure, please [contact us](https://octopus.com/support) to get pointed in the right direction.

### Drift concerns

While it is possible to run two instances side by side, each minute that passes, the two instances will drift further apart. Changes to the deployment process, new packages, and new releases and deployments will happen during this time. If you find yourself needing more than a few days, a week tops, consider setting up a test instance, or use this newly cloned instance as a test instance.
Work out all the kinks on the test instance, then restart the cloning process on a fresh instance.

:::div{.hint}
If you are unsure how long the migration will take, consider setting up a test instance first. Work out all the kinks, then start the cloning process.
:::

### Polling Tentacles

A Polling Tentacle can only connect to one Octopus Deploy instance. It connects via DNS name or IP address. If the new instance's DNS name changes - for example, the old instance was `https://octopusdeploy.mydomain.com` with the new instance set to `https://octopus.mydomain.com` - you'll need to clone each Polling Tentacle instance. Each Polling Tentacle will need to be cloned on each deployment target. To make things easier, we have provided [this script](https://github.com/OctopusDeployLabs/SpaceCloner/blob/master/CloneTentacleInstance.ps1) to help clone a Tentacle instance. That script will look at the source instance, determine the roles, environments, and tenants, then create a cloned Tentacle and register that cloned Tentacle with your cloned instance.

:::div{.hint}
Any script that clones a Tentacle instance must be run on the deployment target. It cannot be run on your development machine.
:::

### Executing the cutover

Cutting over from the old instance to the new instance will require a bit of downtime and should be done off-hours.

1. Enable maintenance mode on the old instance to put it into read-only mode.
1. Ensure all CI servers are pointing to the new instance (or change DNS).
1. You don't have to upgrade Tentacles right away. Newer versions of Octopus Deploy [can communicate with older versions of Tentacles](/docs/support/compatibility). You can upgrade a set at a time instead of upgrading everything; starting in 2020.x, you can perform a search on the deployment target page and update only the returned Tentacles. Or, you can [upgrade Tentacles per environment](https://www.youtube.com/watch?v=KVxdSdYAqQU&t=352s).
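For reference, cloning a polling Tentacle by hand uses the standard Tentacle command line rather than the script above. The sketch below is illustrative only - the instance name, paths, API key, server URL, environment, and role are placeholders, and you should confirm the exact flags against the Tentacle documentation for your version:

```
Tentacle.exe create-instance --instance "TentacleClone" --config "C:\Octopus\TentacleClone\Tentacle.config"
Tentacle.exe new-certificate --instance "TentacleClone"
Tentacle.exe configure --instance "TentacleClone" --app "C:\Octopus\Applications\TentacleClone" --noListen "True"
Tentacle.exe register-with --instance "TentacleClone" --server "https://octopus.mydomain.com" --apiKey "API-XXXXXXXXXXXX" --comms-style "TentacleActive" --server-comms-port "10943" --environment "Production" --role "web-server"
Tentacle.exe service --instance "TentacleClone" --install --start
```

As with the provided script, these commands must be run on the deployment target itself, not on your development machine.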
### Backup the server folders

The server folders store large binary data outside of the database. By default, the location is `C:\Octopus`. If you have High Availability configured, they will likely be stored on a NAS or some other file share.

- **Packages**: The default location is `C:\Octopus\Packages\`. It stores all the packages in the internal feed.
- **Artifacts**: The default location is `C:\Octopus\Artifacts`. It stores all the artifacts collected during a deployment along with project images.
- **Tasklogs**: The default location is `C:\Octopus\Tasklogs`. It stores all the deployment logs.
- **EventExports**: The default location is `C:\Octopus\EventExports`. It stores all the exported event audit logs.

Any standard file-backup tool will work, even [RoboCopy](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy). Very rarely will an upgrade change these folders. The release notes will indicate if these folders are going to be modified.

### In-place upgrade of the main instance

Upgrading your main Octopus Deploy instance should follow the same steps you did with the test or cloned instance. Don't forget to run your backups!

## Risk mitigation alternative approach - create a test instance

Creating and migrating to a cloned instance can be quite a bit of work. You have to worry about drift and getting new compute resources allocated. An alternative approach to the cloned instance is creating a test instance with only a handful of projects. Test the upgrade with that test instance and then do the upgrade of your main instance.

### Overview

The steps for this are:

1. Download the same version of Octopus Deploy as your main instance.
1. Install Octopus Deploy on a new VM.
1. Export a subset of projects from the main instance.
1. Import that subset of projects to the test instance.
1. Download the latest version of Octopus Deploy.
1. Backup the test instance database.
1.
Upgrade that test instance to the latest version of Octopus Deploy.
1. Test and verify the test instance.
1. Enable maintenance mode on the main instance.
1. Backup the database on the main instance.
1. Backup all the folders on the main instance.
1. Do an in-place upgrade of your main instance.
1. Test the upgraded main instance.
1. Disable maintenance mode.

### Downloading the same version of Octopus Deploy

Migrating data from Octopus to a test instance requires both the main instance and test instance to be on the same version. You can find the version you are running by clicking on your name in the top right corner of your Octopus Deploy instance.

:::figure
![](/docs/img/shared-content/upgrade/images/find-current-version.png)
:::

You can find all the previous versions on the [previous versions download page](https://octopus.com/downloads/previous).

### Installing Octopus Deploy

Run the MSI you downloaded to install Octopus Deploy. After you install Octopus Deploy, the Octopus Manager will automatically launch. Follow the wizard. A few notes:

1. You can reuse your same license key on up to three unique instances of Octopus Deploy. We determine uniqueness based on the database it connects to. If you are going to exceed the three-instance limit, please [contact us](https://octopus.com/support) to discuss your options.
1. Create a new database for this test instance. Restoring a backup will cause Octopus to treat this as a cloned instance, with the same targets, certificates, and keys.
1. Run the test instance database on the same version of SQL Server as the main instance. Only deviate when you plan on upgrading SQL Server.

### Export/import subset of projects using export/import projects feature

The Export/Import Projects feature added in **Octopus Deploy 2021.1** can be used to export/import projects to a test instance. Please see the up-to-date [documentation](/docs/projects/export-import) to see what is included.
### Export subset of projects using the data migration tool

All versions of Octopus Deploy since 3.x have included a [data migration tool](/docs/administration/data/data-migration/). The Octopus Manager only allows for the migration of all the data; we only need a subset. Use the [partial export](/docs/octopus-rest-api/octopus.migrator.exe-command-line/partial-export) command-line option to export a subset of projects. Run this command on the main, or production, instance for each project you wish to export. Create a new folder per project:

```
Octopus.Migrator.exe partial-export --instance=OctopusServer --project=AcmeWebStore --password=5uper5ecret --directory=C:\Temp\AcmeWebStore --ignore-history --ignore-deployments --ignore-machines
```

:::div{.hint}
This command ignores all deployment targets to prevent your test instance and your main instance from deploying to the same targets.
:::

### Import subset of projects using the data migration tool

The data migration tool also includes [import functionality](/docs/octopus-rest-api/octopus.migrator.exe-command-line/import). First, copy all the project folders from the main instance to the test instance. Then run this command for each project:

```
Octopus.Migrator.exe import --instance=OctopusServer --password=5uper5ecret --directory=C:\Temp\AcmeWebStore
```

### Downloading the latest version of Octopus Deploy

The [downloads page](https://octopus.com/downloads) will always have the latest version of Octopus Deploy. If company policy dictates you install an older version (for example, the latest version is 2020.4.11, but you can only download 2020.3.x), then visit the [previous downloads page](https://octopus.com/downloads/previous).

### Maintenance mode

Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode, go to **Configuration ➜ Maintenance** and click `Enable Maintenance Mode`.
To disable maintenance mode, go back to the same page and click `Disable Maintenance Mode`.

### Backup the SQL Server database

Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share.

```sql
BACKUP DATABASE [OctopusDeploy]
TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
WITH FORMAT;
```

The `BACKUP DATABASE` T-SQL command has dozens of options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use.

### Octopus Deploy components

Before performing an in-place upgrade, it is essential to note the various components of Octopus Deploy. Most in-place upgrades will only change the install location and the SQL Server database. Very rarely will an in-place upgrade change the home folder or server folders. The Windows Service is split across multiple folders to make upgrading easy and low risk.

- **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`. The install location contains the binaries for Octopus Deploy and is updated by the MSI.
- **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location. The `Octopus Manager` invokes those database scripts automatically.
- **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance. The home folder is separate from the install location to make it easier to upgrade, downgrade, or uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The `Octopus Manager` stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade.
- **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until _after_ the Octopus Deploy server is upgraded. Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient.
- **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target.

### Install the newer version of Octopus Deploy

Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`.

### Validation checks

Octopus Deploy will perform validation checks before upgrading the database. These validation checks include (but are not limited to):

- Verify the current license will work with the upgraded version.
- Verify the current version of SQL Server is supported.
If the validation checks fail, don't worry; install the [previously installed version of Octopus Deploy](https://octopus.com/downloads/previous), and you will be back up and running quickly.

### Database upgrades

Each release of Octopus Deploy contains 0 to N database scripts to upgrade the database. The scripts are run in a transaction; when an error occurs, the transaction is rolled back. If a rollback does happen, gather the logs and send them to our [support team](https://octopus.com/support) for troubleshooting. You can install the previous version to get your CI/CD pipeline back up and running.

If you use PaaS to host your Octopus Deploy database, consider scaling up the database prior to the upgrade, especially if the upgrade spans a large version range and therefore has more database scripts to run.

### Testing the upgraded instance

It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should:

- Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments.
- Check previous deployments, and ensure all the logs and artifacts appear.
- Ensure all the project and tenant images appear.
- Run any custom API scripts to ensure they still work.
- Verify a handful of users can log in, and that their permissions are similar to before.
- Build server integration; ensure all existing build servers can push to the upgraded server.

We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration. If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation.
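A simple way to begin the API verification is to request the `/api` root endpoint, which returns a JSON document that includes the server version. A sketch using curl - the hostname and API key below are placeholders you must replace:

```
curl -s -H "X-Octopus-ApiKey: API-XXXXXXXXXXXX" https://octopus.mydomain.com/api
```

If the `Version` field in the response matches the version you just installed, the API is up, and your custom scripts can be run against the same base URL.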
### Maintenance mode

Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode, go to **Configuration ➜ Maintenance** and click `Enable Maintenance Mode`. To disable maintenance mode, go back to the same page and click `Disable Maintenance Mode`.

### Backup the SQL Server database

Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share.

```sql
BACKUP DATABASE [OctopusDeploy]
TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
WITH FORMAT;
```

The `BACKUP DATABASE` T-SQL command has dozens of options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use.

### Backup the server folders

The server folders store large binary data outside of the database. By default, the location is `C:\Octopus`. If you have High Availability configured, they will likely be stored on a NAS or some other file share.

- **Packages**: The default location is `C:\Octopus\Packages\`. It stores all the packages in the internal feed.
- **Artifacts**: The default location is `C:\Octopus\Artifacts`. It stores all the artifacts collected during a deployment along with project images.
- **Tasklogs**: The default location is `C:\Octopus\Tasklogs`. It stores all the deployment logs.
- **EventExports**: The default location is `C:\Octopus\EventExports`. It stores all the exported event audit logs.

Any standard file-backup tool will work, even [RoboCopy](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy). Very rarely will an upgrade change these folders. The release notes will indicate if these folders are going to be modified.
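As an example, a RoboCopy-based backup of the default folder locations might look like the following. The `\\BackupServer\OctopusBackup` share is a placeholder; note that `/MIR` mirrors the source, which also deletes files in the destination that no longer exist in the source:

```
robocopy C:\Octopus\Packages \\BackupServer\OctopusBackup\Packages /MIR
robocopy C:\Octopus\Artifacts \\BackupServer\OctopusBackup\Artifacts /MIR
robocopy C:\Octopus\Tasklogs \\BackupServer\OctopusBackup\Tasklogs /MIR
robocopy C:\Octopus\EventExports \\BackupServer\OctopusBackup\EventExports /MIR
```

If you would rather keep deleted files in the backup, use a copy switch such as `/E` (copy subdirectories, including empty ones) instead of `/MIR`.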
### In-place upgrade of the main instance

Upgrading your main Octopus Deploy instance should follow the same steps you did with the test or cloned instance. Don't forget to run your backups!

### Upgrade High Availability

In general, upgrading a highly available instance of Octopus Deploy follows the same steps as a typical in-place upgrade. Download the latest MSI and install that. The key difference is to upgrade only one node first, as this will upgrade the database, then upgrade all the remaining nodes.

:::div{.warning}
Attempting to upgrade all nodes at the same time will most likely lead to deadlocks in the database.
:::

The process should look something like this:

1. Download the latest version of Octopus Deploy.
1. Enable maintenance mode.
1. Stop all the nodes.
1. Back up the database.
1. Select one node to upgrade, and wait until it finishes.
1. Upgrade all remaining nodes.
1. Start all remaining stopped nodes.
1. Test the upgraded instance.
1. Disable maintenance mode.

:::div{.warning}
As of **2023.2.9755**, a database upgrade will abort if Octopus detects there are nodes still running. Ensure all nodes are properly shut down and try again.
:::

:::div{.warning}
A small outage window will occur when upgrading a highly available Octopus Deploy instance. The outage window will happen between when you shut down all the nodes and upgrade the first node. The window duration depends on the number of database changes, the size of the database, and compute resources. It is highly recommended to [automate your upgrade process](/docs/administration/upgrading/guide/automate-upgrades) to reduce that outage window.
:::

## Rollback failed upgrade

While unlikely, an upgrade may fail. It could fail on a database upgrade script, a SQL Server version that is no longer supported, license check validation, or plain old bad luck. Depending on what failed, you have a decision to make. If the cloned instance upgrade failed, it might make sense to start all over again.
Or, it might make sense to roll back to a previous version. In either case, if you decide to roll back, the process will be:

1. Restore the database backup.
1. Restore the folders.
1. Download and install the previously installed version of Octopus Deploy.
1. Do some sanity checks.
1. If maintenance mode is enabled, disable it.

### Restore backup of database

Use SQL Server Management Studio's (SSMS) built-in restore backup functionality. SSMS provides a wizard to make this process as pain-free as possible. Be sure to consult a DBA or read up on [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-database-to-a-new-location-sql-server?view=sql-server-ver15).

### Restore Octopus folders

Octopus Deploy expects the artifacts, packages, tasklog, and event export folders to be in a specific format. The best chance of success is to:

1. Copy the existing folders to a safe location.
2. Delete the contents of the existing folders.
3. Copy the contents of the existing folders from the backup.
4. Once the rollback is complete, delete the copy from the first step.

### Find and download the previous version of Octopus Deploy

Octopus Deploy stores the installation history in the database. Run this query on your Octopus Deploy database if you are unsure which version to download:

```sql
SELECT TOP 5 [Version]
FROM [dbo].[OctopusServerInstallationHistory]
ORDER BY Installed DESC
```

When you know the version to install, go to the [previous downloads page](https://octopus.com/downloads/previous).

### Installing the previous version

The key configuration items, such as the connection string, files, and instance information, are not stored in the install directory of Octopus Deploy. To install the previous version, first uninstall Octopus Deploy. Uninstalling will only delete items from the install directory, `C:\Program Files\Octopus Deploy\Octopus`. Then run the MSI to install the previous version.
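The uninstall/reinstall pair can be scripted with standard msiexec switches (`/x` uninstalls, `/i` installs, `/qn` suppresses the UI). The MSI file names below are placeholders for whichever versions you are moving between:

```
msiexec /x Octopus.2020.4.11-x64.msi /qn
msiexec /i Octopus.2020.3.6-x64.msi /qn
```

Run these from an elevated command prompt; the silent switches are convenient when the rollback is part of an automated runbook.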
## Additional items to note

Earlier 3.x versions, including 3.1, 3.4, and 3.5, carry some additional caveats to note. Before upgrading to a modern version of Octopus Deploy, please keep these in mind.

### Upgrading to Octopus 3.1 or greater

Summary: Tentacle was upgraded from .NET 4.0 to .NET 4.5 to enable TLS 1.2.

:::div{.success}
**You can upgrade to Octopus Server 3.1 without upgrading any Tentacles and get the new 3.1 deployment features because Calamari will continue to work on both Tentacle 3.0 and 3.1.**
:::

This is the first modern version of Octopus Server where there has been a Tentacle upgrade, and it has caused some confusion. This section aims to answer some of the most commonly asked questions about upgrading to Octopus 3.1 and the impact on Tentacles.

**Am I required to upgrade to Tentacle 3.1?**

No, you aren't required to upgrade to Tentacle 3.1. Tentacle 3.0 will still work and benefit from Calamari's latest version and the deployment features we shipped in **Octopus 3.1**.

**What changed with Tentacle 3.1?**

The Octopus-Tentacle communication protocol in 3.1 can use TLS 1.2, which requires .NET 4.5 to be installed on the server.

**When should I upgrade to Tentacle 3.1?**

We recommend upgrading to Tentacle 3.1 as soon as you are able. Upgrading Tentacles in **Octopus 3.1** is automated and can be done through the Environments page. The main benefit you'll get is the Octopus-Tentacle communication protocol can use TLS 1.2.

**What would stop me from upgrading to Tentacle 3.1?**

[Your server needs to support .NET 4.5](https://msdn.microsoft.com/en-us/library/8z6watww%28v=vs.110%29.aspx). Tentacle 3.1 requires .NET 4.5 to be installed on the server, which is what enables TLS 1.2 support, and .NET 4.5 is supported on Windows Server 2008 SP2 or newer. This means Windows Server 2003 and Windows Server 2008 SP1 are not supported for Octopus Server or Tentacle 3.1.
**How can I make Octopus/Tentacle use TLS 1.2 instead of TLS 1.0?**

Octopus Server and Tentacle 3.1 will use TLS 1.2 by default. **Tentacle 3.0** will still work with **Octopus 3.1**, but the communication protocol will fall back to the lowest-common-denominator of TLS 1.0.

**Can I have a mixture of Tentacle 3.0 and 3.1? I'm not ready to upgrade some of my application servers.**

Yes, you can have a mixture of **Tentacle 3.0** and **3.1** working happily with **Octopus 3.1**. We have committed to maintaining compatibility with the communication protocol.

**If I keep running Tentacle 3.0 does that mean I won't get any of the new Octopus 3.1 deployment features?**

The deployment features are handled by Calamari, and Octopus Server makes sure all Tentacles have the latest Calamari. This means servers hosting **Tentacle 3.0** or **3.1** will get the new deployment features we shipped with **Octopus 3.1** by means of the latest Calamari.

**Will you continue to support Windows Server 2003 or Windows Server 2008 SP1?**

No, from **Octopus 3.1** onward, we are dropping official support for Octopus Server and Tentacle hosted on Windows Server 2003 or Windows Server 2008 SP1.

:::div{.hint}
**Tentacle communications protocol**
Read more about the [Octopus - Tentacle communication](/docs/security/octopus-tentacle-communication/) protocol and [Troubleshooting Schannel and TLS](/docs/security/octopus-tentacle-communication/troubleshooting-schannel-and-tls).
:::

### Upgrading to Octopus 3.4 or greater

See the [Release Notes](https://octopus.com/downloads/compare?from=3.3.27&to=3.4.0) for breaking changes and more information.

**Using TeamCity NuGet feeds?** You will need to upgrade your TeamCity server to v9.0 or newer and [enable the NuGet v2 API](https://teamcity-support.jetbrains.com/hc/en-us/community/posts/206817105-How-to-enable-NuGet-feed-v2). **Octopus 3.4**+ no longer supports the custom NuGet v1 feeds from TeamCity 7.x-8.x.
We recommend upgrading to the latest TeamCity version available due to continual improvements in their NuGet feed - or switch to using the [Octopus built-in repository](/docs/packaging-applications/package-repositories).

**Want to use SemVer 2 for packages or releases?**

You will need to upgrade OctoPack and/or Octopus CLI to 3.4 or newer.

### Upgrading to Octopus 3.5 or greater

Some server configuration values were moved from the config file into the database in 3.5+. If you are upgrading to a 3.5+ version, please back up your server config file prior to upgrading. If you need to downgrade, then replace the config with the original file after the downgrade and restart the Octopus Server.

## Troubleshooting

In a few cases, a bug in a 3rd party component causes the installer to display an "Installation directory must be on a local hard drive" error. If this occurs, run the installer again from an elevated command prompt using the following command (replacing `Octopus.3.3.4-x64.msi` with the name of the installer you are using):

```
msiexec /i Octopus.3.3.4-x64.msi WIXUI_DONTVALIDATEPATH="1"
```

# Managing space resources

Source: https://octopus.com/docs/best-practices/platform-engineering/managing-space-resources.md

[Serializing and deploying space level resources](https://www.youtube.com/watch?v=Hw4lnG7SqO8)

Octopus is conceptually split into two types of resources:

1. Space level resources such as environments, feeds, accounts, lifecycles, certificates, workers, worker pools, and variable sets
2. Project level resources such as the projects themselves, the project deployment process, runbooks, project variables, and project triggers

Space level resources are shared by projects and do not tend to change as frequently as projects. Managed, or downstream, spaces (i.e.
spaces with centrally managed resources) are implemented by deploying space and project level resources as separate processes:

- Space level resources are deployed first to support one or more projects
- Project level resources are deployed second referencing the space level resources

Space level resources are best managed with the [Octopus Terraform provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest/docs).

:::div{.hint}
[Config-as-code](/docs/projects/version-control) only supports persisting a subset of project settings in a Git repository, and can not be used to define space level resources.
:::

Space level resources can be defined in a Terraform module in two ways:

- Write the module by hand
- Serialize an existing space to a Terraform module with [octoterra](https://github.com/OctopusSolutionsEngineering/OctopusTerraformExport)

## Writing by hand

You can write a Terraform module that manages Octopus space level resources by hand if you wish to do so. The Terraform provider source code contains a [suite of tests](https://github.com/OctopusDeployLabs/terraform-provider-octopusdeploy/tree/main/terraform) that can be used as examples for creating your own Terraform module.

## Serializing with octoterra

The second approach is to create a management, or upstream, space using the Octopus UI and then export the space to a Terraform module with [octoterra](https://github.com/OctopusSolutionsEngineering/OctopusTerraformExport). This allows you to rely on the UI for convenience and validation and then serialize the space to a Terraform module.

:::div{.hint}
You are free to edit the Terraform module created by octoterra as you see fit once it is exported.
:::

Octopus includes a number of steps to help you serialize a space with octoterra and apply the module to a new space.

:::div{.hint}
The steps documented below are best run on the `Hosted Ubuntu` worker pools for Octopus Cloud customers.
:::

### Exporting space level resources

The following process serializes a space to a Terraform module:

1. Create a project with a runbook called `__ 1. Serialize Space`. Runbooks with the prefix `__ ` (two underscores and a space) are automatically excluded when exporting projects, so this is a pattern we use to indicate runbooks that are involved in serializing Octopus resources but are not to be included in the exported module.
2. Add the `Octopus - Serialize Space to Terraform` step from the [community step template library](/docs/projects/community-step-templates).
   1. Set the `Terraform Backend` field to the [backend](https://developer.hashicorp.com/terraform/language/settings/backends/configuration) configured in the exported module. The step defaults to `s3`, which uses an S3 bucket to store Terraform state. However, any backend provider can be defined here.
   2. Set the `Octopus Server URL` field to the URL of the Octopus server to export a space from. The default value of `#{Octopus.Web.ServerUri}` references the URL of the current Octopus instance.
   3. Set the `Octopus API Key` field to the [API key](/docs/octopus-rest-api/how-to-create-an-api-key) used when accessing the instance defined in the `Octopus Server URL` field.
   4. Set the `Octopus Space ID` field to the ID of the space to be exported. The default value of `#{Octopus.Space.Id}` references the current space.
   5. Set the `Octopus Upload Space ID` field to the ID of another space to upload the resulting Terraform module zip file to the built-in feed of that space. Leave this field blank to upload the zip file to the built-in feed of the current space.
   6. Set the `Ignored Variables Sets` field to a comma separated list of variable sets to exclude from the Terraform module. Typically, this field is used when the values of the previous fields were sourced from a variable set that should not be exported.
   7. Set the `Ignored Tenants` field to a comma separated list of tenants to exclude from the Terraform module. Typically, this is used to exclude tenants that are used to run this export step but do not make sense to reimport in a new space.
   8. Tick the `Ignore All Targets` option to exclude all [targets](/docs/infrastructure/deployment-targets) from the exported Terraform module. Targets are typically space specific and should not be shared between spaces.
   9. Tick the `Default Secrets to Dummy Values` option to set all secret values, such as account and feed passwords, to dummy values. This setting allows you to apply the resulting Terraform module without specifying any secret values, after which you can manually update the values in the new space as needed. If this value is not ticked, the resulting Terraform module exposes Terraform variables for every Octopus secret, and you must supply the secret values when applying the Terraform module.
   10. Set the `Ignore Tenants with Tag` field to a tag, in the format `tag-set/tag-name`, which when applied to a tenant results in the tenant being excluded from the export. This is similar to the `Ignored Tenants` field, but allows you to ignore tenants based on their tags rather than by name.

Executing the runbook will:

- Export space level resources (i.e. everything but projects) to a Terraform module
- Zip the resulting Terraform module files into a package named after the current space
- Upload the zip file to the built-in feed of the current space, or the space defined in the `Octopus Upload Space ID` field

The package has two directories:

- `space_creation`, which contains a Terraform module to create a new space
- `space_population`, which contains a Terraform module to populate a space with the exported resources.

:::div{.hint}
Many of the exported resources expose values, like resource names, as Terraform variables with default values. You can override these variables when applying the module to customize the resources, or leave the Terraform variables with their default value to recreate the resources with their original names.
:::

### Importing space level resources

The following process creates and populates a space with the Terraform module exported using the process documented in the previous section:

1. Create a project with a runbook called `__ 2. Deploy Space`. Runbooks with the prefix `__ ` (two underscores and a space) are automatically excluded when exporting projects, so this is a pattern we use to indicate runbooks that are involved in serializing Octopus resources but are not to be included in the exported module.
2. Add one of the steps called `Octopus - Create Octoterra Space` from the [community step template library](/docs/projects/community-step-templates). Each step indicates the Terraform backend it supports. For example, the `Octopus - Create Octoterra Space (S3 Backend)` step configures an S3 Terraform backend.
   1. Configure the step to run on a worker with a recent version of Terraform installed, or use the `octopuslabs/terraform-workertools` [container image](/docs/projects/steps/execution-containers-for-workers).
   2. Set the `Octopus Space Name` field to the name of the new space. The default value of `#{Octopus.Deployment.Tenant.Name}` assumes the step is run against a tenant, and the name of the tenant is the name of the new space.
   3. Set the `Octopus Space Managers` field to a comma separated list of [team](/docs/security/users-and-teams) IDs to assign as space managers. Built-in teams like `Octopus Administrator` have named IDs like `teams-administrators`. Custom teams have IDs like `Teams-15`.
   4. Set the `Terraform Workspace` field to a [workspace](https://developer.hashicorp.com/terraform/language/state/workspaces) that tracks the new space.
The default value of `#{OctoterraApply.Octopus.Space.NewName | Replace "[^A-Za-z0-9]" "_"}` creates a workspace name based on the space name with all non-alphanumeric characters replaced with an underscore. Leave the default value unless you have a specific reason to change it.
   5. Select the package created by the export process in the previous section in the `Terraform Module Package` field. The package name is the same as the exported space name, with all non-alphanumeric characters replaced with an underscore.
   6. Set the `Octopus Server URL` field to the URL of the Octopus server to create the new space in. The default value of `#{Octopus.Web.ServerUri}` references the URL of the current Octopus instance.
   7. Set the `Octopus API Key` field to the [API key](/docs/octopus-rest-api/how-to-create-an-api-key) used when accessing the instance defined in the `Octopus Server URL` field.
   8. Set the `Terraform Additional Apply Params` field to a list of additional arguments to pass to the `terraform apply` command. This field is typically used to define the value of any Terraform variables. However, there are no variables that need to be defined when creating a space, so leave this field blank unless you have a specific reason to pass an argument to Terraform.
   9. Set the `Terraform Additional Init Params` field to a list of additional arguments to pass to the `terraform init` command. Leave this field blank unless you have a specific reason to pass an argument to Terraform.
   10. Each `Octopus - Create Octoterra Space` step exposes values relating to its specific Terraform backend that must be configured. For example, the `Octopus - Create Octoterra Space (S3 Backend)` step exposes fields to configure the S3 bucket, key, and region where the Terraform state is saved. Other steps have similar fields.
3. Add one of the steps called `Octopus - Populate Octoterra Space` from the [community step template library](/docs/projects/community-step-templates).
Each step indicates the Terraform backend it supports. For example, the `Octopus - Populate Octoterra Space (S3 Backend)` step configures an S3 Terraform backend.
   1. Configure the step to run on a worker with a recent version of Terraform installed, or use the `octopuslabs/terraform-workertools` container image.
   2. Set the `Terraform Workspace` field to a [workspace](https://developer.hashicorp.com/terraform/language/state/workspaces) that tracks the new space. The default value of `#{OctoterraApply.Octopus.SpaceID}` creates a workspace name based on the ID of the space that is being populated. Leave the default value unless you have a specific reason to change it.
   3. Select the package created by the export process in the previous section in the `Terraform Module Package` field. The package name is the same as the exported space name, with all non-alphanumeric characters replaced with an underscore.
   4. Set the `Octopus Server URL` field to the URL of the Octopus server to create the new space in. The default value of `#{Octopus.Web.ServerUri}` references the URL of the current Octopus instance.
   5. Set the `Octopus API Key` field to the [API key](/docs/octopus-rest-api/how-to-create-an-api-key) used when accessing the instance defined in the `Octopus Server URL` field.
   6. Set the `Octopus Space ID` field to the ID of the space created by the previous step. The ID is an output variable that can be accessed with an [octostache template](/docs/projects/variables/variable-substitutions) like `#{Octopus.Action[Octopus - Create Octoterra Space (S3 Backend)].Output.TerraformValueOutputs[octopus_space_id]}`. Note that the name of the previous step may need to be changed from `Octopus - Create Octoterra Space (S3 Backend)` if your step has a different name.
   7. Set the `Terraform Additional Apply Params` field to a list of additional arguments to pass to the `terraform apply` command. This field is typically used to define the value of secrets such as account or feed passwords, e.g. `-var=account_aws_account=TheAwsSecretKey`.
   8. Set the `Terraform Additional Init Params` field to a list of additional arguments to pass to the `terraform init` command. Leave this field blank unless you have a specific reason to pass an argument to Terraform.
   9. Each `Octopus - Populate Octoterra Space` step exposes values relating to its specific Terraform backend that must be configured. For example, the `Octopus - Populate Octoterra Space (S3 Backend)` step exposes fields to configure the S3 bucket, key, and region where the Terraform state is saved. Other steps have similar fields.

Executing the runbook will create a new space and populate it with the space level resources defined in the Terraform module zip file created in the previous section.

Typically, downstream spaces are represented by tenants in the upstream space. For example, the space called `Acme` is represented by a tenant with the same name. Configuring the `__ 2. Deploy Space` runbook to run against a tenant allows you to manage the creation and updates of downstream spaces with a typical tenant based deployment process. This is why the `Octopus - Create Octoterra Space` step defaults the `Octopus Space Name` field to the name of the current tenant.

:::div{.hint}
If you ticked the `Default Secrets to Dummy Values` option when exporting a space, all resources with secret values like accounts, feeds, certificates, variable sets, and git credentials will have dummy values set for the passwords or secret values. You must manually update these values after the new space has been created to allow deployments and runbooks to work correctly.
:::

### Updating space level resources

The runbooks `__ 1. Serialize Space` and `__ 2. Deploy Space` can be run as needed to serialize any changes to the upstream space and deploy the changes to downstream spaces.
The Terraform module zip file pushed to the built-in feed is versioned with a unique value each time, so you can also revert changes by redeploying an older package. In this way you can use Octopus to deploy Octopus spaces using the same processes you use to deploy applications.

# Immutable Infrastructure

Source: https://octopus.com/docs/deployments/patterns/elastic-and-transient-environments/immutable-infrastructure.md

This guide assumes familiarity with Octopus Deploy. If you don't already know how to set up projects, install Tentacles and configure basic deployment processes it may be helpful to review the [Getting Started pages](/docs/getting-started/) before beginning this guide. Familiarity with the concepts in [Elastic and Transient Environments](/docs/deployments/patterns/elastic-and-transient-environments) would be an added bonus.

The features in [Elastic and Transient Environments](/docs/deployments/patterns/elastic-and-transient-environments) make it easier to deploy infrastructure in addition to applications. This guide focuses on deploying immutable infrastructure.

Traditionally the infrastructure that hosts applications is mutable: it is constantly changing. The changes that infrastructure could experience include things like new firewall rules, operating system updates and patches to your own deployed applications. Immutable infrastructure, as the name suggests, does not change after the initial configuration. In order to apply changes, a new version of the infrastructure is provisioned and the old infrastructure is terminated:

:::figure
![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865664.png)
:::

In this example we will create an infrastructure project and an application project. The infrastructure project will provision new Tentacles and terminate the old ones. The application project gets deployed to the Tentacles.
We will then automate deploying our application to brand new infrastructure with each release.

## Machine policy {#machine-policy}

The Tentacles provisioned in this guide belong to the **Immutable Infrastructure** machine policy. For now, create a new machine policy called **Immutable Infrastructure** and leave all settings at their default value.

## Application project {#application-project}

For this demonstration, let's create a project called **Hello World** that will run a script echoing "Hello World" to each of our Tentacles. In practice, this would be the project that deploys your application to the Tentacles.

1. Create a project called **Hello World**.
2. Add a script step that outputs "Hello World" on each Tentacle.

## Infrastructure project {#infrastructure-project}

The infrastructure project runs a script that provisions two new Tentacles and removes any old Tentacles in the environment we are deploying to. In practice this project would create your new infrastructure, add it to your load balancer and terminate your old infrastructure.

1. Download the [HelloWorldInfrastructure.1.0.0.0.zip](/docs/attachments/helloworldinfrastructure.1.0.0.0.zip) package that contains the scripts that run in this project and make any modifications required by your Octopus installation.
2. Upload the package to your Octopus package feed.
3. Install Tentacle on the same machine as your Octopus Server (there is no need to configure a Tentacle instance).
4. Create a project called **Hello World Infrastructure**.
5. Add a step that runs the script called **Provision.ps1** from the package **HelloWorldInfrastructure** on the Octopus Server.
6. Add a step that performs a health check, excluding unavailable machines from the deployment: ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865670.png)
7. Add a step that runs **Terminate.ps1** from the package **HelloWorldInfrastructure** on the Octopus Server on behalf of all [target tags](/docs/infrastructure/deployment-targets/target-tags).

## Intermission {#ImmutableInfrastructure-Intermission}

At this stage you should be able to provision new Tentacles by creating a release of the **Hello World Infrastructure** project and deploying it to an environment. If you create another release of the project and deploy it, the Tentacles for the previous release will be stopped but will remain in the Octopus environment. You could also create and deploy a release of the **Hello World** project to your shiny new Tentacles, but it requires a lot of button clicking.

## Automating *all the things* {#automate-all-the-things}

Imagine a developer makes a change to Hello World and would like to deploy it. At this stage, they would need to create and deploy a release of the Hello World Infrastructure project, wait for the new infrastructure to become available and then create and deploy a release of Hello World. It is possible but clunky. Also, someone would be required to remove all orphaned deployment targets left in Octopus when new Tentacles are provisioned.

### Cleaning machines {#cleaning-machines}

Cleaning up old Tentacles can be accomplished through the use of machine policies. The **Immutable Infrastructure** machine policy that we created earlier can be edited so that it performs health checks more frequently, doesn't mind if machines are unavailable during that health check and automatically removes unavailable machines after a period of time. This is perfect for ensuring the Tentacles that we terminate are automatically cleaned up in a timely manner.

1. Edit the Immutable Infrastructure machine policy.
2. Change "Time between checks" to 2 minutes.
3. Select "Unavailable machines will not cause health checks to fail".
4. Select "Automatically delete unavailable machines".
5. Change "Time unavailable" to 5 minutes.
:::figure
![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865677.png)
:::

### Automatically deploying {#automatically-deploying}

The **Hello World** project can be configured to automatically deploy when a new deployment target becomes available. Once this has been configured, any Tentacles created when **Hello World Infrastructure** is deployed will automatically receive the current successful deployment of the **Hello World** project.

1. Create a new trigger for the Hello World project.
2. Select the event "New deployment target becomes available". ![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865666.png)

Create and deploy a new release of **Hello World Infrastructure**. You should notice that immediately after new Tentacles are provisioned, **Hello World** is automatically deployed to those Tentacles:

:::figure
![](/docs/img/deployments/patterns/elastic-and-transient-environments/images/5865678.png)
:::

We are almost there! Next we need to bump the version of **Hello World** and automatically deploy it.

### Automatically deploying a new release {#automatically-deploying-new-release}

Octopus will automatically deploy the current successful deployment for a project. That means if you deploy release 1.0.0 and then create release 1.0.1, the version 1.0.0 will continue to be deployed until 1.0.1 has been manually deployed. This is not ideal for immutable infrastructure: we do not want to deploy 1.0.1 to our old infrastructure, which leaves us no way to indicate to Octopus that it should start deploying release 1.0.1. Enter auto deploy overrides. By creating both a new release and an auto deploy override when our infrastructure is provisioned, we can indicate to Octopus that the new release should be deployed to the new infrastructure.

1. Create an auto deploy override using the Octopus C# Client:

```powershell
Add-Type -Path 'Octopus.Client.dll'

$octopusURI = 'https://your-octopus-url'
$apiKey = 'API-YOUR-KEY'

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURI, $apiKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint

$project = $repository.Projects.Get("Projects-1")
$environment = $repository.Environments.Get("Environments-1")
$release = $repository.Releases.Get("Releases-1")

$project.AddAutoDeployReleaseOverride($environment, $release)
$repository.Projects.Modify($project)
```

## Magic {#ImmutableInfrastructure-Magic}

Wouldn't it be amazing if a developer checked in some changes to **Hello World** and new immutable infrastructure was created with their changes on it? With a few lines of script, your build server can tell Octopus to automatically deploy new infrastructure and deploy the latest release of your project to that infrastructure when it comes online and becomes available to Octopus.
Here is an example that could be adapted to your projects and build server:

```powershell
Add-Type -Path 'Octopus.Client.dll'

$octopusURI = "https://your-octopus-url"
$apiKey = "API-YOUR-KEY"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURI, $apiKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint

octopus login --server $octopusURI --api-key $apiKey

# Create and deploy a release of the infrastructure project
octopus release create --project "Hello World Infrastructure" --package-version "1.0.0.0"
$infraProject = $repository.Projects.FindByName("Hello World Infrastructure")
$infraRelease = $repository.Projects.GetReleases($infraProject).Items | Select-Object -first 1
octopus release deploy --project 'Hello World Infrastructure' --version $infraRelease.Version --environment 'Development' --no-prompt

# Create a release of the application project and add an auto deploy override
# so it is deployed once the new infrastructure becomes available
octopus release create --project "Hello World"
$project = $repository.Projects.FindByName("Hello World")
$release = $repository.Projects.GetReleases($project).Items | Select-Object -first 1
$environment = $repository.Environments.FindByName("Development")
$project.AddAutoDeployReleaseOverride($environment, $release)
$repository.Projects.Modify($project)
```

## Learn more

- [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1).

# Installing the Tentacle VM extension via PowerShell

Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-powershell.md

:::div{.problem}
The VM extension is deprecated and no longer supported. All customers using the VM extension should migrate to [DSC](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc).
:::

The Azure VM Extension can be added to your virtual machine using the Azure PowerShell cmdlets.
Refer to the [configuration structure](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/configuration-structure) for information regarding the format of the `publicSettings.json` and `privateSettings.json` files mentioned in these examples.

## Azure Service Management (ASM) mode \{#AzureVirtualMachines-AzureServiceManagement}

To install the extension on a VM:

```powershell
$vm = Get-AzureVm -Name "<vm name>" -servicename "<cloud service name>"

$publicSettings = "{`"OctopusServerUrl`": `"https://octopus.example.com`", `"Environments`": [ `"Env1`", `"Env2`" ], `"Roles`": [ `"app-server`", `"web-server`" ], `"CommunicationMode`": `"Listen`", `"Port`": 10933 }"
$privateSettings = "{`"ApiKey`": `"MY SECRET API KEY`"}"

$publicSettings | Out-File "publicsettings.config"
$privateSettings | Out-File "privatesettings.config"

Write-Host "Setting extension"
Set-AzureVmExtension -ExtensionName "OctopusDeployWindowsTentacle" `
  -Publisher "OctopusDeploy.Tentacle" `
  -Version "2.0" `
  -PublicConfigPath "publicsettings.config" `
  -PrivateConfigPath "privatesettings.config" `
  -VM $vm | Update-AzureVM

Write-Host "Adding endpoint to allow network traffic from the server to the Tentacle"
# optionally add an endpoint to allow the Octopus Server to contact the Tentacle
# not required if this is a polling Tentacle
Add-AzureEndpoint -Name "OctopusTentacle" `
  -Protocol "tcp" `
  -PublicPort 10933 `
  -LocalPort 10933 `
  -VM $vm | Update-AzureVM
```

To find out what extensions are installed on a VM:

```powershell
$vm = Get-AzureVm -Name "<vm name>" -servicename "<cloud service name>"
Get-AzureVMExtension -VM $vm
```

To remove an extension from a VM:

```powershell
$vm = Get-AzureVm -Name "<vm name>" -servicename "<cloud service name>"
Remove-AzureVMExtension -VM $vm -ExtensionName "OctopusDeployWindowsTentacle" -Publisher "OctopusDeploy.Tentacle"
```

## Azure Resource Manager (ARM) mode \{#AzureVirtualMachines-AzureResourceManager}

To install the extension on a VM:

```powershell
$publicSettings = @{
  OctopusServerUrl = "https://octopus.example.com";
  Environments = @("Env1", "Env2");
  Roles = @("app-server", "web-server");
  CommunicationMode = "Listen";
  Port = 10933
}
$privateSettings = @{"ApiKey" = "<your API key>"}

Set-AzureRmVMExtension -ResourceGroupName "<resource group name>" `
  -Location "Australia East" `
  -VMName "<vm name>" `
  -Name "OctopusDeployWindowsTentacle" `
  -Publisher "OctopusDeploy.Tentacle" `
  -TypeHandlerVersion "2.0" `
  -Settings $publicSettings `
  -ProtectedSettings $privateSettings `
  -ExtensionType "OctopusDeployWindowsTentacle"

# optional - add an NSG rule to allow the Octopus Server to contact the Tentacle
# only required in Listening mode
$vm = Get-AzureRmVm -Name "<vm name>" -ResourceGroupName "<resource group name>"
$nic = Get-AzureRmNetworkInterface -ResourceGroupName "<resource group name>" | ? { $_.VirtualMachine.Id -eq $vm.Id -and $_.Primary }
$secGrp = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "<resource group name>" | ? { $_.Id -eq $nic.NetworkSecurityGroup.Id }

$secGrp | Add-AzureRmNetworkSecurityRuleConfig -Name "AllowTentacleInBound" `
  -Description "Allow inbound traffic to Tentacle" `
  -Protocol TCP `
  -SourcePortRange "*" `
  -SourceAddressPrefix "*" `
  -DestinationPortRange 10933 `
  -DestinationAddressPrefix "*" `
  -Access Allow `
  -Priority 999 `
  -Direction Inbound
$secGrp | Set-AzureRmNetworkSecurityGroup
```

To find out what extensions are installed on a VM:

```powershell
Get-AzureRmVMExtension -ResourceGroupName "<resource group name>" `
  -VMName "<vm name>" `
  -Name "OctopusDeployWindowsTentacle"
```

To remove an extension from a VM:

```powershell
Remove-AzureRmVMExtension -ResourceGroupName "<resource group name>" `
  -VMName "<vm name>" `
  -Name "OctopusDeployWindowsTentacle"
```

# octopus account aws list

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-aws-list.md

List AWS accounts in Octopus Deploy

```text
Usage:
  octopus account aws list [flags]

Aliases:
  list, ls

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**
Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus account aws list
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Octopus versioning scheme

Source: https://octopus.com/docs/packaging-applications/package-repositories/docker-registries/octopus-version.md

Docker image tags are used to define version information. However, aside from the convention that the `latest` tag indicates the default image available, Docker tags do not inherently define any relationship between "versions". It is common practice to assign meaningful versions to Docker tags, and when a Docker image is used in the context of an Octopus deployment, these versions are used to select the latest available image, and restrict images using version ranges.

Starting with 2020.6, Octopus introduced a new, permissive versioning scheme for parsing Docker tags that allows almost any string to be interpreted as a comparable version.

:::div{.hint}
Prior to 2020.6, Octopus only recognized Docker image tags that complied with the Semantic Versioning standard.
:::

The following [regular expression](https://oc.to/OctopusVersionRegex/) defines how Docker tags are parsed into version components:

```
^(?:(?<prefix>v|V)?(?<major>\d+)(?:\.(?<minor>\d+))?(?:\.(?<patch>\d+))?(?:\.(?<revision>\d+))?)?(?:[.\-_])?(?<prerelease>(?<prereleaseprefix>[^+.\-_\s]*?)([.\-_](?<prereleasecounter>[^+\s]*?)?)?)?(?:\+(?<buildmetadata>[^\s]*?))?$
```

The version string can start with an optional "v" or "V". The four optional leading integers define the version major, minor, patch and revision. These integers are separated by a dot. The prerelease label captures all characters, excluding the plus symbol, after an optional dot, dash or underscore separator.
The metadata field captures any characters after a plus symbol. Note however that [Docker tags can not include the plus character](https://oc.to/DockerTags), and so can not define a metadata component. The metadata field has been defined for future use.

This versioning scheme allows for traditional labels like `1.0` or `V1.2.3.4`. A string with no integer components, like `my-version`, is captured in the prerelease label, and assumed to have a major, minor, patch and revision of `0.0.0.0`.

## Examples

| Label | Major | Minor | Patch | Revision | Prerelease | Note |
|---|---|---|---|---|---|---|
| 1.0 | 1 | 0 | 0 | 0 | | |
| v1.0 | 1 | 0 | 0 | 0 | | |
| V1.0 | 1 | 0 | 0 | 0 | | |
| 1.0-myfeature | 1 | 0 | 0 | 0 | myfeature | |
| 1.0myfeature | 1 | 0 | 0 | 0 | myfeature | The separator between the last integer version component and the prerelease label is optional. |
| myfeature | 0 | 0 | 0 | 0 | myfeature | Integer version components are optional. |
| latest | 0 | 0 | 0 | 0 | latest | The latest tag is considered to be a zero version with `latest` as the prerelease. |

## Version rules

Docker image tags can be matched by a channel version rule. The NuGet version range syntax is applied to a Docker image, with the following caveats:

* Any leading periods in the Docker tag prerelease label are treated as dashes.
* The leading "v" is ignored.
* Versions with no major, minor, patch or revision components are treated as version `0.0.0.0` (while retaining the prerelease label).

| Version | Range | Notes |
|---|---|---|
| v1.0 | [1.0,2.0] | The leading "v" is ignored in the range. |
| v1.0-.myfeature | [1.0--myfeature,2.0] | The leading dot in the prerelease field is represented as a dash in the range. |
| myfeature | (,1.0) | The version `myfeature` is considered to be equivalent to `0.0.0.0-myfeature`. |

Pre-release tag regular expressions can be used to limit the tags that are made available when creating a new release.
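As an illustration of how such a rule behaves, the sketch below approximates the scheme in Python: it extracts the prerelease label from a tag using a simplified pattern of our own (illustrative only, not Octopus's actual implementation or its full regular expression) and applies a channel's pre-release rule to that label.

```python
import re

# Simplified approximation of the parsing scheme described above: an optional
# "v"/"V", up to four dotted integers, an optional separator, then the
# prerelease label (everything remaining except "+" and whitespace).
VERSION = re.compile(
    r"^(?:[vV])?"
    r"(?:(?P<major>\d+)(?:\.(?P<minor>\d+))?(?:\.(?P<patch>\d+))?(?:\.(?P<revision>\d+))?)?"
    r"(?:[.\-_])?"
    r"(?P<prerelease>[^+\s]*)$"
)

def prerelease_of(tag: str) -> str:
    """Return the prerelease label of a Docker tag ('' if none)."""
    m = VERSION.match(tag)
    return m.group("prerelease") if m else ""

def allowed_tags(tags: list[str], rule: str) -> list[str]:
    """Keep only tags whose prerelease label matches the channel rule."""
    return [t for t in tags if re.match(rule, prerelease_of(t))]

# A rule that excludes the latest tag: "^$" also permits tags with no
# prerelease label at all, so plain versions still match.
rule = r"^(?!latest\b).+$|^$"
print(allowed_tags(["1.0", "v1.2.3", "latest", "1.0-beta"], rule))
# → ['1.0', 'v1.2.3', '1.0-beta']
```

Note how `latest` is rejected because it parses to the zero version with a prerelease label of `latest`, while `1.0` and `v1.2.3` pass because their prerelease label is empty.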
A common use case is to exclude the `latest` tag, which can be achieved with a regular expression like `^(?!latest\b).+$|^$`.

![](/docs/img/packaging-applications/package-repositories/docker-registries/channel-rule.png)

# Configuring Google Workspace

Source: https://octopus.com/docs/security/authentication/oidc-authentication/configuring-google-apps.md

## Configure Google Workspace

[How to configure Google Workspace](/docs/security/authentication/googleapps-authentication#configure-google-workspace).

## Configure Octopus Server

1. Navigate to **Configuration ➜ Settings ➜ OpenID Connect** and populate the following fields:
   - **Enabled** should be set to `Yes`.
   - **Role Claim Type** should be left unset.
   - **Username Claim Type** should be left unset.
   - **Resource** should be left unset.
   - **Scopes** should be left as the default of `openid profile email`.
   - **Display Name** can be used to customize the appearance of the button on the Octopus Deploy login screen. Use a name that your users will recognize for this identity provider.
   - **Issuer** should be `https://accounts.google.com`.
   - **Client ID** and **Client secret** should be the values you noted when creating the application. You can also find them in the "Additional Information" page of [the application](https://console.cloud.google.com/auth/clients).

   :::div{.hint}
   Note that the value of **Client Secret** cannot be retrieved once set; it can only be changed or deleted.
   :::

   - **Allow Auto User Creation** determines if Octopus Deploy should automatically create user accounts, or only allow authentication for users that already exist in Octopus Deploy.
2. Click **Save** to apply the changes.
3. If you sign out of Octopus Deploy, you should now see a new button on the login screen to authenticate with the OIDC provider.
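Before saving the configuration, you can sanity-check the **Issuer** value: every OIDC-compliant issuer publishes a discovery document at a well-known path. This is standard OIDC behavior rather than an Octopus feature, and the helper names below are illustrative:

```python
import json
import urllib.request

def discovery_url(issuer):
    """Build the standard OIDC discovery URL for an issuer."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def fetch_declared_issuer(issuer):
    """Fetch the discovery document and return the issuer it declares."""
    with urllib.request.urlopen(discovery_url(issuer)) as resp:
        return json.load(resp)["issuer"]

# Usage: the declared issuer should match the configured value exactly, e.g.
# fetch_declared_issuer("https://accounts.google.com") should return
# "https://accounts.google.com".
```

A mismatch between the configured issuer and the one declared in the discovery document is a common cause of failed OIDC logins.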
# How High Availability Works

Source: https://octopus.com/docs/administration/high-availability/how-high-availability-works.md

High Availability (HA) in Octopus enables you to run multiple Octopus Server nodes, distributing load between them. There are two kinds of load a Server node encounters:

1. Tasks (Deployments, runbook runs, health checks, package re-indexing, system integrity checks, etc.)
2. User Interface via the Web UI and REST API (Users, build server integrations, deployment target registrations, etc.)

## Tasks

Deploying a release or triggering a runbook run places that work into a first-in, first-out (FIFO) task queue. You can view the task queue by navigating to the **TASKS** page on your Octopus Deploy instance. Deployments and runbook runs are not the only items placed into the task queue. Some other examples of tasks include (but are not limited to):

- Apply Retention policies
- Delete Space
- Export Projects
- Health check
- Import Projects
- Let's Encrypt
- Process recurring scheduled tasks
- Process subscriptions
- Script Console Run
- Sync Community Step templates
- Sync External Security Groups
- Tentacle upgrade
- Update Calamari

By default, each Octopus Deploy node is configured to process five (5) tasks concurrently. That is known as the task cap. It is possible to [change the task cap on each node](/docs/support/increase-the-octopus-server-task-cap). While it is possible to increase the task cap on a single node to 50, 75, or even 100, you'll eventually run into the underlying host OS and .NET limits. A server can only open so many network connections, transfer so many files, and run so many concurrent threads. High Availability solves that problem by scaling the task cap horizontally. Each node will pull items from the task queue and process them.

### How Tasks are distributed between HA nodes

Every 20 seconds, each node in the HA cluster will check the task queue for pending tasks.
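Each node's pull-from-queue behavior can be sketched in a few lines. This is an illustration only: the function names are invented, it mirrors the workload-ratio comparison detailed in the next section, and the real scheduler also accounts for drain mode and timing:

```python
def tasks_to_pick_up(active_on_node, node_cap, active_cluster, pending, cluster_cap):
    """Illustrative sketch: how many pending tasks one node would claim.

    A node keeps claiming tasks while its current workload ratio
    (active tasks / task cap) does not exceed the prospective cluster
    workload ratio ((active cluster tasks + pending) / cluster capacity).
    """
    prospective = (active_cluster + pending) / cluster_cap
    taken = 0
    while (taken < pending
           and active_on_node + taken < node_cap
           and (active_on_node + taken) / node_cap <= prospective):
        taken += 1
    return taken

# Worked example from this section: three nodes with a task cap of 10,
# 10 active tasks across the cluster, 12 pending (prospective ratio ~73.3%).
print(tasks_to_pick_up(8, 10, 10, 12, 30))  # 0 (already at 80% > 73.3%)
print(tasks_to_pick_up(3, 10, 10, 12, 30))  # 5 (stops once it reaches 80%)
```

The sketch reproduces the worked example below: a busy node takes nothing, while a lightly loaded node claims tasks until it crosses the cluster's prospective ratio.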
When pending tasks are found, the node will start processing them. Octopus will do its best to balance the load between all the nodes to ensure one node doesn't process all the tasks while the other nodes remain unused. It does that by comparing the **current node workload ratio** with the **prospective cluster workload ratio**.

The **current node workload ratio** is defined as `(active tasks on node / node task cap)`. The **prospective cluster workload ratio** is defined as `((active cluster tasks + pending tasks) / max cluster tasks)`.

A node will not pick up pending tasks when:

- The node is in drain mode (a toggle preventing the node from executing new tasks).
- The number of active tasks is equal to or greater than the task cap.
- The current node workload ratio is greater than the prospective cluster workload ratio.

The last item in that list needs a deeper dive to understand. For this example, we have a cluster with three nodes, each with a task cap set to ten (10), making the HA cluster's total task capacity 30. There are 12 pending tasks in the queue and 10 active tasks. Let's assume all three nodes check the task queue at the same time. The prospective cluster workload ratio is **73.33%** ((12 pending tasks + 10 active tasks) / 30 task capacity).

- One node is currently processing eight tasks, making the current node workload ratio **80%** (8/10). While this node can pick up two more tasks, it will not, because **80%** > **73.33%**.
- One node is currently processing three tasks, making the current node workload ratio **30%** (3/10). It can pick up seven more tasks but will only pick up five. It picks up tasks until its current node workload ratio is greater than the prospective cluster workload ratio.
- One node is not processing any tasks. It will pick up all the remaining seven tasks.

There are a few considerations this example did not take into account.

- Not every node checks the task queue at the same time.
The timer is based on the last time the node was restarted.
- 1 to N tasks could be added to the queue between node 1 checking the queue and nodes 2 and 3 checking the queue.
- 1 to N tasks could be completed between node 1 checking the queue and nodes 2 and 3 checking the queue.

### First-in, first-out queue

The task queue is a first-in, first-out (FIFO) queue. The node does not consider the task type, the expected duration of the task, the environment the task is configured to run on, or any other factors. Being a FIFO queue, combined with the task distribution logic, this can result in one node processing all the deployments while the other nodes process runbook runs or health checks. When that happens, it is simply the luck of the draw.

### Restarting the server during an active deployment

Restarting the Octopus Deploy Windows service or the underlying host OS will (eventually) cause any active tasks to fail. At first, the tasks will look like they are still in progress. Once the node comes back online, it will cancel all active tasks. If the node doesn't come back online within an hour, one of the other nodes will cancel those tasks.

For planned outages, the recommendation is to enable drain mode. That tells the node to finish all active tasks and not pick up any new ones. That can be achieved by:

1. Navigating to **Configuration ➜ Nodes**.
2. Clicking on the overflow menu (`...`) next to the node you plan on restarting.
3. Selecting **Drain Node**.

Once the outage is finished, repeat the same steps, but select **Disable Drain Node** instead.

Not all outages can be planned. The underlying hypervisor hosting the VM the node is running on could crash. A data center could go offline. When that happens, you can use this [API script](/docs/octopus-rest-api/examples/bulk-operations/rerun-deployments-and-runbooks-after-node-shutdown) to re-run those canceled deployments and runbook runs.

### Number of nodes

We recommend a minimum of two nodes, each with 4 CPUs and 8 GB of RAM.
In our testing, a node with 4 CPUs and 8 GB of RAM can process between 20 and 30 concurrent tasks. When given the option, we recommend scaling horizontally over vertically. Assume you want to be able to process 80 tasks concurrently:

- If you had two nodes, each with a task cap of 40, an outage on one node would reduce your capacity by 50%.
- If you had four nodes, each with a task cap of 20, an outage on one node would reduce your capacity by 25%.

If you are hosting your virtual machines on a cloud provider, the cost difference between 2 VMs with 8 CPUs / 16 GB of RAM and 4 VMs with 4 CPUs / 8 GB of RAM is minimal.

## User Interface

Octopus Deploy provides two main interfaces for our users:

- Web Portal
- REST API

All other tools, including the CLI, build server plug-ins, etc., are wrappers around the REST API. All communication to the Web Portal or REST API occurs over standard web server ports, 80 or 443. Because of that, you will need a load balancer to distribute traffic between the nodes.

# Minimize the data-migration time

Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts/minimize-migration-time.md

Migrating data from an **Octopus 2.6** backup file into an **Octopus 2018.10 LTS** instance can take a significant time to run (hours or even days in some cases). We strongly recommend taking the following actions to minimize the migration duration.

## Remove unnecessary data from your 2.6 instance

Our strongest recommendation is to use [retention policies](/docs/administration/retention-policies) in your **Octopus 2.6** instance to remove unnecessary data. The goal is for the document count in the 2.6 RavenDB to be as low as possible. You can find the document count by viewing the RavenDB studio through the Octopus Manager; the document count is in the footer of the RavenDB studio. Fewer than 150k documents is a rough guide, though obviously some customers will simply have more required data than this.
:::div{.hint}
The original complete backup can always be retained if it is required for audit purposes.
:::

## Limit historical data

By default, we migrate everything from your backup, including all historical data. You can use the `maxage=` argument when executing the migrator via the [command-line](/docs/octopus-rest-api/octopus.migrator.exe-command-line) to limit the number of days to keep. For example: `maxage=90` will keep 90 days of historical data, ignoring anything older.

## RAM

The migrator is a memory-hungry process. Allocate the machine which will execute the migrator process as much memory as possible. The more memory available, the faster the process will run. As a rule of thumb:

- If your .octobak file is >500MB, allow at least 16GB of RAM
- If your .octobak file is >1GB, allow at least 32GB of RAM

This RAM is only required for the migration and can be deallocated once it is complete.

## No logs

To minimize the initial migration time, you can skip migrating the server-task log files. This option is available as a check-box in the Octopus Manager, or can be supplied as a `--nologs` option if running via the [command-line](/docs/octopus-rest-api/octopus.migrator.exe-command-line).

:::div{.hint}
The logs can always be imported later using the `--onlylogs` option if required.
:::

# Upgrading from Octopus 4.x / 2018.x to latest version

Source: https://octopus.com/docs/administration/upgrading/legacy/upgrading-from-octopus-4.x-2018.x-to-modern.md

It is generally safe to do an in-place upgrade from Octopus Deploy 4.x/2018.x to the latest version. Please keep in mind, 4.x/2018.x did not include the Spaces feature. Spaces made the following changes:

- The majority of endpoints in the API can accept a `Space-Id`, for example `/api/Spaces-1/projects?skip=0&take=100`, whereas before it was `/api/projects?skip=0&take=100`. If a `Space-Id` isn't supplied, the default space is used.
- Teams can be assigned to multiple roles and spaces.
Before, a team could be assigned to only one role.
- Unique internal package feed per space. Each space has a subfolder in the `Packages` directory to keep them segregated on the file system. Before, a package would be located at `C:\Octopus\packages\MyPackage.2020.1.1.zip`. Now it is `C:\Octopus\packages\Spaces-1\MyPackage.2020.1.1.zip`.
- Almost every table in the database had a `Space-Id` column added to it.

The upgrade should work without error, but there are integration concerns to consider. This guide walks through the steps to mitigate those concerns.

## System Integrity Check

Before performing any upgrade steps, we highly recommend performing a [System Integrity Check](/docs/administration/managing-infrastructure/diagnostics) on your live instance database. This is so we can check that the database schema is in the expected condition for the upgrade. If the integrity check passes, you are good to start the upgrade process. If it fails, please contact [support](https://octopus.com/support) with the [raw output of the task](/docs/support/get-the-raw-output-from-a-task), and we can get that fixed for you.

## Prep work

Before starting the upgrade, it is critical to back up the master key and license key. If anything goes wrong, you might need these keys to do a restore. It is better to have the backup and not need it than need the backup and not have it. The master key doesn't change, while your license key changes, at most, once a year. Back them up once to a secure location and move on to the next steps.

1. Backup the Master Key.
1. Backup the License Key.

### Backup the Octopus Master Key

Octopus Deploy uses the Master Key to encrypt and decrypt sensitive values in the Octopus Deploy database. The Master Key is securely stored on the server, not in the database. If the VM hosting Octopus Deploy is somehow destroyed or deleted, the Master Key goes with it. To view the Master Key, you will need login permissions on the server hosting Octopus Deploy.
Once logged in, open up the Octopus Manager and click the view master key button on the left menu.

:::figure
![](/docs/img/shared-content/upgrade/images/view-master-key.png)
:::

Save the Master Key to a secure location, such as a password manager or a secret manager. An alternative means of accessing the Master Key is to run `Octopus.Server.exe show-master-key` from the command line. Please note: you will need to be running as an administrator to do that.

:::figure
![](/docs/img/shared-content/upgrade/images/master-key-command-prompt.png)
:::

### Backup the License Key

Like the Master Key, the License Key is necessary to restore an existing Octopus Deploy instance. You can access the License Key by going to **Configuration ➜ License**. If you cannot access your License Key, please contact our [support team](https://octopus.com/support) and they can help you recover it.

## Standard upgrade process

The standard upgrade process is an in-place upgrade. In-place upgrades update the binaries in the install directory and update the database. The guide below includes additional steps to back up key components to make it easier to roll back in the unlikely event of a failure.

:::div{.problem}
While an in-place upgrade will work, it involves risk, as you are upgrading from a version released back in 2018. Please see the risk mitigation sections below for steps on how to mitigate that risk.
:::

### Overview

The steps for this are:

1. Download the latest version of Octopus Deploy.
1. Enable maintenance mode.
1. Backup the database.
1. Do an in-place upgrade.
1. Test the upgraded instance.
1. Disable maintenance mode.

### Downloading the latest version of Octopus Deploy

The [downloads page](https://octopus.com/downloads) will always have the latest version of Octopus Deploy.
If company policy dictates you install an older version (for example, the latest version is 2020.4.11, but you can only download 2020.3.x), then visit the [previous downloads page](https://octopus.com/downloads/previous).

### Maintenance mode

Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode, go to **Configuration ➜ Maintenance** and click the `Enable Maintenance Mode` button. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`.

### Backup the SQL Server database

Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share.

```
BACKUP DATABASE [OctopusDeploy]
TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
WITH FORMAT;
```

The `BACKUP DATABASE` T-SQL command has dozens of options. Please refer to [Microsoft's documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use.

### Octopus Deploy components

Before performing an in-place upgrade, it is essential to note the various components of Octopus Deploy. Most in-place upgrades will only change the install location and the SQL Server database. Very rarely will an in-place upgrade change the home folder or server folders. The Windows Service is split across multiple folders to make upgrading easy and low risk.

- **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`. The install location contains the binaries for Octopus Deploy and is updated by the MSI.
- **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location.
The `Octopus Manager` invokes those database scripts automatically.
- **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance. The home folder is separate from the install location to make it easier to upgrade, downgrade, or uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The `Octopus Manager` stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade.
- **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until _after_ the Octopus Deploy server is upgraded. Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient.
- **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target.

### Install the newer version of Octopus Deploy

Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard.
The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`.

### Validation checks

Octopus Deploy will perform validation checks before upgrading the database. These validation checks include (but are not limited to):

- Verify the current license will work with the upgraded version.
- Verify the current version of SQL Server is supported.

If the validation checks fail, don't worry: install the [previously installed version of Octopus Deploy](https://octopus.com/downloads/previous), and you will be back up and running quickly.

### Database upgrades

Each release of Octopus Deploy contains 0 to N database scripts to upgrade the database. The scripts are run in a transaction; when an error occurs, the transaction is rolled back. If a rollback does happen, gather the logs and send them to our [support team](https://octopus.com/support) for troubleshooting. You can install the previous version to get your CI/CD pipeline back up and running.

If you use PaaS to host your Octopus Deploy database, it is recommended to consider scaling up the database prior to the upgrade, especially if the upgrade spans a large version range and will therefore have an increased number of database scripts to run.

### Testing the upgraded instance

It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should:

- Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments.
- Check previous deployments; ensure all the logs and artifacts appear.
- Ensure all the project and tenant images appear.
- Run any custom API scripts to ensure they still work.
- Verify a handful of users can log in, and that their permissions are similar to before.
- Build server integration; ensure all existing build servers can push to the upgraded server.
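A small script can complement the manual checklist above by confirming the API responds after the upgrade. The `/api` root document and the `X-Octopus-ApiKey` header are part of the Octopus REST API, but the URL, key, and version used here are placeholders for your own instance:

```python
import json
import urllib.request

def server_version(base_url, api_key):
    """Ask the Octopus REST API root document which version the server reports."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/api",
        headers={"X-Octopus-ApiKey": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["Version"]

def version_tuple(version):
    """Turn '2020.4.11' into (2020, 4, 11) for a simple before/after comparison."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

# Usage (placeholder URL, key, and expected version):
# assert version_tuple(server_version("https://octopus.example.com", "API-XXXX")) \
#     >= version_tuple("2020.4.11"), "server did not upgrade"
```

Running a check like this from each build server also doubles as a quick verification that integrations can still reach the upgraded instance.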
We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration. If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation.

### Upgrade High Availability

In general, upgrading a highly available instance of Octopus Deploy follows the same steps as a typical in-place upgrade: download the latest MSI and install it. The key difference is to upgrade only one node first, as this will upgrade the database, then upgrade all the remaining nodes.

:::div{.warning}
Attempting to upgrade all nodes at the same time will most likely lead to deadlocks in the database.
:::

The process should look something like this:

1. Download the latest version of Octopus Deploy.
1. Enable maintenance mode.
1. Stop all the nodes.
1. Backup the database.
1. Select one node to upgrade, and wait until it is finished.
1. Upgrade all remaining nodes.
1. Start all remaining stopped nodes.
1. Test the upgraded instance.
1. Disable maintenance mode.

:::div{.warning}
As of **2023.2.9755**, a database upgrade will abort if Octopus detects there are nodes still running. Ensure all nodes are properly shut down and try again.
:::

:::div{.warning}
A small outage window will occur when upgrading a highly available Octopus Deploy instance. The outage window will happen between when you shut down all the nodes and upgrade the first node. The window duration depends on the number of database changes, the size of the database, and compute resources. It is highly recommended to [automate your upgrade process](/docs/administration/upgrading/guide/automate-upgrades) to reduce that outage window.
:::

## Risk mitigation recommended approach - create a test instance

An in-place upgrade should be the safest approach. Upgrade scripts assume you are upgrading from older versions of Octopus Deploy.
While the upgrade will work, there might be new features or breaking changes you will want to test first. The recommended approach is to create a test instance containing a subset of projects representing your main instance. Upgrade that test instance, verify it, and then upgrade the main instance.

### Overview

The steps for this are:

1. Download the same version of Octopus Deploy as your main instance.
1. Install Octopus Deploy on a new VM.
1. Export a subset of projects from the main instance.
1. Import that subset of projects to the test instance.
1. Download the latest version of Octopus Deploy.
1. Backup the test instance database.
1. Upgrade that test instance to the latest version of Octopus Deploy.
1. Test and verify the test instance.
1. Enable maintenance mode on the main instance.
1. Backup the database on the main instance.
1. Backup all the folders on the main instance.
1. Do an in-place upgrade of your main instance.
1. Test the upgraded main instance.
1. Disable maintenance mode.

### Downloading the same version of Octopus Deploy

Migrating data from Octopus to a test instance requires both the main instance and the test instance to be on the same version. You can find the version you are running by clicking on your name in the top right corner of your Octopus Deploy instance.

:::figure
![](/docs/img/shared-content/upgrade/images/find-current-version.png)
:::

You can find all the previous versions on the [previous versions download page](https://octopus.com/downloads/previous).

### Installing Octopus Deploy

Run the MSI you downloaded to install Octopus Deploy. After you install Octopus Deploy, the Octopus Manager will automatically launch. Follow the wizard. A few notes:

1. You can reuse the same license key on up to three unique instances of Octopus Deploy. We determine uniqueness based on the database it connects to. If you are going to exceed the three-instance limit, please [contact us](https://octopus.com/support) to discuss your options.
1.
Create a new database for this test instance. Restoring a backup will cause Octopus to treat this as a cloned instance, with the same targets, certificates, and keys.
1. Run the test instance database on the same version of SQL Server as the main instance. Only deviate when you plan on upgrading SQL Server.

### Export/import subset of projects using export/import projects feature

The Export/Import Projects feature added in **Octopus Deploy 2021.1** can be used to export/import projects to a test instance. Please see the up-to-date [documentation](/docs/projects/export-import) to see what is included.

### Export subset of projects using the data migration tool

All versions of Octopus Deploy since version 3.x have included a [data migration tool](/docs/administration/data/data-migration/). The Octopus Manager only allows for the migration of all the data, and we only need a subset of data. Use the [partial export](/docs/octopus-rest-api/octopus.migrator.exe-command-line/partial-export) command-line option to export a subset of projects. Run this command for each project you wish to export on the main, or production, instance. Create a new folder per project:

```
Octopus.Migrator.exe partial-export --instance=OctopusServer --project=AcmeWebStore --password=5uper5ecret --directory=C:\Temp\AcmeWebStore --ignore-history --ignore-deployments --ignore-machines
```

:::div{.hint}
This command ignores all deployment targets to prevent your test instance and your main instance from deploying to the same targets.
:::

### Import subset of projects using the data migration tool

The data migration tool also includes [import functionality](/docs/octopus-rest-api/octopus.migrator.exe-command-line/import). First, copy all the project folders from the main instance to the test instance.
Then run this command for each project:

```
Octopus.Migrator.exe import --instance=OctopusServer --password=5uper5ecret --directory=C:\Temp\AcmeWebStore
```

### Downloading the latest version of Octopus Deploy

The [downloads page](https://octopus.com/downloads) will always have the latest version of Octopus Deploy. If company policy dictates you install an older version (for example, the latest version is 2020.4.11, but you can only download 2020.3.x), then visit the [previous downloads page](https://octopus.com/downloads/previous).

### Maintenance mode

Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode, go to **Configuration ➜ Maintenance** and click the `Enable Maintenance Mode` button. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`.

### Backup the SQL Server database

Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share.

```
BACKUP DATABASE [OctopusDeploy]
TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
WITH FORMAT;
```

The `BACKUP DATABASE` T-SQL command has dozens of options. Please refer to [Microsoft's documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use.

### Octopus Deploy components

Before performing an in-place upgrade, it is essential to note the various components of Octopus Deploy. Most in-place upgrades will only change the install location and the SQL Server database. Very rarely will an in-place upgrade change the home folder or server folders. The Windows Service is split across multiple folders to make upgrading easy and low risk.
- **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`. The install location contains the binaries for Octopus Deploy and is updated by the MSI.
- **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location. The `Octopus Manager` invokes those database scripts automatically.
- **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance. The home folder is separate from the install location to make it easier to upgrade, downgrade, or uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The `Octopus Manager` stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade.
- **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until _after_ the Octopus Deploy server is upgraded. Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient.
- **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target.

### Install the newer version of Octopus Deploy

Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`.

### Validation checks

Octopus Deploy will perform validation checks before upgrading the database. These validation checks include (but are not limited to):

- Verify the current license will work with the upgraded version.
- Verify the current version of SQL Server is supported.

If the validation checks fail, don't worry: install the [previously installed version of Octopus Deploy](https://octopus.com/downloads/previous), and you will be back up and running quickly.

### Database upgrades

Each release of Octopus Deploy contains 0 to N database scripts to upgrade the database. The scripts are run in a transaction; when an error occurs, the transaction is rolled back. If a rollback does happen, gather the logs and send them to our [support team](https://octopus.com/support) for troubleshooting. You can install the previous version to get your CI/CD pipeline back up and running.

If you use PaaS to host your Octopus Deploy database, it is recommended to consider scaling up the database prior to the upgrade, especially if the upgrade spans a large version range and will therefore have an increased number of database scripts to run.

### Testing the upgraded instance

It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should:

- Do test deployments on projects representative of your instance.
For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments. - Check previous deployments, ensure all the logs and artifacts appear. - Ensure all the project and tenant images appear. - Run any custom API scripts to ensure they still work. - Verify a handful of users can log in, and that their permissions are similar to before. - Build server integration; ensure all existing build servers can push to the upgraded server. We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration. If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation. ### Maintenance mode Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode go to **Configuration ➜ Maintenance** and click the button `Enable Maintenance Mode`. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`. ### Backup the SQL Server database Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share. ``` BACKUP DATABASE [OctopusDeploy] TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak' WITH FORMAT; ``` The `BACKUP DATABASE` T-SQL command has dozens of various options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use. ### Backup the server folders The server folders store large binary data outside of the database. By default, the location is `C:\Octopus`. 
If you have High Availability configured, they will likely be stored on a NAS or some other file share. - **Packages**: The default location is `C:\Octopus\Packages\`. It stores all the packages in the internal feed. - **Artifacts**: The default location is `C:\Octopus\Artifacts`. It stores all the artifacts collected during a deployment along with project images. - **Tasklogs**: The default location is `C:\Octopus\Tasklogs`. It stores all the deployment logs. - **EventExports**: The default location is `C:\Octopus\EventExports`. It stores all the exported event audit logs. Any standard file-backup tool will work, even [RoboCopy](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy). Very rarely will an upgrade change these folders. The release notes will indicate if these folders are going to be modified. ### In-place upgrade of the main instance Upgrading your main Octopus Deploy instance should follow the same steps you did with the test or cloned instance. Don't forget to run your backups! ## Risk mitigation alternative approach - create a cloned instance An alternative approach to an in-place upgrade is to create a cloned instance and upgrade that. From there, you can migrate over to the cloned instance or do an in-place upgrade of your existing instance and use the cloned instance for testing future upgrades. ### Overview Creating a clone of an existing instance involves: 1. Enable maintenance mode on the main instance. 1. Back up the database of the main instance. 1. Disable maintenance mode on the main instance. 1. Restore the backup of the main instance's database as a new database on the desired SQL Server. 1. Download the same version of Octopus Deploy as your main instance. 1. Install that version on a new server and configure it to point to the **cloned/restored** database. 1. Copy all the files from the backed-up folders of the source instance. 1. Optional: Disable all deployment targets. 1.
Upgrade the cloned instance. 1. Test the cloned instance. Verify all API scripts, CI integrations, and deployments work. 1. If migrating, complete the migration. Otherwise, leave the test instance alone, back up the folders and database, and upgrade the main instance. ### Maintenance mode Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode go to **Configuration ➜ Maintenance** and click the button `Enable Maintenance Mode`. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`. ### Backup the SQL Server database Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share. ``` BACKUP DATABASE [OctopusDeploy] TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak' WITH FORMAT; ``` The `BACKUP DATABASE` T-SQL command has dozens of various options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use. ### Restore backup of database Use SQL Server Management Studio's (SSMS) built-in restore backup functionality. SSMS provides a wizard to make this process as pain-free as possible. Be sure to consult a DBA or read up on [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-database-to-a-new-location-sql-server?view=sql-server-ver15). ### Downloading the same version of Octopus Deploy Migrating data from Octopus to a test instance requires both the main instance and test instance to be on the same version. You can find the version you are running by clicking on your name in the top right corner of your Octopus Deploy instance.
:::figure ![](/docs/img/shared-content/upgrade/images/find-current-version.png) ::: You can find all the previous versions on the [previous versions download page](https://octopus.com/downloads/previous). ### Installing Octopus Deploy Run the MSI you downloaded to install Octopus Deploy. Once the MSI is finished, the **Octopus Manager** will automatically launch. Follow the wizard, and on the section where you configure the database, select the pre-existing database. :::figure ![](/docs/img/shared-content/upgrade/images/select-existing-database.png) ::: If you select an existing database, you will be asked to enter the Master Key. :::figure ![](/docs/img/shared-content/upgrade/images/enter-master-key.png) ::: Enter the Master Key you backed up earlier, and the manager will verify the connection works. As you finish the wizard, keep an eye on each setting to ensure it matches your main instance. For example, if your main instance uses Active Directory, your cloned instance should also be configured to use Active Directory. After the wizard is finished and the instance is configured, log in to the cloned instance to ensure your credentials still work. ### Copy all the files from the main instance After the instance has been created, copy all the contents from the following folders. - _Artifacts_, the default is `C:\Octopus\Artifacts` - _Packages_, the default is `C:\Octopus\Packages` - _Tasklogs_, the default is `C:\Octopus\Tasklogs` - _EventExports_, the default is `C:\Octopus\EventExports` Failure to copy over files will result in: - Empty deployment screens - Missing packages on the internal package feed - Missing project or tenant images - Missing archived events - And more ### Disabling all Targets/Workers/Triggers/Subscriptions - optional Cloning an instance includes cloning all certificates. Assuming you are not using polling Tentacles, all the deployments will "just work."
That is by design, so that if the VM hosting Octopus Deploy is lost, you can restore Octopus Deploy from a backup and deployments keep working. "Just working" does have a downside, as you might have triggers and other items configured. These items could potentially perform deployments. You can run this SQL Script on your cloned instance to disable everything. ```sql Use [OctopusDeploy] go DELETE FROM OctopusServerNode IF EXISTS (SELECT null FROM sys.tables WHERE name = 'OctopusServerNodeStatus') DELETE FROM OctopusServerNodeStatus UPDATE Subscription SET IsDisabled = 1 UPDATE ProjectTrigger SET IsDisabled = 1 UPDATE Machine SET IsDisabled = 1 IF EXISTS (SELECT null FROM sys.tables WHERE name = 'Worker') UPDATE Worker SET IsDisabled = 1 DELETE FROM ExtensionConfiguration WHERE Id in ('authentication-octopusid', 'jira-integration') ``` :::div{.hint} Remember to replace `OctopusDeploy` with the name of your database. ::: ### Octopus Deploy components Before performing an in-place upgrade, it is essential to note the various components of Octopus Deploy. Most in-place upgrades will only change the install location and the SQL Server database. Very rarely will an in-place upgrade change the home folder or server folders. The Windows Service is split across multiple folders to make upgrading easy and low risk. - **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`. The install location contains the binaries for Octopus Deploy and is updated by the MSI. - **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location. The `Octopus Manager` invokes those database scripts automatically. - **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance.
The home folder is separate from the install location to make it easier to upgrade, downgrade, uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process. - **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The `Octopus Manager` stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process. - **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade. - **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until _after_ the Octopus Deploy server is upgraded. Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient. - **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target. ### Install the newer version of Octopus Deploy Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`.
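Because the installer only touches the install location, the server folders described above must be backed up separately. As a pre-upgrade sanity check, you can confirm they exist where you expect them. The sketch below is illustrative only; the folder names follow the defaults listed above, and the home path is an assumption that may differ if your instance was configured with a custom home folder:

```python
from pathlib import Path

# Default server sub-folders under the Octopus home folder (C:\Octopus by default).
# Adjust these names if your instance uses non-default locations.
SERVER_FOLDERS = ("Artifacts", "Packages", "Tasklogs", "EventExports")

def missing_server_folders(home: str) -> list[str]:
    """Return the expected server sub-folders that do not exist under `home`."""
    root = Path(home)
    return [name for name in SERVER_FOLDERS if not (root / name).is_dir()]
```

If `missing_server_folders(r"C:\Octopus")` returns a non-empty list, the home folder was likely relocated, and your backup scripts need to point at the actual paths before you proceed.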
### Validation checks Octopus Deploy will perform validation checks before upgrading the database. These validation checks include (but are not limited to): - Verify the current license will work with the upgraded version. - Verify the current version of SQL Server is supported. If the validation checks fail, don't worry; install the [previously installed version of Octopus Deploy](https://octopus.com/downloads/previous), and you will be back up and running quickly. ### Database upgrades Each release of Octopus Deploy contains 0 to N database scripts to upgrade the database. The scripts are run in a transaction; when an error occurs, the transaction is rolled back. If a rollback does happen, gather the logs and send them to our [support team](https://octopus.com/support) for troubleshooting. You can install the previous version to get your CI/CD pipeline back up and running. If you use PaaS to host your Octopus Deploy database, consider scaling up the database prior to the upgrade, especially if the upgrade spans a large version range and will therefore have an increased number of database scripts to run. ### Testing the upgraded instance It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should: - Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments. - Check previous deployments, ensure all the logs and artifacts appear. - Ensure all the project and tenant images appear. - Run any custom API scripts to ensure they still work. - Verify a handful of users can log in, and that their permissions are similar to before. - Build server integration; ensure all existing build servers can push to the upgraded server. We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration.
If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation. ### Migrating to a new instance It is possible to run both the old and cloned instances side by side. Both of them can deploy to the same targets (assuming you are not using polling Tentacles). But there are a few items to keep in mind. - The Octopus Server is tightly coupled with Calamari. Deploying to the same target from both servers will result in Calamari getting upgraded/downgraded a lot. - The newer Octopus Server will prompt you to upgrade the Tentacles. While running both instances side by side, you will want to avoid this. - Unless the cloned instance has the same domain name, polling Tentacles will not connect to the cloned instance. A clone of the polling Tentacles might need to be created. - The thumbprints for certificates and other sensitive items are stored in the Octopus Deploy database. Cloning the database clones those values as well. - **You must update the Installation ID on the cloned instance.** Cloning copies the Installation ID from the original, which means both instances will report [telemetry](/docs/security/outbound-requests/telemetry) under the same identifier. This corrupts usage data. See [Creating a test instance](/docs/administration/upgrading/guide/creating-test-instance) for the SQL script to generate a new Installation ID. ### Considerations As you migrate your instance, here are a few items to consider. 1. Will the new instance's domain name be the same, or will it change? For example, will it change from `https://octopusdeploy.mydomain.com` to `https://octopus.mydomain.com`? If it changes and you are using polling Tentacles, you will need to create new Tentacle instances for the new Octopus Deploy instance. 2. What CI, or build servers, integrate with Octopus Deploy? Do those plug-ins need to be updated?
You can find several of the plug-ins on the [downloads page](https://octopus.com/downloads). 3. Do you have any internally developed tools or scripts that invoke the Octopus API? We've done our best to maintain backward compatibility, but there might be some changes. 4. What components do you use the most? What does a testing plan look like? 5. Chances are there are new features and functionality you haven't been exposed to. How will you train people on the new functionality? If unsure, please [contact us](https://octopus.com/support) to get pointed in the right direction. ### Drift concerns While it is possible to run two instances side by side, with each minute that passes, the two instances will drift further apart. Changes to the deployment process, new packages, new releases, and new deployments will be happening during this time. If you find yourself needing more time than a few days (a week tops), consider setting up a test instance, or using this newly cloned instance as a test instance. Work out all the kinks on the test instance, then restart the cloning process on a fresh instance. :::div{.hint} If you are unsure how long the migration will take, consider setting up a test instance first. Work out all the kinks, then start the cloning process. ::: ### Polling Tentacles A Polling Tentacle can only connect to one Octopus Deploy instance. It connects via DNS name or IP address. If the new instance's DNS name changes - for example, the old instance was `https://octopusdeploy.mydomain.com` with the new instance set to `https://octopus.mydomain.com` - you'll need to clone each Polling Tentacle instance. Each Polling Tentacle will need to be cloned on each deployment target. To make things easier, we have provided [this script](https://github.com/OctopusDeployLabs/SpaceCloner/blob/master/CloneTentacleInstance.ps1) to help clone a Tentacle instance.
That script will look at the source instance, determine the roles, environments, and tenants, then create a cloned Tentacle and register that cloned Tentacle with your cloned instance. :::div{.hint} Any script that clones a Tentacle instance must be run on the deployment target. It cannot be run on your development machine. ::: ### Executing the cutover Cutting over from the old instance to the new instance will require a bit of downtime and should be done outside of business hours. 1. Enable maintenance mode on the old instance to put it into read-only mode. 1. Ensure all CI servers are pointing to the new instance (or change DNS). 1. You don't have to upgrade Tentacles right away. Newer versions of Octopus Deploy [can communicate with older versions of Tentacles](/docs/support/compatibility). You can upgrade a set at a time instead of upgrading everything; starting in 2020.x, you can perform a search on the deployment targets page and update only the returned Tentacles. Or, you can [upgrade Tentacles per environment](https://www.youtube.com/watch?v=KVxdSdYAqQU&t=352s). ### Backup the server folders The server folders store large binary data outside of the database. By default, the location is `C:\Octopus`. If you have High Availability configured, they will likely be stored on a NAS or some other file share. - **Packages**: The default location is `C:\Octopus\Packages\`. It stores all the packages in the internal feed. - **Artifacts**: The default location is `C:\Octopus\Artifacts`. It stores all the artifacts collected during a deployment along with project images. - **Tasklogs**: The default location is `C:\Octopus\Tasklogs`. It stores all the deployment logs. - **EventExports**: The default location is `C:\Octopus\EventExports`. It stores all the exported event audit logs. Any standard file-backup tool will work, even [RoboCopy](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy). Very rarely will an upgrade change these folders.
The release notes will indicate if these folders are going to be modified. ### In-place upgrade of the main instance Upgrading your main Octopus Deploy instance should follow the same steps you did with the test or cloned instance. Don't forget to run your backups! ### Upgrade High Availability In general, upgrading a highly available instance of Octopus Deploy follows the same steps as a typical in-place upgrade. Download the latest MSI and install that. The key difference is to upgrade only one node first, as this will upgrade the database, then upgrade all the remaining nodes. :::div{.warning} Attempting to upgrade all nodes at the same time will most likely lead to deadlocks in the database. ::: The process should look something like this: 1. Download the latest version of Octopus Deploy. 1. Enable maintenance mode. 1. Stop all the nodes. 1. Back up the database. 1. Select one node to upgrade and wait until it finishes. 1. Upgrade all remaining nodes. 1. Start all remaining stopped nodes. 1. Test the upgraded instance. 1. Disable maintenance mode. :::div{.warning} As of **2023.2.9755**, a database upgrade will abort if Octopus detects there are nodes still running. Ensure all nodes are properly shut down and try again. ::: :::div{.warning} A small outage window will occur when upgrading a highly available Octopus Deploy instance. The outage window will happen between when you shut down all the nodes and upgrade the first node. The window duration depends on the number of database changes, the size of the database, and compute resources. It is highly recommended to [automate your upgrade process](/docs/administration/upgrading/guide/automate-upgrades) to reduce that outage window. ::: ## Rollback failed upgrade While unlikely, an upgrade may fail. It could fail because of a database upgrade script, an unsupported SQL Server version, a failed license validation check, or plain old bad luck. Depending on what failed, you have a decision to make.
If the cloned instance upgrade failed, it might make sense to start all over again. Or, it might make sense to roll back to a previous version. In either case, if you decide to roll back, the process will be: 1. Restore the database backup. 1. Restore the folders. 1. Download and install the previously installed version of Octopus Deploy. 1. Do some sanity checks. 1. If maintenance mode is enabled, disable it. ### Restore backup of database Use SQL Server Management Studio's (SSMS) built-in restore backup functionality. SSMS provides a wizard to make this process as pain-free as possible. Be sure to consult a DBA or read up on [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-database-to-a-new-location-sql-server?view=sql-server-ver15). ### Restore Octopus folders Octopus Deploy expects the artifacts, packages, tasklog, and event export folders to be in a specific format. The best chance of success is to: 1. Copy the existing folders to a safe location. 2. Delete the contents of the existing folders. 3. Copy the contents from the backup into the existing folders. 4. Once the rollback is complete, delete the copy from the first step. ### Find and download the previous version of Octopus Deploy Octopus Deploy stores the installation history in the database. Run this query on your Octopus Deploy database if you are unsure which version to download: ```sql SELECT TOP 5 [Version] FROM [dbo].[OctopusServerInstallationHistory] ORDER BY Installed desc ``` When you know the version to install, go to the [previous downloads page](https://octopus.com/downloads/previous). ### Installing the previous version The key configuration items, such as connection string, files, instance information, etc., are not stored in the install directory of Octopus Deploy. To install the previous version, first uninstall Octopus Deploy.
Uninstalling will only delete items from the install directory (`C:\Program Files\Octopus Deploy\Octopus` by default). Then run the MSI to install the previous version. # Octopus Approvals Source: https://octopus.com/docs/approvals/octopus-approvals.md :::div{.hint} Octopus Approvals is currently in Alpha, available to a small set of customers. If you are interested in this feature please register your interest on the [roadmap card](https://roadmap.octopus.com/c/243-approvals-for-deployments) and we'll keep you updated. ::: ## Overview Octopus Approvals is a built-in change approval system for Octopus Deploy. Octopus blocks deployments and runbook runs until designated approvers sign off directly within Octopus. This means you don't need any external ITSM tools to manage your changes. When a controlled deployment or runbook run triggers, Octopus automatically creates a change request (with the format `OCT-{number}`) and pauses execution. Designated users or team members can then approve or reject the request. Once the minimum number of approvals is reached, Octopus allows execution to proceed. If any approver rejects the request, Octopus terminates the task. ## Getting started Enable Octopus Approvals on your Octopus instance by navigating to **Configuration ➜ Settings ➜ Octopus Approvals**, ticking **Is Enabled**, and saving. Once Octopus Approvals is enabled, navigate to **Library ➜ Approvals ➜ Manage Approvals** to create your first approval policy, then configure scope to apply it to the relevant projects and environments. ## Configuring an approval policy Navigate to **Library ➜ Approvals ➜ Manage Approvals** and select **Add Approval Policy**. Each policy includes the following settings: - **Name**: A short, memorable, unique name for this approval policy. - **Description**: An optional description for this approval policy. - **Scope**: The projects and environments that this approval policy should apply to.
Octopus will require approvals for deployments and runbook runs that match the selected project and environment combination. You can scope the approval policy by project and environment tags or individual projects and environments. - **Approvers**: Select the Octopus teams or individual users who are authorized to approve change requests under this policy. Any member of an approving team counts toward the minimum approvers total. Octopus can optionally block the deployment creator from approving their own change request. Enable **Block approvals by the deployment creator** to enforce this separation of duties. - **Minimum approvers required**: The number of approvals Octopus requires before allowing execution to proceed. If any approver rejects the change request before this threshold is reached, Octopus immediately terminates the task. ## How it works Octopus will generate a change request depending on the configured approval policies. If the required number of approvals is reached, the deployment will continue according to change windows. If the change request is rejected, the task is terminated. ### Change request creation When a deployment or runbook run triggers and it is in scope for an approval policy, Octopus automatically creates a change request with a unique reference number in the format `OCT-{number}` (for example, `OCT-42`). Octopus immediately pauses execution and displays the change request status in the task log. If multiple approval policies match, the policies are merged into a resultant policy. - Approvers are merged as a union of the approvers from each policy that has a matching scope. - The minimum approvers required will be equal to the highest value from all approval policies with matching scope. ### Change windows Octopus supports change windows. Change windows are scheduled time periods during which a deployment is allowed to run. If a change request is approved but the change window has not yet opened, Octopus keeps execution paused.
If the change window closes before the deployment runs (whether the request is approved or still pending), Octopus terminates the task. ### Rejection If any designated approver rejects the change request, Octopus immediately terminates the task. You cannot retry a rejected task; you must trigger a deployment of a new release or runbook run, which will create a fresh change request. ## Reviewing change requests Octopus surfaces change requests in several places so approvers can act on them without leaving their current context. ### Approvals Navigate to **Library ➜ Approvals** for a complete list of all change requests. The list is divided into three tabs: - **Needs Approval**: Change requests that are still pending the required number of approvals. - **Completed**: Change requests that have been approved or rejected. - **All**: All change requests regardless of state. Each row shows the **Change Request** number (as a link). Select the change request link to open the **Review change request** page, where you can see the full approval details and submit your approval or rejection. ### Tasks Page Navigate to **Tasks** and select the **Needs Approval** tab for a filtered view of all tasks currently waiting on an approval. If the task is waiting for an Octopus Approval, the row will have a button to review the change request associated with this task. Select **Review** to open the drawer to view the change request details and submit your approval or rejection. ### Deployment or Runbook Run Page When a deployment or runbook run is blocked on an Octopus Approval, a warning callout appears at the top of the task page: > **Approval needed to continue this deployment** > This deployment is blocked by change request OCT-n and requires approval from N approvers. Select **Review** to open the drawer to view the change request details and submit your approval or rejection. 
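The policy resolution rules described under *How it works* (approvers combined as a union, and the minimum-approvers threshold taking the highest value across matching policies) can be modeled with a short sketch. This is illustrative only; the types and function below are not part of any Octopus API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovalPolicy:
    approvers: frozenset[str]   # users or teams authorized to approve
    min_approvers: int          # approvals required before execution proceeds

def merge_policies(policies: list[ApprovalPolicy]) -> ApprovalPolicy:
    """Merge matching policies: union the approvers and take the highest
    minimum-approvers value, mirroring the documented merge behavior."""
    approvers = frozenset().union(*(p.approvers for p in policies))
    minimum = max(p.min_approvers for p in policies)
    return ApprovalPolicy(approvers, minimum)
```

For example, merging a policy requiring one approval from `team-a` with one requiring two approvals from `team-b` yields a resultant policy where members of either team count toward the total, but two approvals are needed before the task proceeds.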
### Release Page When viewing a release, under **Progression** you will see a list of deployments to the environments in your lifecycle and lifecycle phases. If a deployment to an environment is blocked on an Octopus Approval, the environment will have a button to review the change request associated with this task. Select **Review** to open the drawer to view the change request details and submit your approval or rejection. # Managing project resources Source: https://octopus.com/docs/best-practices/platform-engineering/managing-project-resources.md [Serializing and deploying project level resources](https://www.youtube.com/watch?v=QIcq2WxnrPs) Octopus is conceptually split into two types of resources: 1. Space level resources such as environments, feeds, accounts, lifecycles, certificates, workers, worker pools, and variable sets 2. Project level resources such as the projects themselves, the project deployment process, runbooks, project variables, and project triggers Space level resources are shared by projects and do not tend to change as frequently as projects. Managed, or downstream, spaces (i.e. 
spaces with centrally managed resources) are implemented by deploying space and project level resources as separate processes: - Space level resources are deployed first to support one or more projects - Project level resources are deployed second referencing the space level resources There are two ways to manage project level resources: - Define database backed projects, complete with all deployment steps, with Terraform - Define the configuration of a [Config-as-code](/docs/projects/version-control) (CaC) project with Terraform, while deferring the configuration of CaC managed settings like the deployment process, non-secret variables, and some project settings to configuration stored in Git Defining database backed projects in Terraform is useful for [centralized responsibility](/docs/platform-engineering/levels-of-responsibility) projects where the customer has little or no ability to modify the project, or [customer responsibility](/docs/platform-engineering/levels-of-responsibility) projects where projects are not centrally updated after they are created. Defining CaC projects is useful for [shared responsibility](/docs/platform-engineering/levels-of-responsibility) projects where deployment processes can be modified by customers and the platform team, with differences reconciled with Git merges. Project level resources can be defined in a Terraform module in two ways: - Write the module by hand - Serialize an existing project to a Terraform module with [octoterra](https://github.com/OctopusSolutionsEngineering/OctopusTerraformExport) ## Writing by hand Projects can be defined in a Terraform module by hand. The Terraform provider has [tests](https://github.com/OctopusDeployLabs/terraform-provider-octopusdeploy/tree/main/terraform) that can be used as examples for creating your own Terraform module. However, Octopus steps are configured with key/value pairs defined in a property bag. 
These values are not documented, and the only way to find which combination of values work for a step is to first create the step in the Octopus UI and export the step to JSON: ![Download as JSON](/docs/img/platform-engineering/export-to-json.png) The resulting JSON file looks something like this, where the `Steps[].Actions[].Properties` field defines the property bag: ```json { "Id": "deploymentprocess-Projects-5222", "SpaceId": "Spaces-1913", "ProjectId": "Projects-5222", "Version": 1, "Steps": [ { "Id": "4ce3b678-a928-4456-9af0-6afd741863c0", "Name": "Deploy Container", "Slug": "deploy-container", "PackageRequirement": "LetOctopusDecide", "Properties": { "Octopus.Action.TargetRoles": "EKS_Reference_Cluster" }, "Condition": "Success", "StartTrigger": "StartAfterPrevious", "Actions": [ { "Id": "44a23dd7-c320-4836-9ecb-5530a670c1f2", "Name": "Deploy Container", "Slug": "deploy-container", "ActionType": "Octopus.KubernetesDeployContainers", "Notes": null, "IsDisabled": false, "CanBeUsedForProjectVersioning": true, "IsRequired": false, "WorkerPoolId": "WorkerPools-2259", "Container": { "Image": "octopuslabs/k8s-workertools", "FeedId": "Feeds-3533" }, "WorkerPoolVariable": null, "Environments": [ "Environments-2584", "Environments-2582", "Environments-2581" ], "ExcludedEnvironments": [], "Channels": [], "TenantTags": [], "Packages": [ { "Id": "4c88ac9a-3639-4047-9d9d-38adf7949fdb", "Name": "web", "PackageId": "#{Kubernetes.Deployment.Image}", "FeedId": "#{Kubernetes.Deployment.Feed}", "AcquisitionLocation": "NotAcquired", "Properties": { "Extract": "False", "PackageParameterName": "", "SelectionMode": "immediate" } } ], "GitDependencies": [], "Condition": "Success", "Properties": { "Octopus.Action.EnabledFeatures": "Octopus.Features.KubernetesService,Octopus.Features.KubernetesIngress,Octopus.Features.KubernetesConfigMap,Octopus.Features.KubernetesSecret", "Octopus.Action.Kubernetes.DeploymentTimeout": "180", "Octopus.Action.Kubernetes.ResourceStatusCheck": "True", 
"Octopus.Action.KubernetesContainers.Containers": "[{\"Args\":[],\"Command\":[],\"ConfigMapEnvFromSource\":[],\"ConfigMapEnvironmentVariables\":[],\"CreateFeedSecrets\":\"False\",\"EnvironmentVariables\":[{\"key\":\"PORT\",\"keyError\":null,\"option\":\"\",\"option2\":\"\",\"option2Error\":null,\"optionError\":null,\"value\":\"#{Kubernetes.Deployment.Port}\",\"valueError\":null}],\"FieldRefEnvironmentVariables\":[],\"Lifecycle\":{\"PostStart\":null,\"PreStop\":null},\"LivenessProbe\":{\"exec\":{\"command\":[]},\"failureThreshold\":\"\",\"httpGet\":{\"host\":\"\",\"httpHeaders\":[],\"path\":\"\",\"port\":\"\",\"scheme\":\"\"},\"initialDelaySeconds\":\"\",\"periodSeconds\":\"\",\"successThreshold\":\"\",\"tcpSocket\":{\"host\":\"\",\"port\":\"\"},\"timeoutSeconds\":\"\",\"type\":\"\"},\"Name\":\"web\",\"Ports\":[{\"key\":\"web\",\"keyError\":null,\"option\":\"TCP\",\"option2\":\"\",\"option2Error\":null,\"optionError\":null,\"value\":\"#{Kubernetes.Deployment.Port}\",\"valueError\":null}],\"ReadinessProbe\":{\"exec\":{\"command\":[]},\"failureThreshold\":\"\",\"httpGet\":{\"host\":\"\",\"httpHeaders\":[],\"path\":\"\",\"port\":\"\",\"scheme\":\"\"},\"initialDelaySeconds\":\"\",\"periodSeconds\":\"\",\"successThreshold\":\"\",\"tcpSocket\":{\"host\":\"\",\"port\":\"\"},\"timeoutSeconds\":\"\",\"type\":\"\"},\"Resources\":{\"limits\":{\"amdGpu\":\"\",\"cpu\":\"\",\"ephemeralStorage\":\"\",\"memory\":\"\",\"nvidiaGpu\":\"\",\"storage\":\"\"},\"requests\":{\"amdGpu\":\"\",\"cpu\":\"\",\"ephemeralStorage\":\"\",\"memory\":\"\",\"nvidiaGpu\":\"\",\"storage\":\"\"}},\"SecretEnvFromSource\":[],\"SecretEnvironmentVariables\":[],\"SecurityContext\":{\"allowPrivilegeEscalation\":\"\",\"capabilities\":{\"add\":[],\"drop\":[\"ALL\"]},\"privileged\":\"\",\"readOnlyRootFilesystem\":\"\",\"runAsGroup\":\"\",\"runAsNonRoot\":\"True\",\"runAsUser\":\"\",\"seLinuxOptions\":{\"level\":\"\",\"role\":\"\",\"type\":\"\",\"user\":\"\"}},\"StartupProbe\":{\"exec\":{\"command\":[]},\"failureTh
reshold\":\"\",\"httpGet\":{\"host\":\"\",\"httpHeaders\":[],\"path\":\"\",\"port\":\"\",\"scheme\":\"\"},\"initialDelaySeconds\":\"\",\"periodSeconds\":\"\",\"successThreshold\":\"\",\"tcpSocket\":{\"host\":\"\",\"port\":\"\"},\"timeoutSeconds\":\"\",\"type\":\"\"},\"TerminationMessagePath\":\"\",\"TerminationMessagePolicy\":\"\",\"VolumeMounts\":[]}]", "Octopus.Action.KubernetesContainers.DeploymentName": "#{Kubernetes.Deployment.Name}", "Octopus.Action.KubernetesContainers.DeploymentResourceType": "Deployment", "Octopus.Action.KubernetesContainers.DeploymentStyle": "RollingUpdate", "Octopus.Action.KubernetesContainers.IngressAnnotations": "[{\"key\":\"nginx.ingress.kubernetes.io/rewrite-target\",\"keyError\":null,\"option\":\"\",\"option2\":\"\",\"option2Error\":null,\"optionError\":null,\"value\":\"$1$2\",\"valueError\":null},{\"key\":\"nginx.ingress.kubernetes.io/use-regex\",\"keyError\":null,\"option\":\"\",\"option2\":\"\",\"option2Error\":null,\"optionError\":null,\"value\":\"true\",\"valueError\":null}]", "Octopus.Action.KubernetesContainers.IngressClassName": "nginx", "Octopus.Action.KubernetesContainers.IngressName": "#{Kubernetes.Ingress.Name}", "Octopus.Action.KubernetesContainers.IngressRules": "[{\"host\":\"\",\"http\":{\"paths\":[{\"key\":\"#{Kubernetes.Ingress.Path}\",\"option\":\"\",\"option2\":\"ImplementationSpecific\",\"value\":\"web\"}]}}]", "Octopus.Action.KubernetesContainers.PodManagementPolicy": "OrderedReady", "Octopus.Action.KubernetesContainers.Replicas": "1", "Octopus.Action.KubernetesContainers.ServiceName": "#{Kubernetes.Service.Name}", "Octopus.Action.KubernetesContainers.ServiceNameType": "External", "Octopus.Action.KubernetesContainers.ServicePorts": "[{\"name\":\"web\",\"nodePort\":\"\",\"port\":\"80\",\"protocol\":\"TCP\",\"targetPort\":\"web\"}]", "Octopus.Action.KubernetesContainers.ServiceType": "ClusterIP", "Octopus.Action.RunOnServer": "true", "OctopusUseBundledTooling": "False" }, "Links": {} } ] } ], "LastSnapshotId": 
null, "Links": { "Self": "/api/Spaces-1913/projects/Projects-5222/deploymentprocesses", "Project": "/api/Spaces-1913/projects/Projects-5222", "Template": "/api/Spaces-1913/projects/Projects-5222/deploymentprocesses/template{?channel,releaseId}", "Validation": "/api/Spaces-1913/projects/Projects-5222/deploymentprocesses/validate" } } ``` It is up to you to copy each of the properties into the Terraform resource that defines the deployment process or runbook steps. ## Serializing with octoterra The second approach is to create a management, or upstream, project using the Octopus UI and then export projects to Terraform modules with [octoterra](https://github.com/OctopusSolutionsEngineering/OctopusTerraformExport). This allows you to rely on the UI for convenience and validation and then serialize the project to a Terraform module. :::div{.hint} You are free to edit the Terraform module created by octoterra as you see fit once it is exported. ::: Octopus includes a number of steps to help you serialize a project with octoterra and apply the module to a new space. :::div{.hint} The steps documented below are best run on the `Hosted Ubuntu` worker pools for Octopus Cloud customers. ::: 1. Create a project with a runbook called `__ 1. Serialize Project`. Runbooks with the prefix `__ ` (two underscores and a space) are automatically excluded when exporting projects, so this is a pattern we use to indicate runbooks that are involved in serializing Octopus resources but are not to be included in the exported module. 2. Add the `Octopus - Serialize Project to Terraform` step from the [community step template library](/docs/projects/community-step-templates). 1. Tick the `Ignore All Changes` option to instruct Terraform to ignore any changes made to a project through the UI using the [lifecycle meta-argument](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle). 
This option is most useful when [RBAC controls](/docs/getting-started/best-practices/users-roles-and-teams) allow customers to edit the variables of a project managed by Terraform but not edit the project steps or other settings. This allows platform teams to treat entire projects much like [step templates](/docs/projects/custom-step-templates), where end users can edit parameters but not touch the configuration of the steps; in this case, the project variables can be edited but the project steps cannot. 2. Set the `Terraform Backend` field to the [backend](https://developer.hashicorp.com/terraform/language/settings/backends/configuration) configured in the exported module. The step defaults to `s3`, which uses an S3 bucket to store Terraform state. However, any backend provider can be defined here. 3. Set the `Octopus Server URL` field to the URL of the Octopus server to export a space from. The default value of `#{Octopus.Web.ServerUri}` references the URL of the current Octopus instance. 4. Set the `Octopus API Key` field to the [API key](/docs/octopus-rest-api/how-to-create-an-api-key) used to access the instance defined in the `Octopus Server URL` field. 5. Set the `Octopus Space ID` field to the ID of the space to be exported. The default value of `#{Octopus.Space.Id}` references the current space. 6. Set the `Octopus Project Name` field to the name of the project to serialize. The default value of `#{Octopus.Project.Name}` assumes the runbook has been defined in the same project that is being exported. 7. Set the `Octopus Upload Space ID` field to the ID of another space if you want to upload the resulting Terraform module zip file to that space's built-in feed. Leave this field blank to upload the zip file to the built-in feed of the current space. 8. Set the `Ignored Variables Sets` field to a comma-separated list of variable sets to exclude from the Terraform module.
Typically, this field is used when the values of the previous fields were sourced from a variable set that should not be exported. Executing the runbook will: - Export the project to a Terraform module - Zip the resulting files - Upload the zip file to the built-in feed of the current space or the space defined in the `Octopus Upload Space ID` field The zip file has one directory called `space_population`, which contains a Terraform module to populate a space with the exported resources. :::div{.hint} Many of the exported resources expose values, like resource names, as Terraform variables with default values. You can override these variables when applying the module to customize the resources, or leave the Terraform variables with their default value to recreate the resources with their original names. ::: ### Importing a project The following steps create a project in an existing space with the Terraform module exported using the instructions from the previous section: 1. Create a project with a runbook called `__ 2. Deploy Project`. Runbooks with the prefix `__ ` (two underscores and a space) are automatically excluded when exporting projects, so this is a pattern we use to indicate runbooks that are involved in serializing Octopus resources but are not to be included in the exported module. 2. Add one of the steps called `Octopus - Populate Octoterra Space` from the [community step template library](/docs/projects/community-step-templates). Each step indicates the Terraform backend it supports. For example, the `Octopus - Populate Octoterra Space (S3 Backend)` step configures an S3 Terraform backend. 1. Configure the step to run on a worker with a recent version of Terraform installed, or use the `octopuslabs/terraform-workertools` container image. 2. Set the `Terraform Workspace` field to a [workspace](https://developer.hashicorp.com/terraform/language/state/workspaces) that maintains the state of Octopus resources created by Terraform.
The default value of `#{OctoterraApply.Octopus.SpaceID}` uses a workspace based on the ID of the space that is being populated. Leave the default value unless you have a specific reason to change it. 3. Select the package created by the export process in the previous section in the `Terraform Module Package` field. The package name is the same as the exported project name, with all non-alphanumeric characters replaced with an underscore. 4. Set the `Octopus Server URL` field to the URL of the Octopus server to create the new project in. The default value of `#{Octopus.Web.ServerUri}` references the URL of the current Octopus instance. 5. Set the `Octopus API Key` field to the [API key](/docs/octopus-rest-api/how-to-create-an-api-key) used when accessing the instance defined in the `Octopus Server URL` field. 6. Set the `Octopus Space ID` field to the ID of an existing space where the project will be created. 7. Set the `Terraform Additional Apply Params` field to a list of additional arguments to pass to the `terraform apply` command. This field is typically used to supply the values of secret variables, e.g. `-var=eks_octopub_frontend_my_secret_1=TheSecretValue`. It is also useful to override the Git repository for a CaC-enabled project, as [projects cannot share Git repositories](/docs/projects/version-control/config-as-code-reference), e.g. `-var=project_frontend_webapp_git_url=http://github.com/username/project`. 8. Set the `Terraform Additional Init Params` field to a list of additional arguments to pass to the `terraform init` command. Leave this field blank unless you have a specific reason to pass an argument to Terraform. 9. Each `Octopus - Populate Octoterra Space` step exposes values relating to its specific Terraform backend that must be configured. For example, the `Octopus - Populate Octoterra Space (S3 Backend)` step exposes fields to configure the S3 bucket, key, and region where the Terraform state is saved.
Other steps have similar fields. Typically, downstream spaces are represented by tenants in the upstream space. For example, the space called `Acme` is represented by a tenant with the same name. Configuring the `__ 2. Deploy Project` runbook to run against a tenant allows you to manage the creation and updates of downstream projects with a typical tenant-based deployment process. To resolve the name of a tenant representing a downstream space to the space's ID, as required by the `Octopus - Populate Octoterra Space` step, you can use the `Octopus - Lookup Space ID` step from the [community step template library](/docs/projects/community-step-templates). To use the `Octopus - Lookup Space ID` step, add it before the `Octopus - Populate Octoterra Space` step and then reference the space ID as an output variable with an octostache template like `#{Octopus.Action[Octopus - Lookup Space ID].Output.SpaceID}`. Executing the runbook will create a new project in an existing space. Any space level resources referenced by the project are resolved by the resource name using Terraform [data sources](https://developer.hashicorp.com/terraform/language/data-sources), so the project can be imported into any space with the correctly named space level resources. ### Updating project resources The runbooks `__ 1. Serialize Project` and `__ 2. Deploy Project` can be run as needed to serialize any changes to the upstream project and deploy the changes to downstream projects. The Terraform module zip file pushed to the built-in feed is versioned with a unique value each time, so you can also revert changes by redeploying an older package. In this way you can use Octopus to deploy Octopus projects using the same processes you use to deploy your applications.
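The name-based resolution of space level resources can be sketched in Terraform. The fragment below is illustrative only: it assumes the `OctopusDeployLabs/octopusdeploy` provider, the project, lifecycle, and project group names are hypothetical, and attribute names may vary between provider versions (modules generated by octoterra are the authoritative reference).

```hcl
terraform {
  required_providers {
    octopusdeploy = {
      source = "OctopusDeployLabs/octopusdeploy"
    }
  }
}

# Look up space level resources by name instead of hardcoding IDs, so the
# module can be applied to any space containing correctly named resources.
data "octopusdeploy_lifecycles" "default" {
  partial_name = "Default Lifecycle" # assumed lifecycle name
  take         = 1
}

data "octopusdeploy_project_groups" "default" {
  partial_name = "Default Project Group" # assumed project group name
  take         = 1
}

# A hypothetical downstream project referencing the resources looked up above.
resource "octopusdeploy_project" "frontend" {
  name             = "Frontend WebApp"
  lifecycle_id     = data.octopusdeploy_lifecycles.default.lifecycles[0].id
  project_group_id = data.octopusdeploy_project_groups.default.project_groups[0].id
}
```

Because the IDs are resolved at apply time, the same module can be applied to multiple downstream spaces, provided each space contains a lifecycle and project group with the expected names.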
# Varying Azure subscription by environment with Octopus Source: https://octopus.com/docs/deployments/azure/varying-azure-subscription-by-environment.md You may want to use a different Azure subscription depending on which environment you are targeting. This can be achieved by binding the account field to an Octopus variable: 1. Add an [Azure Subscription Account](/docs/infrastructure/accounts/azure) to Octopus. * If you want to use the Account ID in your variable, open the account you just added from **Deploy ➜ Manage ➜ Accounts ➜ [Account name]** and copy the account ID from the URL. ![Account Id](/docs/img/deployments/azure/images/varying-account-id.png) ​ The Account ID is the value after the last `/` in the URL. 2. Create a variable in your project and set the Account ID or Account Name as its value. Make sure to scope this variable to the Environment/Target tag/Target where you'll be using it. ![variable](/docs/img/deployments/azure/images/varying-variable.png) 3. If you are deploying an **Azure Web App**, you will need to create an [Azure Web App Target](/docs/deployments/azure/deploying-a-package-to-an-azure-web-app) for each environment. If you are deploying an **Azure Cloud Service**, you will need to create an [Azure Cloud Service Target](/docs/infrastructure/deployment-targets/azure/cloud-service-targets) for each environment. 4. Once you start the deployment, Octopus will resolve the variables that hold the Account and WebApp/Cloud Service info based on their scope. To use a different account, repeat steps 1-3 and scope the new account variable accordingly. ## Learn more - Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites). # Rollback .NET Application on Windows Server Source: https://octopus.com/docs/deployments/patterns/rollbacks/dotnet-windows-rollbacks.md This guide will walk through rolling back .NET Windows Services and .NET Web Applications hosted on IIS. 
It will use the [OctoFX Sample Application](https://github.com/OctopusSamples/OctoFX). That application has three components: - Database - Windows Service - Website Rolling back a database is out of scope for this guide. As stated in this [article](https://octopus.com/blog/database-rollbacks-pitfalls), rolling back a database schema change could result in wrong or deleted data. This guide focuses on scenarios where there were no database changes or the database changes are backward compatible. Because the database changes are out of scope for rollbacks, the database packages will be "skipped" during the rollback process. ## Existing deployment process For this guide, we will start with the following deployment process for the OctoFX application: 1. Run Database Creation Runbook 1. Deploy the OctoFX database 1. Deploy the OctoFX Windows Service 1. Deploy the OctoFX website 1. Verify the application 1. Notify stakeholders :::figure ![original Windows deployment process](/docs/img/deployments/patterns/rollbacks/dotnet-windows-rollbacks/images/original-windows-deployment-process.png) ::: :::div{.success} View the deployment process on our [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/01-octofx-original/deployments/process). Please log in as a guest. ::: ## Zero configuration rollback The easiest way to roll back to a previous version is to: 1. Find the release you want to roll back. 2. Click the **REDEPLOY** button next to the environment you want to roll back. That redeployment will work because a snapshot is taken when you create a release. The snapshot includes: - Deployment Process - Project Variables - Referenced Variables Sets - Package Versions Re-deploying the previous release will re-run the deployment process as it existed when that release was created.
By default, the deploy package steps (such as deploy to IIS or deploy a Windows Service) will extract the package to a new folder each time a deployment is run, perform the [configuration transforms](/docs/projects/steps/configuration-features/structured-configuration-variables-feature/), and [run any scripts embedded in the package](/docs/deployments/custom-scripts/scripts-in-packages). :::div{.hint} Zero Configuration Rollbacks should work for most of our customers. However, your deployment process might need a bit more fine-tuning. The rest of this guide is focused on disabling specific steps during a rollback process. ::: ## Simple rollback process The typical rollback strategy is to skip specific steps and run additional ones during a rollback. In this example, the database steps will be skipped, and an additional step to [prevent that release from progressing](/docs/releases/prevent-release-progression) will run during a rollback. The updated deployment process will be: 1. Calculate Deployment Mode 1. Run Database Creation Runbook (skip during rollback) 1. Deploy the OctoFX Database (skip during rollback) 1. Deploy the OctoFX Windows Service 1. Deploy the OctoFX Website 1. Block Release Progression (only run during rollback) 1. Verify the Application 1. Notify stakeholders :::figure ![simple rollback for Windows deployment](/docs/img/deployments/patterns/rollbacks/dotnet-windows-rollbacks/images/windows-simple-rollback-process.png) ::: :::div{.success} View the deployment process on our [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/02-octofx-simple-rollback/deployments/process). Please log in as a guest. ::: ### Calculate deployment mode Calculate Deployment Mode is a [community step template](https://library.octopus.com/step-templates/d166457a-1421-4731-b143-dd6766fb95d5/actiontemplate-calculate-deployment-mode) created by Octopus Deploy. It compares the release number being deployed with the current release number for the environment.
When the release number is greater than the current release number, it is a deployment. When it is less, it is a rollback. The step template sets a number of [output variables](/docs/projects/variables/output-variables), including ones you can use in variable run conditions. ### Skip database deployment steps The two steps related to database deployments, Run Database Creation Runbook and Deploy OctoFX Database, should be skipped during a rollback. Unlike code, databases cannot easily be rolled back without risking data loss. For most rollbacks, you won't have database changes. However, a rollback could accidentally be triggered with a database change. For example, rolling back a change in **Test** to unblock the QA team. Skipping these steps during the rollback reduces the chance of accidental data loss. To skip these steps during a rollback, set the variable run condition to be: ``` #{Octopus.Action[Calculate Deployment Mode].Output.RunOnDeploy} ``` We also recommend adding or updating the notes field to indicate it will only run on deployments. :::figure ![Windows updating notes field](/docs/img/deployments/patterns/rollbacks/dotnet-windows-rollbacks/images/windows-updating-notes-field.png) ::: ### Prevent release progression Blocking Release Progression is an optional step to add to your rollback process. The [Block Release Progression](https://library.octopus.com/step-templates/78a182b3-5369-4e13-9292-b7f991295ad1/actiontemplate-block-release-progression) step template uses the API to [prevent the rolled back release from progressing](/docs/releases/prevent-release-progression).
This step includes the following parameters: - Octopus Url: #{Octopus.Web.BaseUrl} (default value) - Octopus API Key: API Key with permissions to block releases - Release Id to Block: #{Octopus.Release.CurrentForEnvironment.Id} (default value) - Reason: This can be pulled from a manual intervention step or set to `Rolling back to #{Octopus.Release.Number}` This step will only run on a rollback; set the run condition for this step to: ``` #{Octopus.Action[Calculate Deployment Mode].Output.RunOnRollback} ``` To unblock that release, go to the release page and click the **UNBLOCK** button. ## Complex rollback process As mentioned earlier, re-deploying the website and Windows service involves re-extracting the package, running configuration transforms, and running any embedded scripts. Generally, those steps will finish within 60 seconds. However, re-deploying those packages carries a small amount of risk because variable snapshots can be updated, or the embedded scripts may be complex and take time to finish. By default, Octopus Deploy will keep all releases on your Windows Server (this can be changed via [retention policies](/docs/administration/retention-policies)), which means the previously extracted and configured Windows Service or Website already exists. Back in Octopus 3.x we added the system variable `Octopus.Action.Package.SkipIfAlreadyInstalled`. When that variable is set to `True`, Octopus Deploy will: 1. Check the `deploymentjournal.xml` to see if the package has already been installed. 2. If it hasn't been installed, it will proceed with the deployment. 3. If it is installed, it will skip the deployment but still set the output variable `Octopus.Action[STEP NAME].Output.Package.InstallationDirectoryPath`. The rollback process in this section will use that functionality to update IIS and the Windows Service registration to point to those older (pre-existing) folders. The resulting process will be: 1. Calculate Deployment Mode 1.
Run Database Creation Runbook (skip during rollback) 1. Deploy the OctoFX Database (skip during rollback) 1. Deploy the OctoFX Windows Service (with `Octopus.Action.Package.SkipIfAlreadyInstalled` set to `True`) 1. Deploy the OctoFX Website (with `Octopus.Action.Package.SkipIfAlreadyInstalled` set to `True`) 1. Update Windows Service Binary Path 1. Restart Windows Service 1. IIS Update Physical Path 1. Block Release Progression (only run during a rollback) 1. Verify the Application 1. Notify stakeholders :::figure ![Windows complex rollbacks](/docs/img/deployments/patterns/rollbacks/dotnet-windows-rollbacks/images/windows-complex-rollbacks.png) ::: :::div{.success} View that deployment process on [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/03-octofx-complex-rollback/deployments/process). Please log in as a guest. ::: ### Comparison to simple rollback process The complex rollback process and simple rollback process have some overlap. Please refer to the earlier section on how to configure these steps. 1. Add Calculate Deployment Mode step 1. Update Run Database Creation Runbook to skip during rollback 1. Update Deploy OctoFX database to skip during rollback 1. Add Block Release Progression step The primary difference between the simple and complex rollback process is the complex rollback process reuses the pre-existing extracted application. ### Add system variable to skip package deployment Adding the system variable `Octopus.Action.Package.SkipIfAlreadyInstalled` will skip already installed packages. That makes a lot of sense for rollbacks but less sense for regular deployments. 
To _only_ skip package installation for rollbacks, set the variable value to be: ``` #{if Octopus.Action[Calculate Deployment Mode].Output.DeploymentMode == "Deploy"}False#{else}True#{/if} ``` :::figure ![Windows skip if already installed](/docs/img/deployments/patterns/rollbacks/dotnet-windows-rollbacks/images/windows-skip-if-already-installed.png) ::: ### Windows Service rollback Updating the existing Windows Service to point to an earlier version of the application involves two steps. 1. [Update Windows Service Binary Path](https://library.octopus.com/step-templates/b6860fcf-9dee-48a0-afac-85e2098df692/actiontemplate-windows-service-change-binary-path) 1. [Restart Windows Service](https://library.octopus.com/step-templates/d1df734a-c0da-4022-9e70-8e1931b083da/actiontemplate-windows-service-restart) The binary path must include the application's .exe file. For example, `#{Octopus.Action[STEP NAME].Output.Package.InstallationDirectoryPath}\YOUR-EXE-FILE.exe`. For this guide, that value will be: ``` #{Octopus.Action[Deploy OctoFX Windows Service].Output.Package.InstallationDirectoryPath}\OctoFX.RateService.exe ``` Set the run condition for this step to: ``` #{Octopus.Action[Calculate Deployment Mode].Output.RunOnRollback} ``` ### Website rollback In modern versions of IIS, updating the physical path is an instantaneous action. All traffic is routed to that new path. To do that, use the [IIS Website - Update Property](https://library.octopus.com/step-templates/34118a0e-f872-435a-8522-d3c7f8515cb8/actiontemplate-iis-website-update-property) step template. 
The parameters to set for this step template are: - Web site name: The name of your website - Name of property to set: `physicalPath` - Value of property to set: `#{Octopus.Action[Deploy OctoFX Website].Output.Package.InstallationDirectoryPath}` Set the run condition for this step to: ``` #{Octopus.Action[Calculate Deployment Mode].Output.RunOnRollback} ``` :::div{.hint} If you are using application pools instead of websites, use the [IIS AppPool - Update Property](https://library.octopus.com/step-templates/183c1676-cb8e-44e8-a348-bbcb2b77536e/actiontemplate-iis-apppool-update-property) step template. ::: ## Simple or complex rollback process We recommend starting with the simple rollback process first. It requires the fewest changes while still giving you rollback functionality. Only move to the complex rollback process if you determine the simple rollback process isn't meeting a specific need. # Installing the Tentacle VM extension via the Azure CLI with Octopus Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-the-azure-cli.md :::div{.problem} The VM extension is deprecated and no longer supported. All customers using the VM extension should migrate to [DSC](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc). ::: The VM Extension can be installed onto a virtual machine via the [Azure command line](https://docs.microsoft.com/en-us/azure/xplat-cli-install). The instructions are slightly different depending on whether you are using the [Resource](#AzureResourceManagerMode) model or the [Classic](#AzureServiceManagementMode) model. Refer to the [configuration structure](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/configuration-structure) for information regarding the format of the `publicSettings.json` and `privateSettings.json` files mentioned in these examples.
:::div{.hint} If you need the ability to customize more of the installation, you might want to consider using the [Azure Desired State Configuration (DSC) extension](https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview) in conjunction with the [OctopusDSC](https://www.powershellgallery.com/packages/OctopusDSC) resource. ::: ## Azure Resource Manager (ARM) mode \{#AzureResourceManagerMode} To install the extension on a VM: ```sh $ azure config mode arm info: Executing command config mode info: New mode is arm info: config mode command OK $ azure vm extension set --resource-group "" --vm-name "" --name "OctopusDeployWindowsTentacle" --publisher-name "OctopusDeploy.Tentacle" --version "2.0" --public-config-path "publicSettings.json" --private-config-path "privateSettings.json" info: Executing command vm extension set info: Looking up the VM "" info: Installing extension "OctopusDeployWindowsTentacle", VM: "" info: vm extension set command OK ``` To find out what extension versions are available: ```sh $ azure vm extension-image list --publisher "OctopusDeploy.Tentacle" --location "" info: Executing command vm extension list + Getting virtual machine extension image types (Publisher: "OctopusDeploy.Tentacle" Location:"") Publisher Type Version Location ---------------------- ---------------------------- ------- ------------- OctopusDeploy.Tentacle OctopusDeployWindowsTentacle 2.0.49 australiaeast OctopusDeploy.Tentacle OctopusDeployWindowsTentacle 2.0.50 australiaeast OctopusDeploy.Tentacle OctopusDeployWindowsTentacle 2.0.54 australiaeast ...
``` To find out what extensions are installed on a VM: ```sh $ azure vm extension get --resource-group "" --vm-name "" + Looking up the VM "" data: Publisher Name Version State data: ----------------- -------------------------- ------- -------- data: Microsoft.Compute WinRMCustomScriptExtension 1.4 Creating info: vm extension get command OK ``` To remove an extension from a VM: ```sh $ azure vm extension set --uninstall --quiet --resource-group "" --vm-name "" --name "OctopusDeployWindowsTentacle" --publisher-name "OctopusDeploy.Tentacle" --version "2.0" Executing command vm extension set Looking up the VM "" Looking up extension "OctopusDeployWindowsTentacle", VM: "" Uninstalling extension "OctopusDeployWindowsTentacle", VM: "" vm extension set command OK ``` ## Azure Service Management (ASM/Classic) mode \{#AzureServiceManagementMode} To install the extension on a VM: ```sh $ azure config mode asm info: Executing command config mode info: New mode is asm info: config mode command OK $ azure vm extension set "" "OctopusDeployWindowsTentacle" "OctopusDeploy.Tentacle" "2.0" --public-config-path "publicSettings.json" --private-config-path "privateSettings.json" info: Executing command vm extension set info: Getting virtual machines info: Updating vm extension info: vm extension set command OK ``` To find out what extension versions are available: ```sh $ azure vm extension list --publisher-name "OctopusDeploy.Tentacle" info: Executing command vm extension list + Getting extensions data: Publisher : OctopusDeploy.Tentacle data: Name : OctopusDeployWindowsTentacle data: Version : 2.0 data: Label : Octopus Deploy Tentacle ... ``` To find out what extensions are installed on a VM: ```sh $ azure vm extension get "" info: Executing command vm extension get + Getting virtual machines data: Publisher Extension name ReferenceName Version State data: -------------------- ---------------- -------------------------- ------- ------ data: OctopusDeploy.Ten... OctopusDeploy...
OctopusDeployWindowsTen... 2.0 Enable info: vm extension get command OK ``` To remove an extension from a VM: ```sh $ azure vm extension set --uninstall "" "OctopusDeployWindowsTentacle" "OctopusDeploy.Tentacle" "2.0" info: Executing command vm extension set info: Getting virtual machines info: Uninstalling vm extension info: vm extension set command OK ``` # Permissions for the Octopus Windows Service Source: https://octopus.com/docs/installation/permissions-for-the-octopus-windows-service.md When you install the Octopus Server, you'll be asked whether Octopus should run as the Local System account, or as a custom user. It's a good practice to set up a dedicated user account for the Octopus Server. Keep in mind that the user principal that the Octopus service runs as needs to be able to do many things: 1. Run as a service ("Log on as a service" rights), so that the service can start. 1. Read and write the Octopus SQL Server Database. If the SQL database is on another server, this is a good reason to use a custom user account. 1. Read and write from the registry and file system (details below). 1. Read any NuGet feeds that use local folders or file shares. The following table acts as a guide for the minimal permission set that Octopus must have for successful operation: | Permission | Object | Reason | Applied with | | --- | --- | --- | --- | | Full control | The Octopus "Home" folder, e.g. `C:\Octopus` | Octopus stores logs, temporary data, and dynamic configuration in this folder. | Windows Explorer | | Read | The directory Octopus was installed to (typically C:\Program Files\Octopus Deploy) | Octopus needs these files in order to run. | Windows Explorer | | Read | The `HKLM\Software\Octopus` registry key | Octopus determines the location of its configuration files from this key. | Regedit | | Full control | The `OctopusDeploy` Windows Service | Octopus must be able to upgrade and restart itself for remote administration. 
| SC.EXE | | Listen | Port **10943** | Octopus accepts commands from polling Tentacles on this port. | NETSH.EXE | | Listen | Port **80** | The Octopus Server responds to browser requests on this port. | NETSH.EXE | | Listen | Port **443** | If using SSL, the Octopus Server responds to browser requests on this port. | NETSH.EXE | | db\_owner | For the SQL database. [Learn more](/docs/installation/sql-server-database). | Octopus needs to be able to manage its database, including making schema changes. | SQL Server Management Studio | If you rely on Octopus to run certain tasks on the Octopus Server, you'll also need to grant appropriate permissions for these. Examples include: - Using the Windows Azure deployment tasks in Octopus (these run on the Octopus Server). - Deploying to an [offline package drop](/docs/infrastructure/deployment-targets/offline-package-drop) deployment target. - Running a [custom script](/docs/deployments/custom-scripts) on the Octopus Server. ## Learn more - [Octopus installation](/docs/installation). # Best Practices Adviser Source: https://octopus.com/docs/octopus-ai/assistant/best-practices-adviser.md The Best Practices Adviser in Octopus AI Assistant helps you identify optimization opportunities and maintain healthy configurations across your Octopus Deploy instance. It analyzes your instance based on the prompt you provide to surface actionable recommendations for improving scalability, reducing technical debt, and ensuring you're following established best practices. The suggested prompts in the Octopus AI Assistant will surface examples to use the Best Practices Adviser. Suggested prompts will also change based on the context of the page you have open in Octopus Deploy. 
For example, if you open a project and launch Octopus AI Assistant, you will see these suggested prompts: ![Best Practices Adviser Screenshot](/docs/img/octopus-ai-assistant/best-practices-adviser.png) `Find unused variables to help improve the maintainability of the project` is a suggested prompt that will use the Best Practices Adviser. In the screenshot (above), it is run in the context of a project called "k8s deployment", and the Best Practices Adviser found three unused variables. It then links to documentation that provides more information about the finding, along with actionable remediation steps. Some example prompts you can use to surface recommendations include: - Find duplicate project variables - Find unused variables - Help me find project variable values that look like plaintext passwords - Suggest tenant tags to make tenants more manageable - Find unused tenants - Find unused targets - Find unused projects # octopus account azure Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-azure.md Manage Azure subscription accounts in Octopus Deploy ```text Usage: octopus account azure [command] Available Commands: create Create an Azure subscription account help Help about any command list List Azure subscription accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus account azure [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. 
::: ```bash octopus account azure list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Passing build information to Octopus Source: https://octopus.com/docs/packaging-applications/build-servers/build-information.md When deploying a release, it's useful to know which build produced the artifact, what commits it contained, and which work items it's associated with. Build information allows you to upload these details from your build server, either manually or with the use of a plugin, to Octopus Deploy. Build information is associated with a package and includes: - Build URL: A link to the build which produced the package. - Commits: Details of the source commits related to the build. - Work items: Issue references parsed from the commit messages. ## Passing build information to Octopus \{#passing-build-information-to-octopus} Build information is passed to Octopus as a file using a custom format. The recommended way to supply the build information is to add the _build information_ step from the Octopus Deploy plugin to your build server. ## Build server support \{#build-server-support} The build information step is currently available in the official Octopus Deploy plugins: - [GitHub Actions](/docs/packaging-applications/build-servers/github-actions) - [TeamCity](/docs/packaging-applications/build-servers/teamcity) - [Bamboo](/docs/packaging-applications/build-servers/bamboo) - [Jenkins](/docs/packaging-applications/build-servers/jenkins) - [TFS/AzureDevOps](/docs/packaging-applications/build-servers/tfs-azure-devops) Check our [downloads page](https://octopus.com/downloads) for our latest build server plugins. 
In addition to the official plugins, there are some community-supported integrations available for: - [BitBucket Pipelines](https://bitbucket.org/octopusdeploy/octopus-cli-run/src/master) - [CircleCI](https://circleci.com/developer/orbs/orb/octopus-samples/octo-exp) - [Continua CI](/docs/packaging-applications/build-servers/continua-ci) Build information is independent of the packages that it relates to. You can pass build information to Octopus **before** the packages have been pushed to either the built-in repository or an external feed. You can also [push build information manually](https://octopus.com/blog/manually-push-build-information-to-octopus) using the Octopus REST API when you aren't using a build server. :::div{.warning} Commit messages and deep links may not be shown if an unsupported `VcsType` is passed to Octopus as part of the build information call. Currently we support values of `Git` and `TFVC` (TFS / Azure DevOps). `SVN` (Subversion) is **not supported**. Work items will not show unless you have one of the [issue tracker](/docs/releases/issue-tracking) integrations configured. ::: ## Build information step \{#build-information-step} All available plugins contain a build information step/task; the TeamCity version of the _build information_ step is shown below. :::figure ![TeamCity build information Step](/docs/img/packaging-applications/build-servers/build-information/images/build-information-step.png) ::: The build information step requires: - Octopus URL: URL of your Octopus server - API Key: API key to use for uploading - (Optional) Space name: Name of the space to upload the build information to - Package ID: List of package IDs to associate the build information with. For Maven packages hosted in external feeds, the groupID and packageID are required; for more information see our [Maven documentation](/docs/packaging-applications/package-repositories/maven-feeds#troubleshooting-maven-feeds). 
- Package version: The version of the packages :::div{.hint} Verbose logging can be used to include more detail in the build logs. This includes a complete output of all build information being passed to Octopus, which can be useful when troubleshooting. ::: :::div{.hint} BuildInformationPush permission is required to push build information to Octopus. If `Overwrite Mode` is set to `Overwrite Existing` BuildInformationAdminister permission is also required. ::: ## Viewing build information \{#viewing-build-information} As of Octopus **2019.10.0**, the build information for a package can be viewed by navigating to **Deploy ➜ Manage ➜ Build Information** :::figure ![Library Build information](/docs/img/packaging-applications/build-servers/build-information/images/library-build-information-2.png) ::: The build information for a package can be viewed on any release which contains the package. :::figure ![Build information on release page](/docs/img/packaging-applications/build-servers/build-information/images/build-information-release-2.png) ::: For packages pushed to the Octopus built-in repository, the build information can also be viewed in the package version details by navigating to **Deploy ➜ Manage ➜ Packages** and selecting the package. :::figure ![Build information on package version page](/docs/img/packaging-applications/build-servers/build-information/images/build-information-package-version-2.png) ::: ## Using build information in release notes \{#build-info-in-release-notes} The build information associated with packages is available for use in [release notes](/docs/releases/release-notes) (and [release notes templates](/docs/releases/release-notes/#templates)) as Octopus variables. 
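As a sketch of how those variables can be used, a release notes template can iterate over the packages in the release and the commits recorded in their build information. The loop below is illustrative only; verify the exact member names (such as `Commits`, `CommitId`, `LinkUrl`, and `Comment`) against the system variable documentation linked below before relying on them:

```text
#{each package in Octopus.Release.Package}
- #{package.PackageId} #{package.Version}
#{each commit in package.Commits}
    - [#{commit.CommitId}](#{commit.LinkUrl}) - #{commit.Comment}
#{/each}
#{/each}
```

A template in this shape renders a bulleted list of packages, each with the commits that went into the build that produced it.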
:::div{.info} When using build information in release notes in conjunction with [built-in package repository triggers (formerly known as _Automatic Release Creation_)](https://octopus.com/docs/projects/project-triggers/built-in-package-repository-triggers) the build information **must** be pushed to Octopus **before** the packages, as the release will be created as soon as the package configured for automatic release creation is pushed. ::: See the [system variable documentation](/docs/projects/variables/system-variables/#release-package-build-information) for the available variables. ## Using build information in deployments \{#build-info-in-deployments} Package build information associated with a release will also be [captured in deployments](/docs/releases/deployment-changes) of the release. From Octopus **2024.2**, build information can be viewed on deployments. :::figure ![Deployment build information](/docs/img/packaging-applications/build-servers/build-information/images/deployment-build-information.png) ::: :::div{.warning} Ensure you're using [pre-release versions](/docs/releases/deployment-changes#versioning) for any releases that aren't intended to be a production release. Any releases that aren't a pre-release will be treated as a full release by Octopus, which can result in deployments containing a larger amount of build information than intended. ::: ## Build information with dynamic packages \{#build-info-with-dynamic-packages} Build information may not appear in a release or deployment if you [dynamically select a package ID at deployment time](/docs/deployments/packages/dynamically-selecting-packages). See the [dynamic package tradeoffs](/docs/deployments/packages/dynamically-selecting-packages#dynamic-packages-and-issue-trackers) section for more information. 
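As noted earlier, build information reaches Octopus as a file in a custom JSON format describing the build and its commits. A minimal hand-rolled sketch is shown below; all values are hypothetical, and the field names should be checked against the manual-push blog post linked above before use:

```json
{
  "BuildEnvironment": "Jenkins",
  "BuildNumber": "101",
  "BuildUrl": "https://ci.example.com/job/my-app/101/",
  "Branch": "main",
  "VcsType": "Git",
  "VcsRoot": "https://github.com/example-org/my-app",
  "VcsCommitNumber": "2fd7322ca33da33a92422a5664b1c8dfcd323f12",
  "Commits": [
    {
      "Id": "2fd7322ca33da33a92422a5664b1c8dfcd323f12",
      "Comment": "Fix null reference when parsing work item links"
    }
  ]
}
```

The build server steps generate a document like this for you; when pushing manually via the REST API, it is wrapped in a payload that identifies the package ID and version (see the linked blog post for the exact shape).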
# Active Directory authentication Source: https://octopus.com/docs/security/authentication/active-directory.md :::div{.hint} Active Directory authentication can only be configured for Octopus Server and not for [Octopus Cloud](/docs/octopus-cloud/). See our [authentication provider compatibility](/docs/security/authentication/auth-provider-compatibility) section for further information. ::: Octopus Deploy can authenticate users using Windows credentials. Windows AD authentication can be chosen during installation of the Octopus Server, or later through the configuration. ## Domain user required during setup When setting up AD Authentication, either via the Octopus setup wizard or by running the commands outlined below to switch to AD authentication mode, make sure you are signed in to Windows as a domain user. If you are signed in as a local user account on the machine (a non-domain user), you won't be able to query Active Directory, so setup will fail. ## Active Directory sign-in options If you are using Active Directory Authentication with Octopus, there are two ways to sign in. 1. Integrated authentication 2. Forms-based authentication ## Authentication schemes By default, Active Directory Authentication will use NTLM as the Authentication Scheme. In many circumstances, you can also configure Octopus to use Kerberos for authentication. If you would like to use Kerberos for authentication, you will need to use User Mode authentication (Kestrel). By default, Active Directory authentication for Octopus Deploy runs in Kernel Mode via HTTP.sys. The mode is dictated by the web server running Octopus Deploy, which can be configured using the `configure` command. Select HTTP.sys for Kernel Mode, or Kestrel for User Mode: ### Kernel Mode authentication via HTTP.sys (default) - Command Line Select this mode if you require features of HTTP.sys, such as port sharing. This mode supports NTLM in both single server and High Availability configurations. 
```bash Octopus.Server.exe configure --webServer=HttpSys ``` ### User Mode authentication via Kestrel - Command Line Select this mode if you require Kerberos authentication. ```bash Octopus.Server.exe configure --webServer=Kestrel ``` ## Integrated authentication The easiest way to sign in when using Active Directory is to use Integrated Authentication. This allows a one-click option to *Sign in with a domain account* as pictured below. :::figure ![Login Screen](/docs/img/security/authentication/active-directory/images/activedirectory-integrated.png) ::: This will instruct the Octopus Server to issue a browser challenge. NTLM Authentication doesn't require much configuration except for allowing NTLM to be used in your network. This is on by default. ### Changing authentication schemes - Command Line ```bash Octopus.Server.exe configure --webAuthenticationScheme=IntegratedWindowsAuthentication ``` Setting `IntegratedWindowsAuthentication` will mean that Octopus will attempt to use Kerberos Authentication instead. [Read about other supported values](https://msdn.microsoft.com/en-us/library/system.net.authenticationschemes(v=vs.110).aspx). :::div{.hint} **How it works** Octopus is built on top of HTTP.sys, the same kernel driver that IIS is built on top of. You may be familiar with "Integrated Windows Authentication" in IIS; this is actually provided by HTTP.sys. This means that Octopus supports the same challenge-based sign-in mechanisms that IIS supports, including Integrated Windows Authentication. When the link is clicked, it redirects to a page which is configured to tell HTTP.sys to issue the browser challenge. The browser and HTTP.sys negotiate the authentication just like an IIS website would. The user principal is then passed to Octopus. Octopus will then query Active Directory for other information about the user. 
::: ### Kerberos vs NTLM security for AD authentication It is possible to explicitly select either `NTLM`, `Negotiate` or `IntegratedWindowsAuthentication` authentication for Active Directory authentication. Using `Negotiate` or `IntegratedWindowsAuthentication` will use Kerberos authentication. In some cases this may result in `NTLM` connections based on the nature of the connecting client. This table describes the options you can choose in Octopus, and the protocols that may be used to authenticate your users as a result. | Octopus Option | Protocols Used | |---------------------------------|-----------------------| | NTLM | NTLM | | Negotiate | Kerberos, NTLM | | IntegratedWindowsAuthentication | Kerberos, NTLM | Without some additional configuration, AD authentication, whether forms-based or integrated, will usually fail to negotiate the use of `Kerberos` authentication and instead choose `NTLM`. ### Supported setups for Active Directory authentication {#supported-active-directory-setups} Octopus Deploy supports various options for Active Directory authentication. Both HTTP.sys and Kestrel web server modes are compatible with High Availability configurations. The choice of web server determines which authentication protocols are available. | Octopus Option | HTTP.sys (Kernel Mode) | Kestrel (User Mode) | |---------------------------------|------------------------|----------------------| | NTLM | Yes | Yes | | Negotiate | NTLM only | Kerberos, NTLM | | IntegratedWindowsAuthentication | NTLM only | Kerberos, NTLM | :::div{.hint} **Service Accounts and Kerberos** From Octopus version 2020.1.0, following an upgrade to .NET Core 3.1 and use of the HTTP.sys library, the Octopus Deploy Service running with Domain Service Account credentials does not have the ability to read the HttpContext.User.Identity.Name property, which is used for Kerberos authentication. 
There is a requirement to run the Octopus Deploy Service as Local System in order to allow Kerberos to authenticate successfully. You can read more about this in [GitHub issue #6602](https://github.com/OctopusDeploy/Issues/issues/6602). ::: ### Configuring Kerberos authentication for Active Directory {#configuring-kerberos} Here's a simple checklist to help you on your way to allowing Kerberos Authentication. 1. Change the Authentication Scheme. 2. Set the Octopus Deploy HTTP/S Bindings to use a Fully Qualified Domain Name (FQDN) or NETBIOS name as per your usage. 3. Add the Octopus Deploy URL to the [list of Trusted Sites](/docs/security/authentication/active-directory/#group-policy-trusted-sites). 4. Allow Automatic logon via a browser. 5. Set the appropriate SPNs. 6. Enable AES256 encryption for Kerberos tickets. - A valid Service Principal Name (SPN) for the `HTTP` service class for each Octopus host NETBIOS name. If you are accessing your host via its FQDN, you will also need to add an SPN for the FQDN under the `HTTP` service class. (Note: whether you've configured your Octopus host to use `HTTP` or `HTTPS`, you only need to set an `HTTP` SPN.) - Included FQDNs of all Octopus Deploy Hosts and Octopus clusters within your trusted sites or Intranet zones. - Client Machines configured to allow auto logon with current username and password. #### SPN Configuration Set an `HTTP` service class SPN for the NETBIOS name and FQDN of your Octopus Deploy hosts. 
For example, if you are hosting `od.domain.local` from server `server1`, you will require the following registered service principal names for your server: ```text HTTP/od HTTP/od.domain.local ``` These can be registered by running the following commands in an elevated command prompt or PowerShell session: ```shell setspn.exe -S HTTP/od server1 setspn.exe -S HTTP/od.domain.local server1 ``` :::div{.hint} **HA Clusters** Kerberos authentication in a High Availability environment requires configuring Octopus to use Kestrel (User Mode). Please refer to our section on [Supported Setups for Active Directory Authentication](#supported-active-directory-setups). ::: For more information about configuration of SPNs [please see this Microsoft support article](https://support.microsoft.com/en-us/help/929650/how-to-use-spns-when-you-configure-web-applications-that-are-hosted-on). #### Internet Security Configuration - Adding Octopus to the Trusted Zone The aim here is to allow the current user's logon credentials to be sent through to Octopus and authenticated against the SPNs. It is important to remember that a URI is considered to be in the "Internet Zone" whenever it contains a `.`. ```text Internet Zone http://host.local http://192.168.x.x http://127.0.0.1 http://octopus.yourdomain.com http://clusterurl.yourdomain.com Intranet Zone http://host http://local ``` Accessing a host via the NETBIOS name will mean that the "Intranet zone" rules will be applied. **This can be overruled by adding the NETBIOS name to the "Trusted Sites" list**. (More detail in this [Microsoft support article](https://support.microsoft.com/en-au/help/303650/intranet-site-is-identified-as-an-internet-site-when-you-use-an-fqdn-o)). The recommended way to configure this is to add all potential URIs that will be used to access Octopus to the "Trusted Sites" list. 
This can be done in several ways, including via Group Policy, scripting, or the [internet security settings menu](https://www.computerhope.com/issues/ch001952.htm). #### Internet Security Configuration - Allow Automatic logon via browser All **client machines** will need to be configured to allow automatic logon. We can set this option on all sites added to the trusted sites zone. This can be done via Group Policy, scripting or via the internet security settings menu. To enable the option via the Internet Security Settings in **Internet Explorer**, go to the **Tools ➜ Internet Options ➜ Security** tab, select "Trusted sites", then **Custom level...**. **Windows 10/Windows Server** Search for "Internet Options" or open **Control Panel ➜ Network and Internet ➜ Internet Options**. In the **Security Settings - Internet Zone** window, go to **User Authentication ➜ Logon** and select **Automatic logon with current username and password**. :::figure ![Client Security](/docs/img/security/authentication/active-directory/images/clientsecurity.png) ::: ### Adding Trusted Sites via Group Policy Object {#group-policy-trusted-sites} To set trusted sites via GPO: 1. Open the **Group Policy Management Editor**. 1. Go to **User Configuration ➜ Policies ➜ Administrative Templates ➜ Windows Components ➜ Internet Explorer ➜ Internet Control Panel ➜ Security Page**. 1. Select the **Site to Zone Assignment List**. 1. Select **Enabled** and click Show to edit the list. Zone value 2 is for trusted sites. 1. Click **OK** then **Apply** and **OK**. ### Allowing auto logon via Group Policy Object 1. Open the **Group Policy Management Editor**. 1. Go to **User Configuration ➜ Policies ➜ Administrative Templates ➜ Windows Components ➜ Internet Explorer ➜ Internet Control Panel ➜ Security Page**. 1. Select the **Logon Options**. 1. Select **Enabled** and click the drop-down menu that has appeared. 1. Select **Automatic logon with current username and password**. 1. 
Click **OK**. That is all that is needed for Kerberos to be used as the logon method when using integrated sign-in or Forms-based authentication. ## Forms-based authentication with Active Directory Octopus allows users to sign in by entering their Active Directory credentials. This is useful if users sometimes need to authenticate with a different account than the one they are signed in to Windows as, or if network configuration prevents integrated authentication from working correctly. :::figure ![Login Screen](/docs/img/security/authentication/active-directory/images/activedirectory-forms.png) ::: :::div{.hint} **How it works** Using this option, the credentials are posted back to the Octopus Server, and Octopus validates them against Active Directory by invoking the Windows API `LogonUser()` function. If that is successful, Octopus will then query Active Directory for information about the user. Keep in mind that if your Octopus Server isn't [configured to use HTTPS](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https), these credentials are posted in plain text (just like signing in to any other website). ::: If the Octopus Server and its users are on the **same domain**, it is sufficient to provide a simple username in this field, for example *paul*. User Principal Names of the form `user@domain.com` are also accepted in this scenario. If the server and its users are on different domains, or **many domains** are in use, the *DOMAIN\user* username format must be provided for users who are not a member of the domain the server is in. See below for more details and examples related to Trusted Domains. :::div{.hint} Users will receive the error "**Username not found. UPN format may not be supported for your domain configuration."** if they have entered a UPN and their details could not be located in the domain. 
This could occur because the UPN really doesn't exist, or it exists in a domain other than the one the Octopus Server is in (which as stated above is not supported). ::: Forms-based authentication can also be disabled: ### Disabling HTML form sign-in ```bash Octopus.Server.exe configure --allowFormsAuthenticationForDomainUsers=false ``` This will result in integrated sign in being the only option: :::figure ![Integrated Sign In Only](/docs/img/security/authentication/active-directory/images/activedirectory-integrated-only.png) ::: ## Switching between username/password and Active Directory Authentication It is possible to reconfigure an existing Octopus Server to use a different authentication mode. ### Select Active Directory authentication To switch from username/password authentication to Active Directory authentication, use the following script from an administrative command prompt on the Octopus Server: #### Selecting Active Directory authentication ```bash Octopus.Server.exe configure --activeDirectoryIsEnabled=true Octopus.Server.exe configure --usernamePasswordIsEnabled=false Octopus.Server.exe admin --username=YOUR_USERNAME ``` The text `YOUR_USERNAME` should be your Active Directory account name, in either **user@domain** or **domain\user** format (see [Authentication Providers](/docs/security/authentication)). ### Select username/password authentication To switch from Active Directory authentication to username/password authentication, use the following script from an administrative command prompt on the Octopus Server: #### Switching to username/password authentication ```bash Octopus.Server.exe configure --activeDirectoryIsEnabled=false Octopus.Server.exe configure --usernamePasswordIsEnabled=true Octopus.Server.exe admin --username=YOUR_USERNAME ``` ### Specify a custom container In **Octopus 2.5.11** and newer you can specify a custom container to use for AD Authentication. 
This feature addresses the issue of authenticating with Active Directory where the Users container is not in the default location and permissions prevent queries as a result. Specifying the container will result in the container being used as the root of the context. The container is the distinguished name of a container object. All queries are performed under this root, which can be useful in a more restricted environment. This may be the solution if you see a "The specified directory service attribute or value does not exist" error when using Active Directory authentication. #### Setting a custom container ```bash Octopus.Server.exe configure --activeDirectoryContainer "CN=Users,DC=GPN,DC=COM" ``` Where `"CN=Users,DC=GPN,DC=COM"` should be replaced with your container. ### Trusted domains Using Trusted Domains is supported by Octopus Deploy. Users from the domain the Octopus Server is a member of will always be allowed to log in. Users from domains that the Octopus Server's domain trusts will also be able to log in. The following diagram illustrates a typical configuration when there is a two-way trust between the domains. :::figure ![Two-way Trust](/docs/img/security/authentication/active-directory/images/domains-twoway.png) ::: In this configuration the Octopus Server is executing as a service account from the same domain that the machine is a member of. When logging in, users from DomainA can use their AD username or UPN, whereas users from DomainB must use the *DOMAIN\user* username format. This is required so that the API calls Octopus makes can locate the domain controller for the correct domain (DomainB in this example). Another common scenario is to have a one-way trust between the domains. This configuration is illustrated in the following diagram: :::figure ![One-way Trust](/docs/img/security/authentication/active-directory/images/domains-oneway.png) ::: In this example, DomainA trusts DomainB. 
Given that both domains trust users from DomainB, the Octopus service should be configured to run as an account from DomainB. If the service was configured to run as an account from DomainA then users from DomainB wouldn't be able to log in and Octopus wouldn't be able to query group information from DomainB. Learn about [configuring Teams to use Trusted Domains](/docs/security/users-and-teams/external-groups-and-roles). ## Learn more The following topics are explained further in this section: - [Moving your Octopus Server to another Active Directory domain](/docs/security/authentication/active-directory/moving-active-directory-domains) - [Specify a custom container for AD authentication](/docs/security/authentication/active-directory/custom-containers-for-ad-authentication) - [Troubleshooting Active Directory integration](/docs/security/authentication/active-directory/troubleshooting-active-directory-integration) # Configuring Keycloak Source: https://octopus.com/docs/security/authentication/oidc-authentication/configuring-keycloak.md Authentication using [Keycloak](https://keycloak.org), a self-hosted identity management service. To use Keycloak authentication with Octopus you will need to: 1. Configure Keycloak to trust your Octopus Deploy instance (by setting it up as an app in Keycloak). 2. Configure your Octopus Deploy instance to trust and use Keycloak as an Identity Provider. ## Configure Keycloak 1. [Install Keycloak](https://www.keycloak.org/guides#getting-started) using one of the many supported installation methods. 1. Open the **Keycloak Administration Console** in your web browser. 1. Navigate to **Manage Realms** and open or create the realm you wish to add Octopus Deploy to. :::div{.hint} Keycloak supports multiple realms, where each realm contains applications, users and groups. 
The `master` realm is typically used just for managing Keycloak and creating other realms, and we recommend that you [create a new realm](https://www.keycloak.org/docs/latest/server_admin/#proc-creating-a-realm_server_administration_guide) for managing other applications, including Octopus Deploy. ::: 1. If you want Keycloak to provide group membership information, you'll first need to create a new client scope. Under **Client scopes**, click **Create client scope**. Enter the following values: - **Name**: `groups` - **Type**: `optional` - **Protocol**: `OpenID Connect` ![Create Groups Client Scope](/docs/img/security/authentication/keycloak/create-client-scope.png) 1. Click **Save**. Click the **Mappers** tab, then click **Add predefined mapper** and search for `groups` in the list. Check the box beside `groups` then click **Add**. ![Add Predefined Groups Mapper](/docs/img/security/authentication/keycloak/add-predefined-groups-mapper.png) 1. Navigate to **Clients** and click **Create client** to create a new client that represents Octopus Deploy: - **Client Type**: `OpenID Connect` - **Client ID**: can be anything, the domain name of the Octopus Deploy server is a good option - **Name**: `Octopus Deploy` or the domain name of the Octopus Deploy server is a good option ![Create Client General Settings](/docs/img/security/authentication/keycloak/create-client-general-settings.png) 1. On the **Capability config** screen, ensure that **Client Authentication** is enabled and choose `S256` for the **PKCE Method**. ![Create Client Capability Config](/docs/img/security/authentication/keycloak/create-client-capability-config.png) 1. On the **Login settings** screen, configure the URLs for Octopus Deploy. The **Root**, **Home**, **Post Logout** and **Web Origins** URLs should all be the URL of the Octopus Deploy server. 
The **Valid redirect URIs** should be `https://your-octopus-url/api/users/authenticatedToken/GenericOidc` (replacing `https://your-octopus-url` with the URL of your Octopus Server). Click **Save**. ![Create Client Login Settings](/docs/img/security/authentication/keycloak/create-client-login-settings.png) 1. Navigate to **Client scopes**, click **Add client scope** and check the box beside `groups`, to allow Octopus Deploy to request group membership information. Click **Add**. 1. Now collect the details you'll need for configuring Octopus Server: - **Issuer URL**: the root URL for your Keycloak server, followed by `/realms/` and the name of the realm, eg: `https://keycloak-server/realms/company` if the name of the realm is `company`. - **Client ID**: as configured above, and shown on the client page on the **Settings** tab - **Client Secret**: on the client page, go to the **Credentials** tab, then click the copy button beside **Client Secret**. ## Configure Octopus Server 1. Navigate to **Configuration ➜ Settings ➜ OpenID Connect** and populate the following fields: - **Enabled** should be set to `Yes`. - **Role Claim Type** should be set to `groups`, to reference the custom claim created earlier. - **Username Claim Type** should be set to `preferred_username`. - **Resource** should be left unset. - **Scopes** should be left as the default of `openid profile email`. - **Display Name** can be used to customize the appearance of the button on the Octopus Deploy login screen. Use a name that your users will recognize for this identity provider. - **Issuer**, **Client ID** and **Client Secret** should be the values you noted when creating the application. - **Allow Auto User Creation** determines if Octopus Deploy should automatically create user accounts, or only allow authentication for users that already exist in Octopus Deploy. 2. Click **Save** to apply the changes. 3. 
If you sign out of Octopus Deploy, you should now see a new button on the login screen to authenticate with the OIDC provider. ### Assign external groups to Octopus teams (optional) If you want to use groups in Keycloak to manage permissions in Octopus Deploy, you can assign those groups to **Teams** in the Octopus Portal. 1. Open the Octopus Portal and select **Configuration ➜ Teams**. 1. Either create a new **Team** or choose an existing one. 1. Under the **Members** section, select the option **Add External Group/Role**. ![Adding Octopus Teams from external providers](/docs/img/security/authentication/images/add-octopus-teams-external.png) 1. Enter the name of the Keycloak group as the **Group/Role ID** and then choose the name that should be displayed in Octopus, then click **Add**. In this example, we're adding an existing Keycloak group called `octopusTesters`. ![Add Octopus Teams Dialog](/docs/img/security/authentication/images/add-octopus-teams-external-dialog.png) 1. Save your changes by clicking the **Save** button. ### Octopus user accounts are still required Octopus still requires a [user account](/docs/security/users-and-teams/) so you can assign those people to Octopus teams and subsequently grant permissions to Octopus resources. Octopus will automatically create a [user account](/docs/security/users-and-teams) based on the profile information returned in the security token, which includes an **Identifier**, **Name**, and **Email Address**. :::div{.hint} **How Octopus matches external identities to user accounts** When the security token is returned from the external identity provider, Octopus looks for a user account with a matching **Identifier**. If there is no match, Octopus looks for a user account with a matching **Email Address**. If a user account is found, the External Identifier will be added to the user account for next time. If a user account is not found, Octopus will create one using the profile information in the security token. 
:::
:::div{.success}
**Already have Octopus user accounts?**
If you already have Octopus user accounts and you want to enable external authentication, simply make sure the Email Address matches in both Octopus and the external identity provider. This means your existing users will be able to sign in using an external identity provider and still belong to the same teams in Octopus.
:::

### Getting permissions

If you are installing a clean instance of Octopus Deploy you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command:

```powershell
Octopus.Server.exe admin --username USERNAME --email EMAIL
```

The most important part of this command is the email, as usernames are not necessarily included in the claims from external providers. When the user logs in, the matching logic must be able to align their user record with the email from the external provider, or they will not be granted permissions.

## Troubleshooting

If you are having difficulty configuring Octopus to authenticate with Keycloak, check your [server logs](/docs/support/log-files) for warnings, and check the Keycloak logs. You may need to enable logging in Keycloak if it's not already turned on, by going to `Realm settings` and then the `Events` tab.

### Double- and triple-check your configuration

Unfortunately, security-related configuration is sensitive to everything. Make sure:

- You don't have any typos or copy-paste errors.
- Remember things are case-sensitive.
- Remember to remove or add slash characters - they matter too!

### Check OpenID Connect metadata is working

You can see the OpenID Connect metadata by going to the Issuer address in your browser and adding `/.well-known/openid-configuration` to the end.
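Since stray or missing slash characters are a common cause of misconfiguration, it can help to derive the metadata URL mechanically rather than by hand. The following is a minimal sketch (the helper function is ours for illustration; it is not part of Octopus or Keycloak) that normalizes a trailing slash before appending the well-known path:

```python
def oidc_discovery_url(issuer: str) -> str:
    """Derive the OpenID Connect discovery URL from an Issuer value.

    The issuer is normalized first so a trailing slash doesn't
    produce a double slash in the resulting URL.
    """
    return issuer.rstrip("/") + "/.well-known/openid-configuration"


# Both forms of the issuer produce the same metadata URL:
print(oidc_discovery_url("https://keycloak-server/realms/company"))
print(oidc_discovery_url("https://keycloak-server/realms/company/"))
```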
In our example this would have been something like `https://keycloak-server/realms/company/.well-known/openid-configuration`.

### Contact Octopus Support

If you aren't able to resolve the authentication problems yourself using these troubleshooting tips, please reach out to our [support team](https://octopus.com/support) with:

1. The contents of your OpenID Connect Metadata or the link to download it (see above).
2. A screenshot of the Octopus User Accounts, including their username, email address, and name.

# Managing runbook resources

Source: https://octopus.com/docs/best-practices/platform-engineering/managing-runbook-resources.md

[Serializing and deploying runbook resources](https://www.youtube.com/watch?v=mPBeqOwkY4Q)

[Runbooks](/docs/runbooks) are a component of a project, sharing or referencing much of the project's configuration such as variables, connected variable sets, tenants, and lifecycles. However, it can be useful to treat runbooks as an independently deployable artifact. This allows a common runbook to be shared across many projects. Runbooks can be defined as a Terraform module and applied to an existing project, effectively "deploying" the runbook into the project.

:::div{.hint}
Runbooks are not managed by Config-as-code.
:::

Runbooks can be defined in a Terraform module in two ways:

- Write the module by hand
- Serialize an existing project to a Terraform module with [octoterra](https://github.com/OctopusSolutionsEngineering/OctopusTerraformExport)

## Writing by hand

The process of defining a runbook in Terraform is much the same as defining a project. Both a runbook and a project have the concept of a deployment process that defines the steps to be run. See the [Managing project resources](/docs/platform-engineering/managing-project-resources) section for more information on defining steps in Terraform by hand.
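As a sketch of what a hand-written module might look like, the fragment below assumes the community `octopusdeploy` Terraform provider. The resource and attribute names shown (`octopusdeploy_runbook`, `project_id`) reflect that provider and may differ between provider versions, so treat this as an illustration rather than a definitive definition:

```hcl
terraform {
  required_providers {
    octopusdeploy = {
      source = "OctopusDeployLabs/octopusdeploy"
    }
  }
}

# The project the runbook is "deployed" into is passed in as a variable,
# so the same module can be applied to many projects.
variable "project_id" {
  type        = string
  description = "The ID of the project to add the runbook to"
}

resource "octopusdeploy_runbook" "example" {
  name        = "Example Runbook"
  project_id  = var.project_id
  description = "A runbook managed as an independently deployable Terraform module"
}
```

Steps would then be attached to the runbook's process in the same way as a project deployment process.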
## Serializing with octoterra The second approach is to create a management, or upstream, runbook using the Octopus UI and then export the runbook to Terraform modules with [octoterra](https://github.com/OctopusSolutionsEngineering/OctopusTerraformExport). This allows you to rely on the UI for convenience and validation and then serialize the runbook to a Terraform module. :::div{.hint} You are free to edit the Terraform module created by octoterra as you see fit once it is exported. ::: Octopus includes a number of steps to help you serialize a runbook with octoterra and apply the module to a new space. :::div{.hint} The steps documented below are best run on the `Hosted Ubuntu` worker pools for Octopus Cloud customers. ::: 1. Create a project with a runbook called `__ Serialize Runbook`. Runbooks with the prefix `__ ` (two underscores and a space) are automatically excluded when exporting projects, so this is a pattern we use to indicate runbooks that are involved in serializing Octopus resources but are not to be included in the exported module. 2. Add the `Octopus - Serialize Runbook to Terraform` step from the [community step template library](/docs/projects/community-step-templates). 1. Tick the `Ignore All Changes` option to instruct Terraform to ignore any changes made to a project through the UI using the [lifecycle meta-argument](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle). Enabling this option allows the runbook to be edited via the UI once it is deployed and Terraform will not overwrite those changes when reapplying the module. Leave the option disabled to have Terraform overwrite any changes to the downstream runbook when the module is reapplied. 2. Set the `Terraform Backend` field to the [backend](https://developer.hashicorp.com/terraform/language/settings/backends/configuration) configured in the exported module. The step defaults to `s3`, which uses an S3 bucket to store Terraform state. 
However, any backend provider can be defined here. 3. Set the `Octopus Server URL` field to the URL of the Octopus server to export a space from. The default value of `#{Octopus.Web.ServerUri}` references the URL of the current Octopus instance. 4. Set the `Octopus API Key` field to the [API key](/docs/octopus-rest-api/how-to-create-an-api-key) used to access the instance defined in the `Octopus Server URL` field. 5. Set the `Octopus Space ID` field to the ID of the space to be exported. The default value of `#{Octopus.Space.Id}` references the current space. 6. Set the `Octopus Project Name` field to the name of the project that contains the runbook to be exported. The default value of `#{Octopus.Project.Name}` references the current project. 7. Set the `Octopus Runbook Name` field to the name of the runbook to serialize. 8. Set the `Octopus Upload Space ID` field to the ID of another space to upload the resulting Terraform module zip file to the built-in feed of that space. Leave this field blank to upload the zip file to the built-in feed of the current space. Executing the runbook will: - Export the runbook to a Terraform module - Zip the resulting files - Upload the zip file to the built-in feed of the current space or the space defined in the `Octopus Upload Space ID` field The zip file has one directory called `space_population` which contains a Terraform module to populate a space with the exported resources. :::div{.hint} Many of the exported resources expose values, like resource names, as Terraform variables with default values. You can override these variables when applying the module to customize the resources, or leave the Terraform variables with their default value to recreate the resources with their original names. ::: ## Dealing with project variables The exported module defines only the runbook and the runbook deployment process. It does not define other project level resources like project variables or variable sets. 
Any project that the exported runbook is added to is expected to define all the variables referenced by the runbook. Any project level variables required by the runbook can be defined as Terraform resources and deployed alongside the exported runbook module. The instructions documented in the [Managing project resources](/docs/platform-engineering/managing-project-resources) section can be used to export a project to a Terraform module. The project level variables can be copied from the exported project module and placed in their own module as needed.

## Importing a runbook

The following steps create a runbook in an existing project with the Terraform module exported using the instructions from the previous step:

1. Create a project with a runbook called `__ Deploy Runbook`. Runbooks with the prefix `__ ` (two underscores and a space) are automatically excluded when exporting projects, so this is a pattern we use to indicate runbooks that are involved in serializing Octopus resources but are not to be included in the exported module.
2. Add one of the steps called `Octopus - Add Runbook to Project` from the [community step template library](/docs/projects/community-step-templates). Each step indicates the Terraform backend it supports. For example, the `Octopus - Add Runbook to Project (S3 Backend)` step configures an S3 Terraform backend.
   1. Configure the step to run on a worker with a recent version of Terraform installed, or use the `octopuslabs/terraform-workertools` container image.
   2. Set the `Terraform Workspace` field to a [workspace](https://developer.hashicorp.com/terraform/language/state/workspaces) that maintains the state of Octopus resources created by Terraform. The default value of `#{OctoterraApply.Octopus.SpaceID}_#{OctoterraApply.Octopus.Project | Replace "[^A-Za-z0-9]" "_"}` uses a workspace based on the ID of the space and the name of the project that is being populated. Leave the default value unless you have a specific reason to change it.
   3. Select the package created by the export process in the previous section in the `Terraform Module Package` field. The package name is the same as the exported runbook name, with all non-alphanumeric characters replaced with an underscore.
   4. Set the `Octopus Server URL` field to the URL of the Octopus server to create the new project in. The default value of `#{Octopus.Web.ServerUri}` references the URL of the current Octopus instance.
   5. Set the `Octopus API Key` field to the [API key](/docs/octopus-rest-api/how-to-create-an-api-key) used when accessing the instance defined in the `Octopus Server URL` field.
   6. Set the `Octopus Space ID` field to the ID of an existing space where the project will be created.
   7. Set the `Octopus Project Name` field to the name of the project to deploy the runbook into.
   8. Set the `Terraform Additional Apply Params` field to a list of additional arguments to pass to the `terraform apply` command. This field is typically used to override the name of the runbook e.g. `"-var=runbook_eks_octopub_audits____describe_pods_name=The New Runbook Name"`. Leave this field blank if you do not wish to customize the deployed runbook.
   9. Set the `Terraform Additional Init Params` field to a list of additional arguments to pass to the `terraform init` command. Leave this field blank unless you have a specific reason to pass an argument to Terraform.
   10. Each `Octopus - Add Runbook to Project` step exposes values relating to its specific Terraform backend that must be configured. For example, the `Octopus - Add Runbook to Project (S3 Backend)` step exposes fields to configure the S3 bucket, key, and region where the Terraform state is saved. Other steps have similar fields.

Typically, downstream spaces are represented by tenants in the upstream space. For example, the space called `Acme` is represented by a tenant with the same name.
Configuring the `__ Deploy Runbook` runbook to run against a tenant allows you to manage the creation and updates of downstream projects with a typical tenant-based deployment process.

To resolve the name of a tenant representing a downstream space to its space ID, as required by the `Octopus - Populate Octoterra Space` step, you can use the `Octopus - Lookup Space ID` step from the [community step template library](/docs/projects/community-step-templates). Add it before the `Octopus - Populate Octoterra Space` step, and then reference the space ID as an output variable with an octostache template like `#{Octopus.Action[Octopus - Lookup Space ID].Output.SpaceID}`.

Executing the runbook will create a new runbook in an existing project. Any space level resources referenced by the project are resolved by resource name using Terraform [data sources](https://developer.hashicorp.com/terraform/language/data-sources), so the runbook can be imported into any space with correctly named space level resources.

### Updating project resources

The runbooks `__ Serialize Runbook` and `__ Deploy Runbook` can be run as needed to serialize any changes to the upstream runbook and deploy the changes to downstream runbooks. The Terraform module zip file pushed to the built-in feed is versioned with a unique value each time, so you can also revert changes by redeploying an older package. In this way, you can use Octopus to deploy Octopus runbooks with the same processes you use to deploy applications.

# Azure environments with Octopus

Source: https://octopus.com/docs/deployments/azure/azure-environments.md

The vast majority of Azure users and subscriptions operate in the AzureCloud environment. There are also an increasing number of other Azure environments, for example Azure Germany, Azure China, Azure US Gov, and Azure US DoD.
These are designed to be isolated from other cloud environments, and as such have their own hosting and API endpoints. In order to deploy to these environments from Octopus Deploy, the endpoint configuration must be overridden. This page describes how to override the values. The defaults for all settings related to the environment are blank, which denotes the use of the AzureCloud environment.

The first thing you need when overriding the values is to know what the endpoints are for your target environment. You can get these using the following PowerShell command (note: the Azure PowerShell modules must be loaded):

```powershell
Get-AzureEnvironment
```

You'll usually see a number of entries displayed. Below are the details for one of the environments:

:::figure
![Azure Germany cloud details](/docs/img/deployments/azure/images/de.png)
:::

Armed with that information, you now need to head over to the Azure Account page in Octopus Deploy. Depending on the authentication method (Management Certificate or Service Principal), the UI will look slightly different. Service Principal accounts will appear as follows:

:::figure
![Service Principal fields](/docs/img/deployments/azure/images/sp.png)
:::

And Management Certificate accounts as below:

:::figure
![Management Certificate fields](/docs/img/deployments/azure/images/mc.png)
:::

Once you have entered the environment name and endpoint values, you should **Save and Test** the account.

## Step templates

Whenever you are using an Azure step template, once you've selected an account, its settings will be used to determine the endpoints for all API operations. So lists like Resource Groups and Web Apps will be loaded using the endpoints defined by the Account.

## Calamari and deployments

When a deployment executes, the values for the environment and endpoints will be passed to Calamari if they have been overridden (i.e. they aren't blank).
You will be able to see the values if you have [OctopusPrintVariables set to true](/docs/support/debug-problems-with-octopus-variables/#write-variables-to-deployment-log), and Calamari will also always log an information message to tell you if it's using overridden values and what they are.

## Learn more

- Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites)

# Using a Managed Service Account (MSA)

Source: https://octopus.com/docs/installation/managed-service-account.md

You can run the Octopus Server using a Managed Service Account (MSA):

1. Install the Octopus Server and make sure it is running correctly using one of the built-in Windows service accounts or a custom account.
1. Reconfigure the Octopus Server Windows service to use the MSA, either manually using the Service snap-in, or using `sc.exe config "OctopusDeploy" obj= Domain\Username$`.
1. Restart the Octopus Server Windows service.

Learn about [using Managed Service Accounts](https://technet.microsoft.com/en-us/library/dd548356(v=ws.10).aspx).

## Learn more

- [Octopus installation](/docs/installation)

# octopus account azure create

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-azure-create.md

Create an Azure subscription account in Octopus Deploy

```text
Usage:
  octopus account azure create [flags]

Aliases:
  create, new

Flags:
  -d, --description string                    A summary explaining the use of the account to other users.
      --ad-endpoint-base-uri string           Set this only if you need to override the default Active Directory Endpoint.
      --application-id string                 Your Azure Active Directory Application ID.
      --application-key string                The password for the Azure Active Directory application.
      --azure-environment string              Set only if you are using an isolated Azure Environment. Valid options are
                                              AzureChinaCloud, AzureGermanCloud or AzureUSGovernment.
  -D, --description-file file                 Read the description from file
  -e, --environment stringArray               The environments that are allowed to use this account
  -n, --name string                           A short, memorable, unique name for this account.
      --resource-management-base-uri string   Set this only if you need to override the default Resource Management Endpoint.
      --subscription-id string                Your Azure subscription ID.
      --tenant-id string                      Your Azure Active Directory Tenant ID.

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**
Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus account azure create
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Configuring Microsoft Entra ID

Source: https://octopus.com/docs/security/authentication/oidc-authentication/configuring-microsoft-entra.md

## Configure Microsoft Entra ID

[How to configure Microsoft Entra ID](/docs/security/authentication/azure-ad-authentication#configure-microsoft-entra-id)

## Configure Octopus Server

1. Navigate to **Configuration ➜ Settings ➜ OpenID Connect** and populate the following fields:
- **Enabled** should be set to `Yes`.
- **Role Claim Type** is optional, but set this to `roles` if you [want to automatically assign users to teams](/docs/security/authentication/azure-ad-authentication#assign-app-registration-roles-to-octopus-teams-optional).
- **Username Claim Type** set to `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`. - **Resource** should be left unset. - **Scopes** should be left as the default of `openid profile email`. - **Display Name** can be used to customize the appearance of the button on the Octopus Deploy login screen. Use a name that your users will recognize for this identity provider. - **Issuer** should be a URL like `https://login.microsoftonline.com/GUID` where the `GUID` is a particular GUID identifying your Microsoft Entra ID tenant. This is the **Directory (tenant) ID** in the Azure App Registration Portal. - **Client ID** which should be a GUID. This is the **Application (client) ID** in the Azure App Registration Portal. - **Client Secret** which should be a long string value. This is the **Value** of a client secret in the Azure App Registration Portal. :::div{.hint} Note that the value of **Client Secret** cannot be retrieved once set - it can only be changed or deleted ::: - **Allow Auto User Creation** determines if Octopus Deploy should automatically create user accounts, or only allow authentication for users that already exist in Octopus Deploy. 2. Click **Save** to apply the changes. 3. If you sign out of Octopus Deploy, you should now see a new button on the login screen to authenticate with the OIDC provider. # Expose the Octopus Web Portal over HTTPS Source: https://octopus.com/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https.md The Octopus Web Portal is the main interface that your team will use to interact with the Octopus Server. During installation, you'll choose a port number for the server to listen on, and it uses HTTP by default. However, Octopus can also be configured to run on HTTPS. You can force all traffic to use HTTPS and even enable HSTS if desired. 
Octopus supports different types of SSL certificates, with built-in support for [Let's Encrypt](/docs/security/exposing-octopus/lets-encrypt-integration) to make HTTPS as simple as possible. ## Choosing an SSL certificate Octopus can use any valid SSL certificate, whether it is from a Certificate Authority, managed by Let's Encrypt, or even a self-signed certificate. The easiest way to get started with HTTPS is to use [Let's Encrypt](/docs/security/exposing-octopus/lets-encrypt-integration) which is trusted and free to use forever in production systems. ### Use Let's Encrypt to manage your SSL certificate Let's Encrypt is the best way to get started with HTTPS in Octopus. It is a trusted and free service, which automatically renews your SSL certificate so you don't need to worry about expiry. We have built a wizard to do all the hard work so you can get up and running with HTTPS in a couple of minutes. [Get started with Let's Encrypt](/docs/security/exposing-octopus/lets-encrypt-integration). ### Bring your own SSL certificate {#import-ssl-certificate} You can use your own SSL certificate which could be signed by any trusted Certificate Authority. If the certificate you intend to use doesn't exist in the Windows certificate store already, you'll need to import it from a PFX file containing both the public certificate and private key. The following steps will show you how to import your certificate: 1. Launch an empty Microsoft Management Console shell by running **mmc.exe** from the start menu, command line or Win+R run dialog. 1. From the File menu, click Add/Remove Snap-in... 1. Add the Certificates snap-in, and when prompted, choose the Computer account scope: 1. On the "Select Computer" page of the Wizard, select the **Local computer**, then click **Finish**. Click **OK** to close the Add/Remove Snap-ins dialog. 1. You can either import the certificate to the **Personal** store, or the **Web Hosting** store (this store may or may not exist on your server). 
Expand to the Certificates directory, then open the import wizard: ![](/docs/img/security/exposing-octopus/images/3278100.png) 1. Follow the steps in the wizard to import your certificate. Your certificate will normally be in a .**PFX** file, and it should include both the **public** X.509 certificate, and the **private key** for the certificate. 1. Once the certificate is imported, double click the certificate to bring up the properties. You should see an icon indicating that the private key has also been imported: ![](/docs/img/security/exposing-octopus/images/3278099.png) 1. If all these requirements have been met (**private key** imported, either the **Web Hosting** or **Personal** stores, in the **Local Computer** scope), the certificate should appear when you select to use an existing certificate when adding your HTTPS binding: ![](/docs/img/security/exposing-octopus/images/ssl.png) ### Let Octopus generate a self-signed certificate If you are testing Octopus, and don't want to use an existing certificate nor Let's Encrypt, Octopus can generate a self-signed certificate for you. This certificate will not be trusted by your web browser, but it will let you test Octopus over a secure HTTPS connection. 1. [Follow the steps below](#change-web-portal-bindings) to set up an HTTPS binding, and click the **Generate new self-signed certificate** link in the wizard. 1. Select that self-signed certificate for your HTTPS binding. 1. You will need to ignore any messages in your browser about being an untrusted SSL certificate and continue to the site. ## Changing your web portal bindings manually {#change-web-portal-bindings} If you are bringing your own SSL certificate, or want to configure a complex set of HTTP/HTTPS bindings, the easiest way to do this is using the Octopus Server Manager. 1. Open the **Octopus Manager** application on the Octopus Server. You'll find this in the start menu. ![](/docs/img/security/exposing-octopus/images/3278103.png) 1. 
From Octopus Manager, you can launch a wizard to modify the bindings that are associated with the Octopus Web Portal:
![](/docs/img/security/exposing-octopus/images/bindings.png)
1. In the Web Bindings wizard, click **Add...** to add a new binding, and choose the HTTPS scheme. Other options such as the port can also be configured here.
![](/docs/img/security/exposing-octopus/images/addingssl.png)
Since HTTPS requires an SSL certificate, you can either choose to generate a new, self-signed (untrusted) certificate, or select an existing certificate. Self-signed certificates are useful for testing or to achieve encryption without trust, but for production use we recommend a trusted SSL certificate.
1. Follow the rest of the Wizard steps to add the binding and reconfigure the Octopus Server.

## Updating the SSL certificate of an existing web portal binding

Updating the SSL certificate of an existing binding requires a slightly different approach.

1. Open the **Change bindings...** screen, as in [Changing Your Web Portal Bindings Manually](#change-web-portal-bindings) steps 1 & 2.
1. Select the binding whose SSL Certificate you want to update and click **Add...** to open the details. Note these details and click **OK** to return to the binding list.
1. Click **Remove** and then **Add...** to recreate the binding, using the details from the previous step. When selecting the SSL Certificate, select the desired certificate. Click **OK** to return to the bindings list.
![](/docs/img/security/exposing-octopus/images/updatessl.png)
> At this point, the bindings have not changed yet, as we haven't applied this change to the server.
1. To apply this change to the server, follow the rest of the Wizard steps to add the binding and reconfigure the Octopus Server. Once the **Apply** button is clicked, a script is generated to update the Web Portal Binding with your new SSL certificate.
You can review this script, prior to running it, by clicking on the **Show script** link. ## Forcing HTTPS {#ForcingHTTPS} A common scenario when hosting the Octopus Server is to redirect all requests initiated over HTTP to HTTPS. With this configuration you can navigate to the Octopus Server using either the `http://` or `https://` scheme, but have Octopus automatically redirect all `http://` requests to use the equivalent `https://` route. 1. Configure binding(s) for `http://` - this allows browsers to initiate their request over HTTP so Octopus can then redirect to HTTPS. 1. Configure SSL binding(s) for `https://` using the correct SSL certificate. 1. Test you can use Octopus with either `http://` or `https://` schemes without being redirected (the scheme stays the same) - this proves both endpoint bindings are working as expected. 1. Configure Octopus to `Redirect HTTP requests to HTTPS` - you can do this using the Octopus Server Manager application where you configure the bindings as soon as you have configured an HTTPS binding. ![](/docs/img/security/exposing-octopus/images/forcessl.png) ## HTTP strict transport security (HSTS) {#hsts} HTTP Strict Transport Security is an HTTP header that can be used to tell the web browser that it should only ever communicate with the website using HTTPS, even if the user tries to use HTTP. This allows you to lessen the risk of a man-in-the-middle (MITM) attack or a HTTP downgrade attack. However, it is not a panacea - it still requires a successful connection on first use (ie, it does not resolve the Trust-On-First-Use (TOFU) issue). **Octopus 3.13** and above can send this header, but due to the potential pitfalls, it is opt-in. 
To switch it on, run the following commands on your Octopus Server:

```
PS> Octopus.Server.exe configure --hstsEnabled=true --hstsMaxAge=31556926
PS> Octopus.Server.exe service --stop --start
```

This will send the header on every HTTPS response, telling browsers to enforce HTTPS for 1 year (31556926 seconds) from the most recent request.

:::div{.hint}
We highly recommend using a short value for `hstsMaxAge`, like 1 hour (3600 seconds), until you are comfortable that it works in your environment. This way you can disable HSTS and browsers will return to normal after 1 hour.
:::

:::div{.warning}
Please note that enabling HSTS comes with its own challenges. For example:

* Untrusted / self-signed certificates will not work with HSTS - the certificate chain needs to be fully trusted by the browser.
* Your Octopus Server must be hosted on standard ports - HTTP on port 80 and HTTPS on port 443.
* Reverting from HTTPS to HTTP will not be simple - each browser will need to be manually reconfigured to remove the HSTS entry.
:::

# Upgrading host OS or .NET

Source: https://octopus.com/docs/administration/upgrading/guide/upgrade-host-os-or-net.md

Eventually, the server hosting Octopus Deploy or the .NET version installed will reach end of life. From both a practical and security point of view, continuing to run Octopus Deploy on unsupported software is not recommended. But upgrades take time, from configuration to testing, and there is a risk of downtime.

## Recommended approach - leverage high availability

If you have a Data Center or a Server license, it is possible to upgrade the host OS or .NET without downtime and with minimal risk. Those licenses support an unlimited number of high availability (HA) nodes. If you do not have HA configured, this is an excellent time to do it. There are numerous benefits, including horizontal scaling, a more robust CI/CD pipeline, and low-friction maintenance.
Please see our guide on [configuring high availability](/docs/administration/high-availability/configure).

Once high availability is configured, the process to upgrade the host OS is:

1. Create a new VM with the desired OS or .NET installed.
2. Install Octopus Deploy on that new VM and add it as a new node.
3. In the Octopus UI, go to **Configuration ➜ Nodes**, click the overflow menu (`...`) next to the new node you just created, and set the task cap to 0.

The new node is now part of the HA cluster, but it isn't part of the load balancer, so it doesn't accept UI requests or process tasks. At this point, you can slowly bring this new node online.

### Test the Octopus UI on the new node

The first step is to test the Octopus UI to make sure it is responding correctly. To do that, you can follow this process. It is meant to take a little bit of time to reduce risk. If at any point something isn't working, please contact our [support team](https://octopus.com/support).

1. Navigate directly to the new VM and use Octopus Deploy as you normally would for a few hours or days.
2. Assuming everything is working as expected, add the new node into the load balancer. If possible, configure the load balancer to only use the new node for 10 or 20% of all requests.
3. Assuming everything is working as expected, and no one is complaining, configure the load balancer to send traffic equally to all nodes. If you are unsure which node served a UI request, you can check the network trace in your tooling; the node name is returned in the `Octopus-Node` response header.
4. If, at any point, something isn't working right, remove the new VM from the load balancer to investigate further.

### Have the new node process tasks

Now that the new node is hosting UI requests without issue, it is time to move on to processing tasks. Just like with the UI, the idea is to ease the new VM into processing tasks.

1. Pick a time with minimal deployments.
Change the task cap on all the existing nodes to 0, and change the task cap on the new node to 5. Do a couple of test deployments and health checks.
2. Assuming the new node had no problem processing tasks, change the task cap on the new node back to 1, and change the task cap on all the other nodes back to their original values. Wait a few days and keep an eye out for any failures.
3. Assuming all the tasks the new node picks up are processed successfully, change its task cap to match all the other nodes.
4. If, at any point, something isn't working right, change the task cap back to zero to investigate further.

### Removing older nodes

Wait a few days or weeks. If no oddities come up, go through the decommissioning process for the old nodes.

## Alternative approach - clone instance

Configuring High Availability can take time, or you might be on a license which doesn't support HA. The other option is to clone the instance and migrate over.

### Process

Creating a clone of an existing instance involves:

1. Stop the current instance of Octopus Deploy.
1. Download the same version of Octopus Deploy as your current instance.
1. Install that version on a new server and configure it to point to the existing database.
1. Copy all the files from the backed-up folders on the source instance.
1. Test the cloned instance. Verify all API scripts, CI integrations, and deployments work.
1. Migrate over to the new instance.

If anything goes wrong, stop the cloned instance, and start the old instance back up.

### Downloading the same version of Octopus Deploy

Migrating data from Octopus to a test instance requires both the main instance and test instance to be on the same version. You can find the version you are running by clicking on your name in the top right corner of your Octopus Deploy instance.
:::figure
![](/docs/img/shared-content/upgrade/images/find-current-version.png)
:::

You can find all the previous versions on the [previous versions download page](https://octopus.com/downloads/previous).

### Installing Octopus Deploy

Run the MSI you downloaded to install Octopus Deploy. Once the MSI is finished, the **Octopus Manager** will automatically launch. Follow the wizard, and on the section where you configure the database, select the pre-existing database.

:::figure
![](/docs/img/shared-content/upgrade/images/select-existing-database.png)
:::

Selecting an existing database will prompt you to enter the Master Key.

:::figure
![](/docs/img/shared-content/upgrade/images/enter-master-key.png)
:::

Enter the Master Key you backed up earlier, and the manager will verify the connection works. Finish the wizard, keeping an eye on each setting to ensure it matches your main instance. For example, if your main instance uses Active Directory, your cloned instance should also be configured to use Active Directory. After the wizard is finished and the instance is configured, log in to the cloned instance to ensure your credentials still work.

### Copy all the files from the main instance

After the instance has been created, copy all the contents from the following folders:

- _Artifacts_, the default is `C:\Octopus\Artifacts`
- _Packages_, the default is `C:\Octopus\Packages`
- _Tasklogs_, the default is `C:\Octopus\Tasklogs`
- _EventExports_, the default is `C:\Octopus\EventExports`

Failure to copy over files will result in:

- Empty deployment screens
- Missing packages on the internal package feed
- Missing project or tenant images
- Missing archived events
- And more

### Backup the server folders

The server folders store large binary data outside of the database. By default, the location is `C:\Octopus`. If you have High Availability configured, they will likely be stored on a NAS or some other file share.

- **Packages**: The default location is `C:\Octopus\Packages\`.
It stores all the packages in the internal feed.
- **Artifacts**: The default location is `C:\Octopus\Artifacts`. It stores all the artifacts collected during a deployment, along with project images.
- **Tasklogs**: The default location is `C:\Octopus\Tasklogs`. It stores all the deployment logs.
- **EventExports**: The default location is `C:\Octopus\EventExports`. It stores all the exported event audit logs.

Any standard file-backup tool will work, even [RoboCopy](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy). Very rarely will an upgrade change these folders. The release notes will indicate if these folders are going to be modified.

### Migrating over to a new instance

All the sensitive variables, certificates, and other items required to connect to your deployment targets are stored in the database. Assuming you are not using polling Tentacles (or, if you are, the DNS name hasn't changed), everything should work out of the box. Start running some tests on the new instance to make sure the new host OS or .NET version hasn't broken anything.

### Considerations

As you migrate your instance, here are a few items to consider.

1. Will the new instance's domain name be the same, or will it change? For example, will it change from https://octopusdeploy.mydomain.com to https://octopus.mydomain.com? If it changes and you are using polling Tentacles, you will need to create new Tentacle instances for the new Octopus Deploy instance.
2. Which CI or build servers integrate with Octopus Deploy? Do those plug-ins need to be updated? You can find several of the plug-ins on the [downloads page](https://octopus.com/downloads).
3. Do you have any internally developed tools or scripts that invoke the Octopus API? We've done our best to maintain backward compatibility, but there might be some changes.
4. What components do you use the most? What does a testing plan look like?
5.
Chances are there are new features and functionality you haven't been exposed to. How will you train people on the new functionality? If unsure, please [contact us](https://octopus.com/support) to get pointed in the right direction.

### Polling Tentacles

A polling Tentacle can only connect to one Octopus Deploy instance. It connects via DNS name or IP address. If the new instance's DNS name changes - for example, the old instance was https://octopusdeploy.mydomain.com with the new instance set to https://octopus.mydomain.com - you'll need to clone each polling Tentacle instance on each deployment target.

To make things easier, we have provided [this script](https://github.com/OctopusDeployLabs/SpaceCloner/blob/master/CloneTentacleInstance.ps1) to help clone a Tentacle instance. The script will look at the source instance, determine the roles, environments, and tenants, then create a cloned Tentacle and register it with your cloned instance.

:::div{.hint}
Any script that clones a Tentacle instance must be run on the deployment target. It cannot be run on your development machine.
:::

# Forking Git repositories

Source: https://octopus.com/docs/best-practices/platform-engineering/forking-git-repos.md

[Serializing and deploying CaC enabled projects](https://www.youtube.com/watch?v=VGgR4PuWvOQ)

Octopus does not support two Config-as-Code (CaC) enabled projects pointing to the same Git repository. This means you must fork the Git repository hosting the upstream project and then point the downstream project to the new fork. The `GitHub - Fork Repo` step from the community step template library automates the process of forking repositories in GitHub.

:::div{.hint}
Other Git platforms may have CLI tools that allow repositories to be forked.
:::

The typical process used to deploy an upstream CaC project serialized with octoterra is to run a step like `GitHub - Fork Repo` to fork the upstream Git repository before the `Octopus - Populate Octoterra Space` step. This ensures a new Git repository has been created for the downstream project.

A CaC-enabled project exported by octoterra exposes the CaC Git URL as a Terraform variable. The variable is based on the name of the upstream project and ends with the `_git_url` suffix, e.g. `project_frontend_webapp_git_url`. The default value of this variable is the upstream project's CaC Git repository. This Terraform variable must be defined when running the `Octopus - Populate Octoterra Space` step by adding it to the `Terraform Additional Apply Params` field, e.g. `-var=project_frontend_webapp_git_url=#{Octopus.Action[GitHub - Fork Repo].Output.NewRepo}`.

# Azure app service environments

Source: https://octopus.com/docs/deployments/azure/ase.md

This guide covers deploying apps to Azure [App Service Environments](https://docs.microsoft.com/en-au/azure/app-service/environment/intro). It does not cover how to deploy or set up an ASE itself.

From an Octopus user perspective, deploying to an ASE is usually no different to deploying to any other app service in Azure. While the app services within an ASE are isolated, the management interface for managing them and deploying to them is usually the same as for any other app service.

## Internal app service environments

Internal ASEs are where the wheels usually come off for deploying from Octopus. An [internal ASE](https://docs.microsoft.com/en-us/azure/app-service/environment/create-ilb-ase) is one that has an Internal Load Balancer (ILB). By definition it cannot be accessed from the Internet because it's designed to host internal apps (i.e. intranet apps). Given that you can't access the app, or its management endpoint (Kudu), from the Internet, you can't deploy to it from Octopus without some network setup.
To help explain what's required, let's look at what happens when you deploy to a web app in Azure from Octopus.

1. Octopus Server creates a deployment task.
2. The task scheduler picks up the task and hands the work over to Calamari.
3. Calamari picks up all the information about the deployment and connects to Azure.
4. Calamari locates the resource group that's being deployed to (there's a reason we do it this way, see [below](#resource_groups)).
5. Calamari locates the web app within the resource group and requests its publish profile from Azure.
6. Calamari then hands over to the [DeploymentManager](https://msdn.microsoft.com/en-us/library/microsoft.web.deployment.deploymentmanager(v=vs.90).aspx).[SyncTo](https://msdn.microsoft.com/en-us/library/dd543271(v=vs.90).aspx) method to actually do the deployment.

Contained in the publish profile is the URI of the deployment endpoint (Kudu) for the web app. This is the critical piece here. For an external ASE that URI will be publicly accessible (e.g. https://your-app.scm.aseName.p.azurewebsites.net). For an internal ASE the URI will not be publicly accessible; it will be something like `https://your-app.scm.your-domain`.

This is where deployments fail: Calamari can see all the other URLs required, but when it gets to step 6, Octopus won't be able to resolve the address for the URI.

To fix that, two things need to happen. First, the network the Octopus Server is on has to be connected to the ASE's VNet, e.g. using ExpressRoute or a VPN. Second, the Octopus Server needs to be able to resolve `your-app.scm.your-domain` to the Internal Load Balancer IP address of your Azure ILB (found in the **IP addresses** for the ASE in the Azure portal), e.g. through DNS configuration. Exactly how to do those two things depends on your organization and what infrastructure you might already have in place, and is beyond the scope of this guide.
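As a quick way to test the name-resolution requirement before changing your internal DNS, you can map the Kudu hostname to the ILB IP in the hosts file on the Octopus Server. The IP address and hostname below are placeholders; substitute your own values:

```
# C:\Windows\System32\drivers\etc\hosts
10.0.0.11    your-app.scm.your-domain
```

If a deployment succeeds with this entry in place, the remaining work is making the equivalent record available through your DNS infrastructure.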
## Resource groups

We mentioned above that Calamari locates the resource group first and then locates the web app. If you're familiar with the Azure API then you may know that you can also list the web apps for a subscription directly. So why don't we just do that and save a step?

It's because of a side effect of the isolation provided by an ASE. Usually, when you create a web app in Azure, its name must be unique. This isn't the case when the web app is in an ASE; it only has to be unique within the ASE. If you call for the list of web apps, you can get back two or more with the same name that you can't distinguish between. By having the resource group, we can call a different API that lets us list just the web apps in that resource group, which we know must be uniquely named.

This is the reason why you see a resource group and a web app name when using binding on the Octopus Web App step: we need the resource group to differentiate web apps with the same name. When you aren't using binding, the drop down list is doing this too behind the scenes. This is also why using a [principle of least privilege on a Service Principal](/docs/infrastructure/accounts/azure/#note_on_least_privilege) is a little complicated.

## Learn more

- Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites).

# Tentacle VM extension configuration structure

Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/configuration-structure.md

:::div{.problem}
The VM extension is deprecated and no longer supported. All customers using the VM extension should migrate to [DSC](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc).
:::

These files are required to install the extension [via the Azure CLI](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-the-azure-cli/) or [via PowerShell](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-powershell).

## Public settings

The schema for the public configuration file is:

```json
{
  "OctopusServerUrl": "https://octopus.example.com",
  "Environments": [ "Test", "Production" ],
  "Roles": [ "web-server", "app-server" ],
  "Tenants": [ "Acme Corp" ],
  "TenantTags": [ "Tenant type/External", "Upgrade ring/Early adopter" ],
  "MachinePolicy": "Transient machines",
  "CommunicationMode": "Listen",
  "Port": 10933,
  "PublicHostNameConfiguration": "PublicIP|FQDN|ComputerName|Custom",
  "CustomPublicHostName": "web01.example.com"
}
```

* `OctopusServerUrl`: (string) The URL of the Octopus Server portal.
* `Environments`: (array of string) The environments to which the Tentacle should be added.
* `Roles`: (array of string) The [target tags](/docs/infrastructure/deployment-targets/target-tags) to assign to the Tentacle.
* `CommunicationMode`: (string) Whether the Tentacle should wait for connections from the server (`Listen`) or should poll the server (`Poll`).
* `Tenants`: (array of string) The tenants to assign to the Tentacle.
* `TenantTags`: (array of string) The tenant tags in [canonical name format](/docs/tenants/tenant-tags/#referencing-tenant-tags) to assign to the Tentacle.
* `MachinePolicy`: (string) The name of a machine policy to apply to the Tentacle.
* `Port`: The port on which to listen for connections from the server (in `Listen` mode), or the port on which to connect to the Octopus Server (`Poll` mode).
* `PublicHostNameConfiguration`: If in listening mode, how the server should contact the Tentacle. Can be one of the following:
  * `PublicIP` - looks up the public IP address using [api.ipify.org](https://api.ipify.org).
  * `FQDN` - concatenates the local hostname with the (Active Directory) domain name. Useful for domain-joined computers.
  * `ComputerName` - uses the local hostname.
  * `Custom` - allows you to specify a custom value, using the `CustomPublicHostName` property.
* `CustomPublicHostName`: If in listening mode, and `PublicHostNameConfiguration` is set to `Custom`, the address that the server should use for this Tentacle.

:::div{.hint}
In `Listen` mode, the extension will automatically add a Windows Firewall rule to allow inbound traffic, but you will still need to ensure that [endpoints](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/classic/setup-endpoints) / [NSG rules](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-nsg) are added to allow network traffic from the Octopus Server to the Tentacle. The Tentacle will also need to be able to reach the Octopus Server portal to register itself. Once registered, this is no longer required.
:::

## Private settings

The schema for the private configuration file is:

```json
{
  "ApiKey": "API-YOUR-KEY"
}
```

* `ApiKey`: (string) The API key to use to connect to the Octopus Server.

The private configuration will be encrypted by Azure, and is only decryptable on the Azure VM using a special certificate installed by the Azure VM Agent.

# Automating Octopus installation

Source: https://octopus.com/docs/installation/automating-installation.md

Octopus comes in an MSI that can be deployed via group policy or other means.

## Downloads

You can use the permanent link below to download the Octopus Server.

### Download the Octopus Server

The latest version of the Octopus Deploy Server can be downloaded from the [downloads page](https://octopus.com/downloads). We recommend using the [latest version](https://octopus.com/downloads) of the Octopus Deploy Server.
If you need an older version of the Octopus Deploy Server, visit the [previous downloads page](https://octopus.com/downloads/previous) to locate the specific version you need.

Automating the installation of Octopus Server is a three-step process.

### 1. Install the MSI on a temporary machine interactively

In this step we install the MSI on a machine interactively so that we can complete the wizard to add a new instance. Follow all the steps in the [installation process](/docs/installation/#install-octopus), but in the final step copy the generated script into a new file. **Do not click Install**.

Here is an example of what the script might look like:

```bash
"[INSTALLLOCATION]\Octopus.Server.exe" create-instance --instance "" --config "" --serverNodeName ""
"[INSTALLLOCATION]\Octopus.Server.exe" database --instance "" --connectionString "" --create
"[INSTALLLOCATION]\Octopus.Server.exe" configure --instance "" --upgradeCheck "True" --upgradeCheckWithStatistics "True" --usernamePasswordIsEnabled "True" --webForceSSL "False" --webListenPrefixes "" --commsListenPort "10943" --grpcListenPort "8443"
"[INSTALLLOCATION]\Octopus.Server.exe" service --instance "" --stop
"[INSTALLLOCATION]\Octopus.Server.exe" admin --instance "" --username "" --email "" --password ""
"[INSTALLLOCATION]\Octopus.Server.exe" license --instance "" --licenseBase64 ""
"[INSTALLLOCATION]\Octopus.Server.exe" service --instance "" --install --reconfigure --start --dependOn "MSSQLSERVER"
```

### 2. Install the MSI silently

To install the MSI silently:

```bash
msiexec /i Octopus.<version>.msi /quiet RUNMANAGERONEXIT=no
```

By default, the Octopus files are installed under **%programfiles%**. To change the installation directory, you can specify:

```bash
msiexec /i Octopus.<version>.msi /quiet RUNMANAGERONEXIT=no INSTALLLOCATION="<install path>"
```

### 3. Configuration

The MSI installer simply extracts files and adds some shortcuts and event log sources.
The actual configuration of Octopus Server is done later, via the script you saved above. To run the script, start an admin shell prompt and execute it; this should apply all the settings to the new instance.

## Desired State Configuration

Octopus can also be installed via Desired State Configuration (DSC), using the [OctopusDSC module](https://www.powershellgallery.com/packages/OctopusDSC). The following PowerShell script will install an Octopus Server listening on port `80`. Make sure the OctopusDSC module is on your `$env:PSModulePath`:

```powershell
Configuration SampleConfig
{
    Import-DscResource -Module OctopusDSC

    Node "localhost"
    {
        cOctopusServer OctopusServer
        {
            Ensure = "Present"
            State = "Started"

            # Server instance name. Leave it as 'OctopusServer' unless you have more than one instance
            Name = "OctopusServer"

            # The url that Octopus will listen on
            WebListenPrefix = "http://localhost:80"

            SqlDbConnectionString = "Server=(local)\SQLEXPRESS;Database=Octopus;Trusted_Connection=True;"

            # The admin user to create
            OctopusAdminUsername = "admin"
            OctopusAdminPassword = ""

            # optional parameters
            AllowUpgradeCheck = $true
            AllowCollectionOfAnonymousUsageStatistics = $true
            ForceSSL = $false
            ListenPort = 10943
            DownloadUrl = "https://octopus.com/downloads/latest/WindowsX64/OctopusServer"
        }
    }
}

# Execute the configuration above to create a mof file
SampleConfig

# Run the configuration
Start-DscConfiguration -Path ".\SampleConfig" -Verbose -Wait

# Test the configuration ran successfully
Test-DscConfiguration
```

### Settings and properties

To review the latest available settings and properties, refer to the [OctopusDSC Server readme.md](https://github.com/OctopusDeploy/OctopusDSC/blob/master/README-cOctopusServer.md) in the GitHub repository.
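Once the configuration has applied, a quick sanity check is to confirm the Windows service is running and the portal responds. The default service name for Octopus Server is `OctopusDeploy`; adjust the URL if you changed `WebListenPrefix`:

```powershell
Get-Service OctopusDeploy                                   # should report a Status of Running
Invoke-WebRequest http://localhost:80/api -UseBasicParsing  # the API root should return JSON
```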
## Taking DSC further

DSC can be applied in various ways, such as [Group Policy](https://sdmsoftware.com/group-policy-blog/desired-state-configuration/desired-state-configuration-and-group-policy-come-together/), a [DSC Pull Server](https://docs.microsoft.com/en-us/powershell/scripting/dsc/pull-server/pullserver), [Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-overview), or even via configuration management tools such as [Chef](https://docs.chef.io/resources/dsc_resource/) or [Puppet](https://github.com/puppetlabs/puppetlabs-dsc).

Learn more about Desired State Configuration at [Windows PowerShell Desired State Configuration](https://docs.microsoft.com/en-us/powershell/scripting/dsc/overview/overview).

# Troubleshooting the Octopus installation

Source: https://octopus.com/docs/installation/troubleshooting.md

In a few cases a bug in a third-party component causes the installer to display an "Installation directory must be on a local hard drive" error. If this occurs, run the install again from an elevated command prompt using the following command (replacing `Octopus.3.3.4-x64.msi` with the name of the installer you are using):

`msiexec /i Octopus.3.3.4-x64.msi WIXUI_DONTVALIDATEPATH="1"`

:::div{.warning}
**Deploying Applications to an Azure Website?**

If you get the following error it means you have a local copy of Web Deploy and that is being used. You will either need to upgrade your local version of Web Deploy to 3.5 or greater, or uninstall the local copy so Octopus can reference the embedded copy.
:::

## Long paths

In Server 2016 and Windows 10, Microsoft added an option to remove the character limit for file paths. As of **Octopus 2018.5.3** and **Tentacle 3.21.0**, most operations support long file names once the option is enabled in Windows, including package extraction and retention.

## Enabling

On the target machine:

1. Ensure .NET Framework 4.6.2 or later is installed.
1.
Open the Group Policy Editor (press the Windows key, type `gpedit.msc`, and press Enter).
1. Navigate to the following setting and enable it:
   - On the latest versions of Windows: **Local Computer Policy ➜ Computer Configuration ➜ Administrative Templates ➜ System ➜ Filesystem**, and set the `Enable Win32 long paths` setting to `Enabled`.
   - On Server 2016 and Windows 10 without the latest updates: **Local Computer Policy ➜ Computer Configuration ➜ Administrative Templates ➜ System ➜ Filesystem ➜ NTFS**, and set the `Enable NTFS long paths` setting to `Enabled`.

Once this option is on, PowerShell scripts automatically support long file names.

## Limitations

- C# and F# scripts do not support long filenames.
- Windows limits each component of the path to 255 characters.
- Due to how we store and transfer packages, Package IDs are limited to 100 characters, and Package ID and Version combined to 216 characters.
- The package extraction path (`\\`) must be less than 256 characters long.
- The path to the directory of any script file being run by the deployment must be less than 256 characters long.

## Learn more

- [Octopus installation](/docs/installation)

# octopus account azure list

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-azure-list.md

List Azure subscription accounts in Octopus Deploy

```text
Usage:
  octopus account azure list [flags]

Aliases:
  list, ls

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus account azure list
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Configuring Okta

Source: https://octopus.com/docs/security/authentication/oidc-authentication/configuring-okta.md

## Configure Okta

[How to configure Okta](/docs/security/authentication/okta-authentication#configure-okta).

## Configure Octopus Server

1. Navigate to **Configuration ➜ Settings ➜ OpenID Connect** and populate the following fields:
   - **Enabled** should be set to `Yes`.
   - **Role Claim Type** should be `groups`.
   - **Username Claim Type** should be `preferred_username`.
   - **Resource** should be left unset.
   - **Scopes** should be `openid profile email groups`.
   - **Display Name** can be used to customize the appearance of the button on the Octopus Deploy login screen. Use a name that your users will recognize for this identity provider.
   - **Issuer** should be a URL like `https://your-okta-portal.okta.com/oauth2/default`. You can also find it in the [OpenID Connect metadata](/docs/security/authentication/okta-authentication#check-openid-connect-metadata-is-working).
   - **Client ID** and **Client secret** should be the values you noted when creating the application. You can also find them in the Okta portal page for your application.

     :::div{.hint}
     Note that the value of **Client Secret** cannot be retrieved once set - it can only be changed or deleted.
     :::
   - **Allow Auto User Creation** determines if Octopus Deploy should automatically create user accounts, or only allow authentication for users that already exist in Octopus Deploy.
2. Click **Save** to apply the changes.
3. If you sign out of Octopus Deploy, you should now see a new button on the login screen to authenticate with the OIDC provider.
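You can sanity-check the **Issuer** value before saving: per OpenID Connect discovery, the provider publishes its metadata at the issuer URL plus `/.well-known/openid-configuration`. A minimal sketch (the hostname is a placeholder; substitute your own Okta domain):

```shell
# Build the OIDC discovery URL from the issuer (placeholder hostname).
ISSUER="https://your-okta-portal.okta.com/oauth2/default"
echo "${ISSUER%/}/.well-known/openid-configuration"
```

Opening that URL in a browser should return a JSON document whose `issuer` field exactly matches the value you enter in Octopus.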
# Troubleshooting AWS transport level errors

Source: https://octopus.com/docs/support/troubleshooting-aws-transport-error.md

:::div{.warning}
**Information subject to change**

The information on this page is to be considered incomplete and relies heavily on external links that are subject to change. We will endeavor as much as possible to keep these links and issues up to date.
:::

## Traffic on non-standard ports

When Octopus is hosted on an AWS instance, it can appear that some requests are coming in on non-standard ports even if custom bindings have not been set. You may see reports like the one below appearing on your WAF:

```text
Unhandled error on request: http://my.octopus.com:8080/api/progression/Projects-1 123456789fe145449dc66ef65f1386cd by ...
```

This issue has previously been reported as resolved by disabling [TCP offloading](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/pvdrivers-troubleshooting.html#citrix-tcp-offloading).

# Creating a test instance

Source: https://octopus.com/docs/administration/upgrading/guide/creating-test-instance.md

There is always a risk an in-place upgrade will fail. The risk increases with the number of versions released between upgrades. We do our best to test many different configurations to put downward pressure on that risk. However, we can't cover every hyper-specific scenario. Also, a new feature or a breaking change might have been introduced. Creating a test instance helps reduce the risk for you and your company.

## Overview

There are two approaches to creating a test instance:

1. A subset of projects representing the main instance.
1. A clone of the main instance.

## Test instance with a subset of projects

Setting up a test instance with a subset of projects has several advantages over a full clone.

- It is much easier to set up and configure than a clone. It is also easy to automate setting up and tearing down a test instance on demand.
- There is tooling in place to export specific projects.
- A clone does a full clone of everything, including deployment targets and triggers, increasing the risk of having the sandbox instance connect to production when the instance initially starts up. You will need to disable all the targets to prevent this from happening. - A clone might have hidden configuration options, such as server folders, that you have to change. The disadvantage of a subset of projects over a full clone is that there could be significant drift between projects, and you might miss something. The process to create an instance with a subset of projects is: 1. Download the same version of Octopus Deploy as your main instance. 1. Install Octopus Deploy on a new VM. 1. Export a subset of projects from the main instance. 1. Import that subset of projects to the test instance. 1. Download the latest version of Octopus Deploy. 1. Upgrade the test instance to the latest version of Octopus Deploy. 1. Test and verify the test instance. ### Downloading the same version of Octopus Deploy Migrating data from Octopus to a test instance requires both the main instance and test instance to be on the same version. You can find the version you are running by clicking on your name in the top right corner of your Octopus Deploy instance. :::figure ![](/docs/img/shared-content/upgrade/images/find-current-version.png) ::: You can find all the previous versions on the [previous versions download page](https://octopus.com/downloads/previous). ### Installing Octopus Deploy Run the MSI you downloaded to install Octopus Deploy. After you install Octopus Deploy, the Octopus Manager will automatically launch. Follow the wizard. A few notes: 1. You can reuse your same license key on up to three unique instances of Octopus Deploy. We determine uniqueness based on the database it connects to. If you are going to exceed the three instance limit, please [contact us](https://octopus.com/support) to discuss your options. 1. Create a new database for this test instance. 
Restoring a backup will cause Octopus to treat this as a cloned instance, with the same targets, certificates, and keys. 1. Run the test instance database on the same version of SQL Server as the main instance. Only deviate when you plan on upgrading SQL Server. ### Export/import subset of projects using export/import projects feature The Export/Import Projects feature added in **Octopus Deploy 2021.1** can be used to export/import projects to a test instance. Please see the up-to-date [documentation](/docs/projects/export-import) to see what is included. ### Export subset of projects using the data migration tool All versions of Octopus Deploy since version 3.x have included a [data migration tool](/docs/administration/data/data-migration/). The Octopus Manager only allows for the migration of all the data. We only need a subset of data. Use the [partial export](/docs/octopus-rest-api/octopus.migrator.exe-command-line/partial-export) command-line option to export a subset of projects. Run this command for each project you wish to export on the main, or production, instance. Create a new folder per project: ``` Octopus.Migrator.exe partial-export --instance=OctopusServer --project=AcmeWebStore --password=5uper5ecret --directory=C:\Temp\AcmeWebStore --ignore-history --ignore-deployments --ignore-machines ``` :::div{.hint} This command ignores all deployment targets to prevent your test instance and your main instance from deploying to the same targets. ::: ### Import subset of projects using the data migration tool The data migration tool also includes [import functionality](/docs/octopus-rest-api/octopus.migrator.exe-command-line/import). First, copy all the project folders from the main instance to the test instance. 
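One way to stage that copy in bulk - a minimal sketch using Python's standard library; the paths and helper name are illustrative and not part of any Octopus tooling:

```python
# Sketch: copy each per-project export folder onto the test instance's
# staging directory. Paths and helper name are illustrative only.
import shutil
from pathlib import Path

def copy_exports(src_root: str, dst_root: str) -> list:
    """Copy every project export folder under src_root into dst_root."""
    copied = []
    for project_dir in sorted(Path(src_root).iterdir()):
        if project_dir.is_dir():
            shutil.copytree(project_dir, Path(dst_root) / project_dir.name,
                            dirs_exist_ok=True)
            copied.append(project_dir.name)
    return copied

# e.g. copy_exports(r"C:\Temp", r"D:\MigrationStaging") on the test server
```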
Then run this command for each project: ``` Octopus.Migrator.exe import --instance=OctopusServer --password=5uper5ecret --directory=C:\Temp\AcmeWebStore ``` ### Downloading the latest version of Octopus Deploy The [downloads page](https://octopus.com/downloads) will always have the latest version of Octopus Deploy. If company policy dictates you install an older version, for example, the latest version is 2020.4.11, but you can only download 2020.3.x, then visit the [previous downloads page](https://octopus.com/downloads/previous). ### Install the latest version of Octopus Deploy Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`. ### Testing the upgraded instance It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should: - Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments. - Check previous deployments and ensure all the logs and artifacts appear. - Ensure all the project and tenant images appear. - Run any custom API scripts to ensure they still work. - Verify a handful of users can log in, and that their permissions are similar to before. - Build server integration; ensure all existing build servers can push to the upgraded server. We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration. If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation. ## Test instance is a clone Setting up a test instance as a clone of the main instance has a few advantages over a subset of projects. 
- Upgrades are much closer to a one-to-one comparison. - You can do a full test of all integrations with Octopus Deploy. Configuring a clone typically takes much more time and compute resources. There are backup locations to consider, targets to disable, and more. Creating a clone of an existing instance involves: 1. Finding and backing up your Master Key. 1. Enabling maintenance mode on the main instance. 1. Backing up the database of the main instance. 1. Disabling maintenance mode on the main instance. 1. Restoring the backup of the main instance's database as a new database on the desired SQL Server. 1. Downloading the same version of Octopus Deploy as your main instance. 1. Installing that version on a new server and configuring it to point to the cloned database. 1. Copying all the files from the backed-up folders on the source instance. 1. [**Updating the Installation ID**](/docs/administration/upgrading/guide/creating-test-instance#update-the-instance-id) on the cloned instance (required). 1. *Optionally*, disabling targets on the cloned instance. 1. Downloading the latest version of Octopus Deploy. 1. Upgrading the test instance to the latest version of Octopus Deploy. 1. Testing and verifying the test instance. ### Finding and backing up your Master Key When connecting an existing Octopus database to a "new" Octopus instance, either due to migration or for testing, you will need the Master Key to gain access to the database during the Octopus instance setup process. To obtain your existing Master Key from your source Octopus instance, open Octopus Manager and select "View master key" as shown below: :::figure ![](/docs/img/upgrade/images/view-master-key.png) ::: Alternatively, you may also use the `show-master-key` command via the `Octopus.Server.exe` command-line tool. [You can find more information on this here](https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/show-master-key). 
For Octopus Server instances hosted in Linux containers, you may use the `container exec` command [per the instructions here](https://octopus.com/docs/installation/octopus-server-linux-container#upgrading). ### Maintenance mode Maintenance mode prevents non-Octopus Administrators from doing deployments or making changes. To enable maintenance mode go to **Configuration ➜ Maintenance** and click the button `Enable Maintenance Mode`. To disable maintenance mode, go back to the same page and click on `Disable Maintenance Mode`. ### Backup the SQL Server database Always back up the database before upgrading Octopus Deploy. The most straightforward backup possible is a full database backup. Execute the below T-SQL command to save a backup to a NAS or file share. ``` BACKUP DATABASE [OctopusDeploy] TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak' WITH FORMAT; ``` The `BACKUP DATABASE` T-SQL command has dozens of various options. Please refer to [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server?view=sql-server-ver15) or consult a DBA as to which options you should use. ### Restore backup of database Use SQL Server Management Studio's (SSMS) built-in restore backup functionality. SSMS provides a wizard to make this process as pain-free as possible. Be sure to consult a DBA or read up on [Microsoft's Documentation](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-database-to-a-new-location-sql-server?view=sql-server-ver15). ### Downloading the same version of Octopus Deploy Migrating data from Octopus to a test instance requires both the main instance and test instance to be on the same version. You can find the version you are running by clicking on your name in the top right corner of your Octopus Deploy instance. 
:::figure ![](/docs/img/shared-content/upgrade/images/find-current-version.png) ::: You can find all the previous versions on the [previous versions download page](https://octopus.com/downloads/previous). ### Installing Octopus Deploy Run the MSI you downloaded to install Octopus Deploy. Once the MSI is finished, the **Octopus Manager** will automatically launch. Follow the wizard, and on the section where you configure the database, select the pre-existing database. :::figure ![](/docs/img/shared-content/upgrade/images/select-existing-database.png) ::: Selecting an existing database will prompt you to enter the Master Key. :::figure ![](/docs/img/shared-content/upgrade/images/enter-master-key.png) ::: Enter the Master Key you backed up earlier, and the manager will verify the connection works. Finish the wizard, keeping an eye on each setting to ensure it matches your main instance. For example, if your main instance uses Active Directory, your cloned instance should also be configured to use Active Directory. After the wizard is finished and the instance is configured, log in to the cloned instance to ensure your credentials still work. ### Copy all the files from the main instance After the instance has been created, copy all the contents from the following folders. - _Artifacts_, the default is `C:\Octopus\Artifacts` - _Packages_, the default is `C:\Octopus\Packages` - _Tasklogs_, the default is `C:\Octopus\Tasklogs` - _EventExports_, the default is `C:\Octopus\EventExports` Failure to copy over files will result in: - Empty deployment screens - Missing packages on the internal package feed - Missing project or tenant images - Missing archived events - And more ### Update the Instance ID {#update-the-instance-id} :::div{.warning} **You must update the Installation ID after cloning.** Failing to do so means your cloned instance will report telemetry under the same ID as your original instance. 
This corrupts our usage data and prevents us from accurately understanding how many installations exist. Please do not skip this step. ::: Cloning an instance includes the unique Installation ID of your original instance. A few integrations use this ID to identify the instance, and it is also used when sending telemetry reports. You can run this SQL Script on your cloned instance database to generate a new unique installation ID. ```sql DECLARE @config NVARCHAR(1000) DECLARE @oldguid NVARCHAR(255) DECLARE @newguid NVARCHAR(255) DECLARE @dryRun BIT = 1 -- set this to 0 to update the Installation Id SET @newguid = LOWER(CONVERT(NVARCHAR(255), NEWID())) SELECT @config = [JSON] FROM dbo.Configuration WHERE Id = 'upgradeavailability' SET @oldguid = JSON_VALUE(@config, '$.InstallationId') SET @config = JSON_MODIFY(@config, '$.InstallationId', @newguid) PRINT 'The old Installation Id is ' + @oldguid + ' - Save this value' PRINT 'The new Installation Id will be ' + @newguid IF @dryRun = 1 PRINT 'This is a dry run, no update is occurring. Set @dryrun to 0 to update the Installation Id.' ELSE PRINT 'The Installation Id is being updated. Restart your Octopus Server service for this change to take effect.' UPDATE dbo.Configuration SET [JSON] = @config WHERE Id = 'upgradeavailability' AND @dryRun = 0 ``` :::div{.hint} The script is set to do a dry run of what will change. Change `@dryRun` to 0 to make the change on your instance. ::: ### Disabling all Targets/Workers/Triggers/Subscriptions - optional Cloning an instance includes cloning all certificates. Assuming you are not using polling Tentacles, all the deployments will "just work." That is by design, so deployments keep working if the VM hosting Octopus Deploy is lost and you have to restore Octopus Deploy from a backup. This behavior has a downside, as you might have triggers and other items configured. These items could potentially perform deployments. You can run this SQL Script on your cloned instance to disable everything. 
```sql Use [OctopusDeploy] go DELETE FROM OctopusServerNode IF EXISTS (SELECT null FROM sys.tables WHERE name = 'OctopusServerNodeStatus') DELETE FROM OctopusServerNodeStatus UPDATE Subscription SET IsDisabled = 1 UPDATE ProjectTrigger SET IsDisabled = 1 UPDATE Machine SET IsDisabled = 1 IF EXISTS (SELECT null FROM sys.tables WHERE name = 'Worker') UPDATE Worker SET IsDisabled = 1 DELETE FROM ExtensionConfiguration WHERE Id in ('authentication-octopusid', 'jira-integration') ``` :::div{.hint} Remember to replace `OctopusDeploy` with the name of your database. ::: ### Downloading the latest version of Octopus Deploy The [downloads page](https://octopus.com/downloads) will always have the latest version of Octopus Deploy. If company policy dictates you install an older version, for example, the latest version is 2020.4.11, but you can only download 2020.3.x, then visit the [previous downloads page](https://octopus.com/downloads/previous). ### Install the latest version of Octopus Deploy Installing a newer version of Octopus Deploy is as simple as running the MSI and following the wizard. The MSI will copy all the binaries to the install location. Once the MSI is complete, it will automatically launch the `Octopus Manager`. ### Testing the upgraded instance It is up to you to decide on the level of testing you wish to perform on your upgraded instance. At a bare minimum, you should: - Do test deployments on projects representative of your instance. For example, if you have IIS deployments, do some IIS deployments. If you have Java deployments, do some Java deployments. - Check previous deployments and ensure all the logs and artifacts appear. - Ensure all the project and tenant images appear. - Run any custom API scripts to ensure they still work. - Verify a handful of users can log in, and that their permissions are similar to before. - Build server integration; ensure all existing build servers can push to the upgraded server. 
We do our best to ensure backward compatibility, but it's impossible to cover every user scenario for every possible configuration. If something isn't working, please capture all relevant screenshots and logs and send them over to our [support team](https://octopus.com/support) for further investigation. # Finding drift Source: https://octopus.com/docs/best-practices/platform-engineering/listing-downstream-drift.md When upstream and downstream projects are [configured with CaC and backed by forked repositories](/docs/platform-engineering/forking-git-repos) it becomes possible to track drift. The `Octopus - Find CaC Updates` steps detect drift by: 1. Scanning the workspaces in the Terraform state created when deploying downstream projects 2. Finding any CaC-enabled projects 3. Cloning the downstream Git repo 4. Checking to see if there are changes to merge from the upstream repo into the downstream repo, and if any merges introduce conflicts Each `Octopus - Find CaC Updates` step is configured with a specific Terraform backend. For example, the `Octopus - Find CaC Updates (S3 Backend)` step is configured to read Terraform state persisted in an S3 bucket. The `Octopus - Find CaC Updates` steps are typically defined in a runbook attached to the upstream project: 1. Create a runbook called `__ Find CaC Updates` attached to the upstream project. 2. Add one of the `Octopus - Find CaC Updates` steps. 1. Run the step on a worker with a recent version of Terraform installed, or set the container image to a Docker image that includes Terraform, such as `octopuslabs/terraform-workertools`. 2. Set the `Git Username` field to the Git repository username. GitHub users with access tokens set this field to `x-access-token`. 3. Set the `Git Password` field to the Git repository password or access token. 4. Set the `Git Protocol` field to either `HTTP` or `HTTPS`. All publicly hosted Git platforms use `HTTPS`. 5. Set the `Git Hostname` field to the Git repository host name e.g. 
`github.com`, `gitlab.com`, `bitbucket.com`. 6. Set the `Git Organization` field to the Git repository owner or organization. 7. Set the `Git Template Repo` field to the Git repository hosting the upstream project. 8. Each `Octopus - Find CaC Updates` step then defines additional fields related to the specific Terraform backend. For example, the `Octopus - Find CaC Updates (S3 Backend)` step has fields for AWS credentials, region, bucket, and key. Executing the runbook will display a list of downstream projects and indicate whether each one: * Is up to date with the upstream repository * Can merge upstream changes automatically * Must resolve a merge conflict to merge upstream changes # Diagnosing Tentacle VM extension issues Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/diagnosing-issues.md :::div{.problem} The VM extension is deprecated and no longer supported. All customers using the VM extension should migrate to [DSC](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc). ::: ## Diagnosing issues If, for some reason, the machine fails to register after 20 minutes, you can access logs on the VM to determine what went wrong. 1. Use the **connect** button on the VM to set up a remote desktop connection. ![Connecting to a VM via RDP](/docs/img/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/diagnosing-issues-connect-via-rdp.png) For more information, see [How to Log on to a Virtual Machine](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/connect-logon). 2. In the remote desktop session, open Windows Explorer, and browse to `C:\WindowsAzure\Logs\Plugins\OctopusDeploy.Tentacle.OctopusDeployWindowsTentacle\` and into the versioned folder below that. 3. In this folder, you'll find a number of text files. Open these to view the output of the commands, and look for any error messages. 
![Windows Explorer - logs folder](/docs/img/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/diagnosing-issues-logs-folder.png) The `OctopusAzureVmExtension*` file is usually the best place to look. If there are no error messages or you are unable to troubleshoot the problem, please send a copy of these log files, a copy of the files from `C:\Packages\Plugins\OctopusDeploy.Tentacle.OctopusDeployWindowsTentacle` and a description of how the VM was configured to [our support team](https://octopus.com/support), and we'll be happy to help! # Octopus Server Linux Container Source: https://octopus.com/docs/installation/octopus-server-linux-container.md Running Octopus Server inside a container lets you avoid installing Octopus directly on top of your infrastructure and makes getting up and running with Octopus as simple as a one line command. Upgrading to the latest version of Octopus is just a matter of running a new container with the new image version. We are confident in the Octopus Server Linux Container's reliability and performance. [Octopus Cloud](/docs/octopus-cloud) runs the Octopus Server Linux Container in AKS clusters in Azure. But to use the Octopus Server Linux Container in Octopus Cloud, we had to make some design decisions and level up our knowledge about Docker concepts. We recommend the use of the Octopus Server Linux Container if you are okay with **all** of these conditions: - You are familiar with Docker concepts, specifically around debugging containers, volume mounting, and networking. - You are comfortable with one of the underlying hosting technologies for Docker containers; Kubernetes, ACS, ECS, AKS, EKS, or Docker itself. - You understand Octopus Deploy is a stateful, not a stateless application, requiring additional monitoring. We publish `linux/amd64` Docker images for each Octopus Server release and they are available on [DockerHub](https://hub.docker.com/r/octopusdeploy/). 
:::figure ![Introducing the Octopus Server Linux Docker image](/docs/img/installation/octopus-server-linux-container/octopus-linux-docker-image.png) ::: This page describes how to run Octopus Server in the Linux Container. **Note:** When using Linux containers on a Windows machine, please ensure you have [switched to Linux Containers](https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers). ## Getting started Although there are a few different configuration options, the following is a simple example of starting the Octopus Server Linux container: ```bash $ docker run --interactive --detach --name OctopusServer --publish 1322:8080 --env ACCEPT_EULA="Y" --env DB_CONNECTION_STRING="..." octopusdeploy/octopusdeploy ``` - We run in detached mode with `--detach` to allow the container to run in the background. - The `--interactive` argument ensures that `STDIN` is kept open, which is required since this is what the running `Octopus.Server.exe` process is waiting on to close. - Setting `--name OctopusServer` gives us an easy-to-remember name for this container. This is optional, but we recommend you provide a name that is meaningful to you, as that will make it easier to perform actions on the container later if necessary. - Using `--publish 1322:8080` maps the _container port_ `8080` to `1322` on the host so that the Octopus instance is accessible outside this server. - To set the connection string we provide an _environment variable_ `DB_CONNECTION_STRING` (this can be a local or external database). In this example, we run the image `octopusdeploy/octopusdeploy` without an explicit tag, running the `latest` version of Octopus Server that's been published to DockerHub. 
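The `DB_CONNECTION_STRING` value follows the standard SQL Server (ADO.NET) key/value format. As a sketch of assembling one - every value here is a placeholder for your own environment, and the helper is ours, not Octopus code:

```python
# Sketch: build a SQL Server connection string for DB_CONNECTION_STRING.
# The key/value format is standard ADO.NET; host, database name, and
# credentials below are all placeholders.

def connection_string(server: str, database: str, user: str, password: str) -> str:
    return (
        f"Server=tcp:{server},1433;"
        f"Database={database};"
        f"User Id={user};"
        f"Password={password};"
    )

print(connection_string("sql.internal.example.com", "OctopusDeploy",
                        "octopus", "5uper5ecret"))
```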
## Running Octopus Server in a Container This section walks through some of the different ways you can run the Octopus Server Linux Container, from `docker compose` to using a full orchestration service such as Kubernetes: - [Octopus Server Container with Docker Compose](/docs/installation/octopus-server-linux-container/docker-compose-linux) - [Octopus Server Container with systemd](/docs/installation/octopus-server-linux-container/systemd-service-definition) - [Octopus Server Container in Kubernetes](/docs/installation/octopus-server-linux-container/octopus-in-kubernetes) ## Migration You may already have Octopus Server running on Windows Server or in a Windows container you wish to run in a Linux Container. This section walks through the different options and considerations for migrating to an Octopus Server Linux Container. - [Migrate to Octopus Server Linux Container from Windows Server](/docs/installation/octopus-server-linux-container/migration/migrate-to-server-container-linux-from-windows-server) - [Migrate to Octopus Server Linux Container from Windows Container](/docs/installation/octopus-server-linux-container/migration/migrate-to-server-container-linux-from-windows-container) ## Configuration :::div{.hint} Support for authentication providers differs depending on how you host Octopus Server. Please see our [authentication provider compatibility section](/docs/security/authentication/auth-provider-compatibility) to ensure any existing authentication provider is supported when running Octopus in a Linux Container. ::: When running an Octopus Server Image, you can supply the following values to configure the running Octopus Server instance. ### Master Key If you do not specify a master key when Octopus is first run, Octopus will generate one for you, which you must pass as the `MASTER_KEY` environment variable with each subsequent run. However, it is also possible to create your own master key for Octopus to use when configuring the database. 
Master keys must be 16 bytes (128 bits), encoded in base64. You can generate a random value to use as the master key with the command: ``` openssl rand 16 | base64 ``` ### Environment variables Read the Docker [docs](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file) about setting environment variables. |  Name       |    | | ------------- | ------- | |**DB_CONNECTION_STRING**|Connection string to the database to use| |**MASTER_KEY**|The Master Key to connect to an existing database. If not supplied, and the database does not exist, it will generate a new one. The Master Key is mandatory if the database exists.| |**OCTOPUS_SERVER_BASE64_LICENSE**|Your license key for Octopus Deploy. If left empty, it will try to create a free license key for use.| |**ADMIN_USERNAME**|The admin user to create for the Octopus Server| |**ADMIN_PASSWORD**|The password for the admin user for the Octopus Server| |**ADMIN_EMAIL**|The email associated with the admin user account| |**TASK_CAP**|Sets the task cap for this node. If not specified, the default is 5.| |**DISABLE_DIND**|The Linux image will by default attempt to run Docker-in-Docker to support [execution containers for workers](/docs/projects/steps/execution-containers-for-workers). This requires the image to be launched with [privileged permissions](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities). Setting `DISABLE_DIND` to `Y` prevents Docker-in-Docker from being run when the container is booted.| ### Exposed container ports Read Docker [docs](https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose) about exposing ports. 
|  Port       |    | | ------------- | ------- | |**8080**| Port for API and HTTP portal | |**443**| SSL Port for API and HTTP portal | |**10943**|Port for Polling Tentacles to contact the server| |**8443**|Port for gRPC clients to contact the server| ### Volume mounts Read the Docker [docs](https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only) about mounting volumes. | Name     | Description | Mount source | | ------------- | ------- | ------- | |**/import**| Imports from this folder if [Octopus Migrator](/docs/octopus-rest-api/octopus.migrator.exe-command-line) metadata.json exists, then migrator `Import` takes place on startup | Host filesystem or container | |**/repository**| Package path for the built-in package repository | Shared storage | |**/artifacts**| Path where artifacts are stored | Shared storage | |**/taskLogs**| Path where task logs are stored | Shared storage | |**/eventExports**| Path where event audit logs are exported | Shared storage | |**/cache**| Path where cached files e.g., signature and delta files (used for package acquisition), are stored | Host filesystem or container | :::div{.hint} **Note:** We recommend using shared storage when mounting the volumes for files that must be shared between multiple octopus container nodes, e.g., artifacts, packages, task logs, and event exports. ::: ## Upgrading When the volumes are externally mounted to the host filesystem, upgrades between Octopus versions are much easier. We can picture the upgrade process with a container as being similar to [moving a standard Octopus Server](/docs/administration/managing-infrastructure/moving-your-octopus/move-the-database-and-server) since containers, being immutable, don't themselves get updated. Similar to moving an instance, to perform the container upgrade, you will need the Master Key you used to set up the original database. 
You can find the Master Key for an Octopus Server in a container with the container exec command: ``` > docker container exec OctopusServer /Octopus/Octopus.Server show-master-key --console --instance OctopusServer 5qJcW9E6B99teMmrOzaYNA== ``` When you have the Master Key, you can stop the running Octopus Server container instance (delete it if you plan on using the same name) and run _almost_ the same command as before, but this time, pass in the Master Key as an environment variable and reference the new Octopus Server version. When this new container starts up, it will use the same credentials, detect that the database has already been set up, and use the Master Key to access its sensitive values: ```bash $ docker run --interactive --detach --name OctopusServer --publish 1322:8080 --env DB_CONNECTION_STRING="..." --env MASTER_KEY="5qJcW9E6B99teMmrOzaYNA==" octopusdeploy/octopusdeploy ``` The standard backup and restore procedures for the [data stored on the filesystem](/docs/administration/data/backup-and-restore/#octopus-file-storage) and the connected [SQL Server](/docs/administration/data) still apply as per regular Octopus installations. ## Troubleshooting If you're having trouble with the Octopus Server Linux Container, please use our [troubleshooting guide](/docs/installation/octopus-server-linux-container/troubleshooting-octopus-server-in-a-container). 
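Both initial setup and the upgrade path above depend on the Master Key decoding to exactly 16 bytes (128 bits) of base64 data. A quick sketch for generating one, or sanity-checking a saved key, before passing it in `MASTER_KEY` (the helper names are ours - Octopus does not ship these functions):

```python
# Sketch: generate a Master Key (16 random bytes, base64-encoded) and
# verify that a saved key decodes to exactly 128 bits.
# Helper names are illustrative, not part of Octopus.
import base64
import secrets

def generate_master_key() -> str:
    return base64.b64encode(secrets.token_bytes(16)).decode("ascii")

def is_valid_master_key(key: str) -> bool:
    try:
        return len(base64.b64decode(key, validate=True)) == 16
    except ValueError:
        return False
```

This mirrors what `openssl rand 16 | base64` from the configuration section produces.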
## Learn more  - [Docker blog posts](https://octopus.com/blog/tag/docker/1)  - [Linux blog posts](https://octopus.com/blog/tag/linux/1)  - [Introducing the Octopus Server Linux Docker image](https://octopus.com/blog/introducing-linux-docker-image)  - [Octopus Deploy on Docker Hub](https://hub.docker.com/r/octopusdeploy/octopusdeploy)  - [Octopus Tentacle on Docker Hub](https://hub.docker.com/r/octopusdeploy/tentacle/) # octopus account azure-oidc Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-azure-oidc.md Manage Azure OpenID Connect accounts in Octopus Deploy ```text Usage: octopus account azure-oidc [command] Available Commands: create Create an Azure OpenID Connect account help Help about any command list List Azure OpenID Connect accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus account azure-oidc [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account azure-oidc list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Configuring Ping Identity Source: https://octopus.com/docs/security/authentication/oidc-authentication/configuring-ping.md Authentication using [Ping Identity](https://www.pingidentity.com/en.html), a cloud-based identity management service. To use Ping Identity authentication with Octopus you will need to: 1. Configure Ping to trust your Octopus Deploy instance (by setting it up as an app in Ping). 2. 
Configure your Octopus Deploy instance to trust and use Ping as an Identity Provider. ## Configure Ping Identity You need to configure Ping Identity to trust your instance of Octopus Deploy by creating an application in your Ping Identity account. :::div{.info} For more information, see the [Ping Identity documentation](https://docs.pingidentity.com/pingone/p1_cloud__platform_main_landing_page.html). ::: ### Create an application You must first have an account at [Ping Identity](https://www.pingidentity.com). Once you have an account, log in to the console. :::div{.hint} After signing up to Ping Identity you will receive your own URL to access the console. It will look something like: `https://console.pingone.com.au`. ::: 1. Select the Applications tab on the left and click the + icon near "Applications" on the right. :::figure ![Add Ping Identity application](/docs/img/security/authentication/oidc-authentication/images/ping-add-application.png) ::: 2. Enter a name for the application (e.g. Octopus Deploy), a description, optionally an icon, and select `OIDC Web App`. 3. Click "Save". The application configuration panel will appear. On this page you have access to the application's **Client ID**, **Client Secret** and the **Issuer** - you will need these later. :::figure ![Ping Identity application configuration](/docs/img/security/authentication/oidc-authentication/images/ping-application-configuration.png) ::: 4. Click "Protocol OpenID Connect". The OIDC configuration settings appear. Ensure the following and click "Save": - Response Type: `Code` - Grant Type: `Authorization Code` - PKCE Enforcement: `S256_REQUIRED` - Redirect URIs: `https://your-octopus-url/api/users/authenticatedToken/GenericOidc` - Token Endpoint Authentication Method: `Client Secret Post` 5. Click the "Resources" tab followed by the edit icon. You will see a list of scopes that are allowed by the application. Allow `email` and `profile` by checking them. Click "Save". 6. 
Toggle the switch in the top right of the panel to enable the application for use. :::figure ![Enable the Ping Identity application](/docs/img/security/authentication/oidc-authentication/images/ping-application-enabled.png) ::: ## Configure Octopus Server 1. Navigate to **Configuration ➜ Settings ➜ OpenID Connect** and populate the following fields: - **Enabled** should be set to `Yes`. - **Role Claim Type** should be left unset. - **Username Claim Type** should be set to `preferred_username`. - **Resource** should be left unset. - **Scopes** should be left as the default of `openid profile email`. - **Display Name** can be used to customize the appearance of the button on the Octopus Deploy login screen. Use a name that your users will recognize for this identity provider. - **Issuer**, **Client ID** and **Client Secret** should be the values you noted when creating the application. :::div{.hint} Note that the value of **Client Secret** cannot be retrieved once set - it can only be changed or deleted. ::: - **Allow Auto User Creation** determines if Octopus Deploy should automatically create user accounts, or only allow authentication for users that already exist in Octopus Deploy. 2. Click **Save** to apply the changes. 3. If you sign out of Octopus Deploy, you should now see a new button on the login screen to authenticate with the OIDC provider. ### Octopus user accounts are still required Octopus still requires a [user account](/docs/security/users-and-teams/) so you can assign those people to Octopus teams and subsequently grant permissions to Octopus resources. Octopus will automatically create a [user account](/docs/security/users-and-teams) based on the profile information returned in the security token, which includes an **Identifier**, **Name**, and **Email Address**. 
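For illustration, the profile information arrives as claims in the security (ID) token, whose payload is a base64url-encoded JSON segment. The snippet below decodes a made-up example payload; the standard OIDC claim names shown (`sub`, `name`, `email`) are assumptions about the mapping to **Identifier**, **Name**, and **Email Address**, not taken from the Octopus documentation.

```shell
# Decode the payload segment of an ID token to inspect its claims.
# Real tokens are three dot-separated base64url segments (header.payload.signature);
# the claims below are a made-up example, not a real credential.
claims='{"sub":"abc123","name":"Jane Doe","email":"jane.doe@example.com"}'
payload=$(printf '%s' "$claims" | base64 | tr -d '\n')  # encode for this demo
printf '%s' "$payload" | base64 -d                      # decode to view the claims
echo
```

Inspecting a real token this way can help confirm which claims your identity provider actually sends, for example when deciding on the **Username Claim Type** setting.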
:::div{.hint} **How Octopus matches external identities to user accounts** When the security token is returned from the external identity provider, Octopus looks for a user account with a matching **Identifier**. If there is no match, Octopus looks for a user account with a matching **Email Address**. If a user account is found, the External Identifier will be added to the user account for next time. If a user account is not found, Octopus will create one using the profile information in the security token. ::: :::div{.success} **Already have Octopus user accounts?** If you already have Octopus user accounts and you want to enable external authentication, simply make sure the Email Address matches in both Octopus and the external identity provider. This means your existing users will be able to sign in using an external identity provider and still belong to the same teams in Octopus. ::: ### Getting permissions If you are installing a clean instance of Octopus Deploy you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command ```powershell Octopus.Server.exe admin --username USERNAME --email EMAIL ``` The most important part of this command is the email, as usernames are not necessarily included in the claims from the external providers. When the user logs in the matching logic must be able to align their user record based on the email from the external provider or they will not be granted permissions. ## Troubleshooting If you are having difficulty configuring Octopus to authenticate with Ping Identity, check your [server logs](/docs/support/log-files) for warnings. ### Double- and triple-check your configuration Unfortunately security-related configuration is sensitive to everything. Make sure: - You don't have any typos or copy-paste errors. - Remember things are case-sensitive. - Remember to remove or add slash characters - they matter too! 
### Check OpenID Connect metadata is working You can see the OpenID Connect metadata by going to the Issuer address in your browser and adding `/.well-known/openid-configuration` to the end. In our example this would have been something like `https://auth.pingone.com.au/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/as/.well-known/openid-configuration`. ### Contact Octopus Support If you aren't able to resolve the authentication problems yourself using these troubleshooting tips, please reach out to our [support team](https://octopus.com/support) with: 1. The contents of your OpenID Connect Metadata or the link to download it (see above). 2. A screenshot of the Octopus User Accounts, including their username, email address, and name. # Troubleshooting failed or hanging tasks Source: https://octopus.com/docs/support/troubleshooting-failed-or-hanging-tasks.md Sometimes your deployments, health checks, or other tasks may unexpectedly fail, or can even appear unresponsive. This page describes some common issues and strategies to help you overcome these issues. ## Check the logs The first step to debug your failed tasks is to check the [Task Log](/docs/support/get-the-raw-output-from-a-task). This will usually contain detailed information about the failure. For deployments, this includes the step, and information about the deployment targets that the step was running on. If a deployment failed unexpectedly within a built-in step, you may have misconfigured the step. Double-check the configuration of your [step](/docs/projects/steps/) in your [deployment process](/docs/projects/deployment-process/). If your step is relying on [variables](/docs/projects/variables/), then you may have also misconfigured your variables. There are [some methods](/docs/support/debug-problems-with-octopus-variables) available that can help you debug your variables. If a task fails while executing a PowerShell script, you may be able to get more information by debugging the PowerShell script.
You can easily [debug PowerShell scripts](/docs/deployments/custom-scripts/debugging-powershell-scripts) as they are executed by Tentacle. Manually running the failed script on the same target may often be a helpful step towards getting more useful error information, and helping to isolate the problem. Remember to run the script under the same user account as the Tentacle service. This user is often the **Local System** account, but this may have been changed so that [Tentacle runs under a specific user account](/docs/infrastructure/deployment-targets/tentacle/windows/running-tentacle-under-a-specific-user-account). If none of the above steps help, then you may have encountered a bug in a built-in step, in which case you can contact support for further assistance. ## Networking errors If your task logs contain errors that indicate a networking issue, there could be a few possible causes. ### Connections between Octopus Server and Tentacles The Octopus Server communicates with Tentacles in either Listening mode or Polling mode. Both modes require different configuration. A common problem is that traffic on the appropriate ports (10933 by default for Listening Tentacles) is not allowed by your firewall. If you are encountering problems with your connections, then your Task log might show messages that indicate a connection timing out, or a connection that was rejected by the remote host. A utility called [TentaclePing](https://github.com/OctopusDeploy/TentaclePing) can be used to test and diagnose connections between the machine hosting Octopus Server and the machines hosting Tentacles. This allows you to quickly test connections in isolation, without involving the complexity of Octopus Server, Tentacles, or tasks. See [Tentacle Communication Modes](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication) for more information on configuring your Tentacles. 
#### Halibut The tool that manages connections between the Octopus Server and your Tentacles is called [Halibut](https://github.com/OctopusDeploy/Halibut/). In order to discover more detailed information about the connections, it may be useful to [increase the log level for Halibut](/docs/support/log-files/#Logfiles-Changingloglevelshalibut). The same change to increase the log level for Halibut can also be made on [Tentacle](/docs/support/log-files/#Logfiles-Changingloglevelstentacle). ### Connections to external services Steps that execute on deployment targets or workers may need to reach out to contact other external services. In these cases, a useful first step to help diagnose the problem is to attempt to manually perform the same connection, this can help to isolate the problem to a networking issue rather than a problem with Octopus Deploy. Remember these connections are usually initiated by your deployment targets or workers, and not by the Octopus Server. You may need to remotely connect to these remote machines, and then initiate a connection from those machines. This is distinct from attempting to establish the connection on the machine that hosts Octopus Server itself. ## Hanging tasks Sometimes tasks appear to be unresponsive or "hanging". In most cases, this ends up being antivirus or anti-malware software interfering with the task, and the first step in diagnosing the problem is to eliminate this source of interference, [see below](#anti-virus-software). If you can completely rule out antivirus software as a source of interference, then the problem may lie in your [custom scripts](/docs/deployments/custom-scripts). The next step to diagnosing these problems is to examine your logs and determine the exact location that the task became unresponsive. If this occurs within the logs output by a custom script, then the bug likely originates from your script. 
If you are still unable to determine the cause of your hanging tasks, please contact support for further assistance. ### Automatic failure of hanging tasks In some instances, Octopus will automatically trigger the failure of a task that has become unresponsive. When this occurs, the following will happen: - Your currently executing steps will be marked as failed with an error message saying that the "Operation has been unresponsive for {duration}" - Subsequent steps configured to always run will run - The overall task will be marked as failed This is generally indicative of an internal error in Octopus. In Octopus Cloud we actively monitor for these issues, but please reach out to support for further assistance, especially if the problem persists. ### Antivirus software {#anti-virus-software} If the task appears to hang after a log message output by the Octopus Server or Tentacle, then in most cases the cause is antivirus or anti-malware software interfering with the task. The first step is to determine if your antivirus software is actually affecting your Tasks, and this can easily be done by removing your antivirus protection and confirming whether the tasks continue to be unresponsive. #### "Bootstrapper did not return the bootstrapper service message" error If you see the error `Bootstrapper did not return the bootstrapper service message` in your task log, this typically indicates that antivirus or security software is interfering with the deployment. [Calamari](/docs/octopus-rest-api/calamari) is the lightweight bootstrapper that Tentacle invokes for each deployment step. It's installed and updated in the `Tools` folder and runs steps from the `Work` folder within your Tentacle home directory. When security software blocks or delays Calamari's execution, Tentacle can't receive the expected response, causing this error. To resolve this, configure your antivirus or endpoint protection software to exclude the Tentacle `Tools` and `Work` directories listed below. 
For detailed guidance, see [configuring malware protection exclusions](/docs/security/hardening-octopus#configure-malware-protection). If this test shows that antivirus is interfering with your tasks, you may need to configure your antivirus software with the appropriate exclusions to ensure that it does not lock any files owned by Octopus, or affect any running processes initiated by Octopus. Consult your antivirus provider's documentation for more information. Some examples of directories (and their subdirectories) you could try adding to an allow-list are: - `\Tools` - This is where the Calamari packages and other tools are installed so Tentacle can execute deployments on your behalf. - `\Work` - This is the temporary working directory used when Tentacle and Calamari execute deployments on your behalf. If you're still seeing issues you could also try including these additional directories (and their subdirectories): - `\Files` - This is the package cache used to store the most recent packages in case they need to be used again. - `\Logs` - This is where the Tentacle log files are stored. :::div{.hint} We recommend including subdirectories in any allow list for the directories listed above as processes initiated by Octopus may also create new folders within them. ::: ## Steps are slow to start If you notice that your PowerShell script or built-in steps take a while to begin execution, and the time is consistent across your steps, then you may have something in your Tentacle user's PowerShell profile which is causing PowerShell to take a long time to initialize. Add the `Octopus.Action.PowerShell.ExecuteWithoutProfile` variable to your deployment to help diagnose this problem. See [System Variables](/docs/projects/variables/system-variables/#user-modifiable-settings) for more information. 
# Merging repos Source: https://octopus.com/docs/best-practices/platform-engineering/merging-downstream.md When upstream and downstream projects are [configured with CaC and backed by forked repositories](/docs/platform-engineering/forking-git-repos), it becomes possible to merge changes from upstream to downstream repositories. The `Octopus - Merge CaC Updates` steps merge changes by: 1. Scanning the workspaces in the Terraform state created when deploying downstream projects 2. Finding any CaC-enabled projects 3. Cloning the downstream Git repository 4. Adding the upstream repo as a remote repository 5. Merging changes from the upstream repo to the downstream repository Each `Octopus - Merge CaC Updates` step is configured with a specific Terraform backend. For example, the `Octopus - Merge CaC Updates (S3 Backend)` step is configured to read Terraform state persisted in an S3 bucket. The `Octopus - Merge CaC Updates` steps are typically defined in a runbook attached to the upstream project: 1. Create a runbook called `__ Merge CaC Updates` attached to the upstream project. 2. Add one of the `Octopus - Merge CaC Updates` steps. 1. Run the step on a worker with a recent version of Terraform installed or set the container image to a Docker image with Terraform installed like `octopuslabs/terraform-workertools`. 2. Set the `Octopus Spaces` field to a newline-separated list of downstream space names containing projects to update. Leave the field blank to process all downstream spaces. The default value of `#{Octopus.Deployment.Tenant.Name}` assumes the step is run against a tenant and the tenant name matches the space name. 3. Set the `Octopus Projects` field to a newline-separated list of downstream project names to process. Leave the field blank to process all downstream projects. 4. Set the `Git Username` field to the Git repository username. GitHub users with access tokens set this field to `x-access-token`. 5.
Set the `Git Password` field to the Git repository password or access token. 6. Set the `Git Protocol` field to either `HTTP` or `HTTPS`. All publicly hosted Git platforms use `HTTPS`. 7. Set the `Git Hostname` field to the Git repository host name, e.g. `github.com`, `gitlab.com`, `bitbucket.org`. 8. Set the `Git Organization` field to the Git repository owner or organization. 9. Set the `Git Template Repo` field to the Git repository hosting the upstream project. 10. Each `Octopus - Merge CaC Updates` step then defines additional fields related to the specific Terraform backend. For example, the `Octopus - Merge CaC Updates (S3 Backend)` step has fields for AWS credentials, region, bucket, and key. Executing the runbook will merge upstream changes into downstream repositories or print instructions on manually resolving merge conflicts in the verbose logs. # octopus account azure-oidc create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-azure-oidc-create.md Create an Azure OpenID Connect account in Octopus Deploy ```text Usage: octopus account azure-oidc create [flags] Aliases: create, new Flags: -d, --description string A summary explaining the use of the account to other users. -T, --accounttest-subject-keys stringArray The subject keys used for an account test --ad-endpoint-base-uri string Set this only if you need to override the default Active Directory Endpoint. --application-id string Your Azure Active Directory Application ID. --audience string The audience claim for the federated credentials. Defaults to api://AzureADTokenExchange --azure-environment string Set only if you are using an isolated Azure Environment. Configure isolated Azure Environment.
Valid options are AzureCloud, AzureChinaCloud, AzureGermanCloud or AzureUSGovernment -D, --description-file file Read the description from file -e, --environment stringArray The environments that are allowed to use this account -E, --execution-subject-keys stringArray The subject keys used for a deployment or runbook -H, --health-subject-keys stringArray The subject keys used for a health check -n, --name string A short, memorable, unique name for this account. --resource-management-base-uri string Set this only if you need to override the default Resource Management Endpoint. --subscription-id string Your Azure subscription ID. --tenant-id string Your Azure Active Directory Tenant ID. Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account azure-oidc create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Troubleshooting Access Denied Starting Http Listener Source: https://octopus.com/docs/support/troubleshooting-access-denied-starting-http-listener.md Octopus requires certain permissions to launch the HTTP Listener - the web server that serves up the Octopus Portal. When the user that launches Octopus does not have these permissions, you will receive an error: ``` An Access Denied error was received trying to start the HttpListener. ``` ## Linux On Linux (and other *nix variants), elevated privileges are required to listen on ports lower than 1024.
To resolve this issue, you have several options: 1. Reconfigure your Octopus Server to use a higher-numbered port. Ports 1024 and above are not considered privileged, so they can be used by userland processes. You can combine this with a reverse proxy (such as Nginx, HAProxy or even `iptables`) to expose your desired port. 1. This [superuser.com article](https://superuser.com/questions/710253/allow-non-root-process-to-bind-to-port-80-and-443) has many suggestions, including: 1. Use `CAP_NET_BIND_SERVICE` to grant low-numbered port access to a process 1. Use `authbind` to grant one-time access to allow access to a specific user 1. Finally, though not recommended, you can launch Octopus Server as the `root` user (for example, using `sudo`). ## Windows On Windows, users who are not part of the local Administrators group cannot listen on any port, unless a [URL reservation](https://docs.microsoft.com/en-us/windows-server/networking/technologies/netsh/netsh-http#add-urlacl) is made. This can be done via the following command:

```
netsh http add urlacl url=<url> user=<user>
```

For example:

```
netsh http add urlacl url=http://localhost:80/ user=DOMAIN\user
```

Running `netsh` requires administrative rights. While not recommended, you can also run your Octopus Server as a user who is part of the local Administrators group. # Backup and restore Source: https://octopus.com/docs/administration/data/backup-and-restore.md A successful disaster recovery plan for Octopus Deploy requires the ability to restore both: 1. The Octopus [SQL Server Database](/docs/administration/data). 2. The Octopus [data stored on the file system](/docs/administration/managing-infrastructure/server-configuration-and-file-storage). **Runbooks** [Octopus runbooks](/docs/runbooks) can help you automate your disaster recovery process. :::div{.problem} **Without your Master Key, backups are useless** Sensitive information is encrypted using AES with the Master Key as the encryption key.
Without this Master Key you will lose your sensitive variables, passwords, and other encrypted data. Make sure you've taken a copy of the key! [Learn more about backing up the Master Key](/docs/security/data-encryption). Octopus Server 2024.4 and newer use AES-256 by default but support AES-128 for compatibility. Previous versions use AES-128. ::: ## Octopus SQL Database Most of the data and settings managed by Octopus, such as projects, environments, and deployments, are stored in a [SQL Server Database](/docs/administration/data). You are responsible for maintaining backups of the SQL Server Database. Refer to [SQL Server documentation](https://msdn.microsoft.com/en-AU/library/ms187510.aspx) for more information on backing up SQL Server. ### Which SQL Database recovery model should I choose? You should configure a SQL Database maintenance plan using a [recovery model](https://msdn.microsoft.com/en-us/library/ms189275.aspx) that suits your needs: 1. Use the `SIMPLE` recovery model if you're happy with daily/weekly restore points. 2. Use the `FULL` recovery model if you want point-in-time recovery. Learn more about [restoring and recovering SQL Server Databases](https://msdn.microsoft.com/en-us/library/ms191253.aspx). ## Octopus file storage In addition to the SQL Server Database, some Octopus data is stored on the file system. This includes: - Task logs that are generated whenever the server runs a job - Artifacts that have been collected during a deployment - Packages stored in the [Octopus built-in repository](/docs/packaging-applications/package-repositories) These files are stored in the Octopus home directory you configured when Octopus Server was installed (`C:\Octopus` by default). It is a good idea to **do regular backups of your Octopus home directory**. Learn about [Octopus file storage](/docs/administration/managing-infrastructure/server-configuration-and-file-storage). 
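As a minimal sketch of what a home-directory backup can look like (the subfolder names are illustrative stand-ins, and the demo archives a throwaway directory rather than a real `C:\Octopus` folder):

```shell
# Archive an Octopus home directory with tar. A throwaway directory stands in
# for the real home folder here, so the sketch is safe to run anywhere.
OCTOPUS_HOME=$(mktemp -d)  # stand-in for your Octopus home (C:\Octopus by default)
mkdir -p "$OCTOPUS_HOME/TaskLogs" "$OCTOPUS_HOME/Artifacts" "$OCTOPUS_HOME/Packages"
echo "example task log" > "$OCTOPUS_HOME/TaskLogs/servertasks-1.log"

backup="$(mktemp -d)/octopus-home-$(date +%Y-%m-%d).tar.gz"
tar -czf "$backup" -C "$OCTOPUS_HOME" .  # archive everything under the home directory
tar -tzf "$backup"                       # list the archive contents to verify
```

A file-system backup is only half of a restore point: pair it with a SQL Server database backup taken around the same time, and keep a copy of your Master Key.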
## Encrypted data {#encrypted-data} Certain sensitive information in the [Octopus database is encrypted](/docs/security/data-encryption/). This information is encrypted using your Octopus Server "Master Key", a randomly generated string. You'll need this Master Key to restore the database to a new server. When connecting to an existing database, you will be prompted for this key during the setup process. If you have already set up the server, you can [change the Master Key](/docs/octopus-rest-api/octopus.server.exe-command-line/database) to work with the restored database. :::div{.problem} **Without your Master Key, backups are useless** Sensitive information is encrypted using AES-256 with the Master Key as the encryption key. Without this Master Key you will lose sensitive variables, passwords, and other encrypted data. Make sure you've taken a copy of the key! [Learn more about backing up the Master Key](/docs/security/data-encryption). ::: # High Availability Source: https://octopus.com/docs/administration/high-availability.md Octopus: High Availability (HA) enables you to run multiple Octopus Server nodes, distributing load and tasks between them. We designed it for enterprises that need to deploy around the clock and rely on the Octopus Server being available. :::figure ![High availability diagram](/docs/administration/high-availability/images/high-availability.svg) ::: An Octopus High Availability configuration requires four main components: - **A load balancer** This will direct user traffic bound for the Octopus web interface between the different Octopus Server nodes. - **Octopus Server nodes** These run the Octopus Server service. They serve user traffic and orchestrate deployments. - **A database** Most data used by the Octopus Server nodes is stored in this database. 
- **Shared storage** Some larger files - like [packages](/docs/packaging-applications/package-repositories), artifacts, and deployment task logs - aren't suitable to be stored in the database, and so must be stored in a shared folder available to all nodes. ## How High Availability Works High Availability (HA) distributes load between multiple nodes. There are two kinds of load an Octopus Server node encounters: 1. Tasks (Deployments, runbook runs, health checks, package re-indexing, system integrity checks, etc.) 2. User Interface via the Web UI and REST API (Users, build server integrations, deployment target registrations, etc.) Tasks are placed onto a first-in-first-out (FIFO) queue. By default, each Octopus Deploy node is configured to process five (5) tasks concurrently, which [can be updated in the UI](/docs/support/increase-the-octopus-server-task-cap). This limit is known as the task cap. Once the task cap is reached, the remaining tasks in the queue will wait until one of the other tasks is finished. Each Octopus Server node has a separate task cap. High Availability allows you to scale the task cap horizontally. If you have two (2) Octopus Server nodes each with a task cap of 10, you can process 20 concurrent tasks. Each node will pull items from the task queue and process them. Learn more in the [how High Availability processes tasks in the queue](/docs/administration/high-availability/how-high-availability-works) section. ## High Availability Limits Octopus Deploy's High Availability functionality provides many benefits, but it has limits. 1. All Octopus Server nodes must run the same version of Octopus Deploy. Upgrading to a newer version of Octopus Deploy will require an outage as you upgrade all nodes. 1. You cannot specify the node on which a deployment or runbook run will execute. Octopus Deploy uses a FIFO queue; nodes will pick up any pending tasks. 1. If a deployment or runbook run fails, it fails.
Octopus Deploy will not automatically attempt to re-run that failed deployment or runbook run on a different node. In our experience, changing nodes has rarely been the solution to a failed deployment or runbook run. 1. All the Octopus Server nodes must connect to the same database. 1. Octopus Server nodes have no concept of a "read-only" connection to a database. All online nodes perform write operations to the database, even when they are not processing tasks. 1. Octopus Server nodes are sensitive to latency to SQL Server and the file storage. The Octopus Server nodes, SQL Server, and file storage should all be located in the same data center or cloud region. The latency between availability zones within the same cloud region is acceptable. Latency between cloud regions or data centers is not. Generally, these limits are encountered when our users attempt to use Octopus Deploy's High Availability functionality for disaster recovery in a hot/hot configuration. A hot/hot configuration between two or more data centers or cloud regions is neither supported nor recommended. Please see our white paper on recommendations for [high availability and disaster recovery](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). ## Licensing Each Octopus Deploy SQL Server database is a unique **Instance**. Nodes are the Octopus Server service that connects to the database. High Availability occurs when two or more nodes connect to the same Octopus Deploy database. An HA Cluster refers to all components: the load balancer, nodes, database, and shared storage. For self-hosted customers, High Availability is available for the following license types: - Professional: limited to 2 nodes - Enterprise: unlimited nodes The node limit is included in the license key in the NodeLimit node.

```xml
<NodeLimit>Unlimited</NodeLimit>
```

If you do not have that node in your license key then you are limited to a single node.
If you recently purchased a license key and it is missing that node then reach out to [sales@octopus.com](mailto:sales@octopus.com). ## Implementing High Availability Please see our [implementation guide](/docs/best-practices/self-hosted-octopus/high-availability) for step by step instructions on how to install and implement high availability. # Report on deployments using Excel and XML Source: https://octopus.com/docs/administration/reporting/report-on-deployments-using-excel.md Ever wonder how many deployments you did this month? We'll help you answer this question by walking you through how to export your deployments to Excel, and how to view them in a pivot table. At a high-level, the steps are: 1. Export all deployments to an XML file. 2. Import the XML file in Excel. 3. Report on the data using a pivot table. :::figure ![](/docs/img/administration/reporting/images/3278122.png) ::: ## Export all deployments using the XML feed Before we can report on the data using Excel, we need to export it in a format that Excel can import. The easiest way to do this is using an XML file. Octopus exposes data on deployments through the `/api/reporting/deployments/xml` endpoint. You can use our [Octopus API clients](/docs/octopus-rest-api/getting-started#api-clients) to download the XML file.
PowerShell ```powershell $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } Invoke-RestMethod -Method Get -Uri "$octopusURL/api/reporting/deployments/xml" -Headers $header -OutFile "deployments.xml" ```
The command will produce an XML file with contents similar to the following:

```xml
<Deployments>
  <Deployment>
    <Environment>Production</Environment>
    <Project>Web App Deployment - Orchestration</Project>
    <Created>2020-05-15T17:07:47</Created>
    <Name>Deploy to Production</Name>
    <Id>Deployments-4992</Id>
  </Deployment>
  <Deployment>
    <Environment>Production</Environment>
    <Project>All OctoPetShop Deployment - Orchestration</Project>
    <Created>2020-05-15T17:07:23</Created>
    <Name>Deploy to Production</Name>
    <Id>Deployments-4991</Id>
  </Deployment>
  <!-- ..... -->
</Deployments>
```

This file is now ready to be imported into Excel. ## Import the XML file in Excel Now that we have an XML file containing our deployments, we can import it into Microsoft Excel. In this example we are using Excel 2013. 1. Open Microsoft Excel, and create a new, blank workbook. 2. On the **Data** ribbon tab, click **From Other Sources**, then choose **From XML Data Import**. ![](/docs/img/administration/reporting/images/3278132.png) 3. Excel will prompt you that the XML file does not refer to a schema, and that one will be created. Click **OK**. 4. Excel will ask you where to create a table. Choose the location in your workbook to put the new table, or just click **OK**. 5. You should now have a table that lists each of the deployments you have performed with Octopus, along with the name of the environment, project and the date of the deployment. ![](/docs/img/administration/reporting/images/3278131.png) ## Report on the data using a pivot table It's easy to turn the table of deployments into a pivot table for reporting. 1. Select any cell in the table, then from the **Insert** ribbon tab, click **PivotTable**. ![](/docs/img/administration/reporting/images/3278130.png) 2. Excel will ask you where to place the new pivot table. Click **OK** to add it to a new worksheet in your workbook. 3. You can now build the pivot table by dragging fields into the **Rows** or **Columns** of the pivot table. For example, here's a breakdown of deployments by environment. Note that the **Id** field was dragged to the **Values** area, and **Environment** was dragged to **Rows**.
:::figure ![](/docs/img/administration/reporting/images/3278129.png) ::: Here's another example, this time using **Environment** as a column, and **Project** as the rows: :::figure ![](/docs/img/administration/reporting/images/3278128.png) ::: You can also group the results by month or other measures of time. First, drag the **Created** field as a row. :::figure ![](/docs/img/administration/reporting/images/3278127.png) ::: Now, right-click any of the date values, and click **Group**. :::figure ![](/docs/img/administration/reporting/images/3278126.png) ::: Choose the level of granularity that you want to group by, then click **OK**. In this example we chose Months. :::figure ![](/docs/img/administration/reporting/images/3278125.png) ::: And the results will now be grouped by month: :::figure ![](/docs/img/administration/reporting/images/3278124.png) ::: If you aren't happy with the order that environments or other items are shown in, you can right-click and move them: :::figure ![](/docs/img/administration/reporting/images/3278123.png) ::: Finally, don't forget to add pretty graphs! :::figure ![](/docs/img/administration/reporting/images/3278122.png) ::: :::div{.hint} **Limitations** There are two major limits to this approach to be aware of: 1. As you have seen, only a small amount of data is available for reporting. 2. If you use [retention policies](/docs/administration/retention-policies), releases and deployments that have been deleted by the retention policy will not be available for reporting. ::: ## Learn more - [Reporting blog posts](https://octopus.com/blog/tag/reporting/1). # Jira Service Management Integration Source: https://octopus.com/docs/approvals/jira-service-management.md :::div{.hint} The Jira Service Management (JSM) Integration is available from Octopus **2022.3** onwards and requires an [enterprise subscription](https://octopus.com/pricing). [Contact us](https://octopus.com/company/contact) to request access to this feature.
::: ## Overview The Octopus Deploy/JSM integration allows users to block the execution of specifically configured deployments unless they have a corresponding approved JSM **Change Request** (aka issue). To enable this behavior, both the Octopus Project and the Environment you are deploying to must be configured, and the JSM connection must be set up, before deployments can be managed. ### Deployments | Project | Environment | Outcome | | --------------------------- | --------------------------- | -------------------------------- | | Change controlled | Change controlled | Approval required for deployment | | ***Not*** Change controlled | Change controlled | No approval required | | Change controlled | ***Not*** Change controlled | No approval required | ### Runbooks | Project | Environment | Runbook | Outcome | | --------------------------- | --------------------------- | ----------------- | -------------------- | | Change controlled | Change controlled | Enabled | Approval required | | Change controlled | Change controlled | ***Not*** Enabled | No approval required | | ***Not*** Change controlled | Change controlled | Enabled | No approval required | | Change controlled | ***Not*** Change controlled | Enabled | No approval required | ## Getting started The JSM integration requires Octopus **2022.3.12101** or later and an Octopus license with the JSM Integration feature enabled. Before you can use the Octopus Deploy/JSM integration, you'll need to: 1. Create a service account in JSM for use by Octopus. 1. In Jira, create or use an existing project of the *IT service management* type. 1. Request and install a new Octopus license required to enable the JSM feature. 1. Configure a connection from Octopus to JSM. 1. Configure which deployments require an approved CR. ### Configuring Jira Service Management :::div{.hint} The instructions in this section will require a JSM Administrator.
::: The Octopus Deploy/JSM integration requires security configuration in your target JSM instance. The integration will require a user account in JSM. The recommendation is to create a service account specifically for Octopus. Take note of the password assigned or generated for this user. ### Licensing For the JSM approval checks to be performed as part of the deployment process, an appropriate Octopus license must be configured in your Octopus instance. A JSM enabled Octopus license must be requested from Octopus directly, and cannot be managed through the self-service process. To request a license, register for the [JSM Early Access Program](https://octopusdeploy.typeform.com/jsm-eap). Once you have received your feature-enabled license, you can install it by navigating to **Configuration ➜ License**. An enabled license will include a block similar to below: ```xml ... ``` ### Configuring JSM connections :::div{.hint} The instructions in this section will require an Octopus Deploy Manager or Administrator ::: To connect your Octopus Deploy instance to JSM, navigate to **Configuration ➜ Settings ➜ Jira Service Management Integration**. Check the **Enabled** option. ![JSM Integration Settings page](/docs/img/approvals/jira-service-management/images/jsm-connections-1.png) Click on **ADD CONNECTION** and fill out the details. The JSM Base Url should be the root URL and include the protocol, e.g. `https://` :::figure ![JSM Integration Add Connection](/docs/img/approvals/jira-service-management/images/jsm-connections-2.png) ::: Press **TEST** to ensure that the connection details are working. Multiple JSM connections are supported; however, each project can only use one JSM connection.
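The Base Url rule above (a root URL that includes the protocol) can be sanity-checked before saving the connection. A quick sketch; the helper name is illustrative and not part of Octopus:

```python
from urllib.parse import urlparse

def looks_like_base_url(url: str) -> bool:
    """Rough check: protocol present and no path beyond the root."""
    parts = urlparse(url)
    return (
        parts.scheme in ("http", "https")
        and bool(parts.netloc)
        and parts.path in ("", "/")
    )

print(looks_like_base_url("https://example.atlassian.net"))  # True
print(looks_like_base_url("example.atlassian.net"))          # False: protocol missing
```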
### Configuring Issue Comments :::div{.hint} The instructions in this section will require an Octopus Deploy Manager or Administrator ::: If enabled, this feature will result in a linked change request having one or more Comments added during the deployment lifecycle which record details about the deployment and its execution status. To enable this feature navigate to **Configuration ➜ Settings ➜ Jira Service Management Integration**, click the **Customer Comments Enabled** checkbox shown below, then click **Save**. :::figure ![JSM Integration Enable Customer Comments](/docs/img/approvals/jira-service-management/images/jsm-customer-comments-settings.png) ::: ## Configuring approvals ### Setting up deployments for CR approval To enforce a deployment to require an approved CR, the **Change Controlled** setting needs to be enabled in **both** the project and the environment it is being deployed to. To enable a project to enforce a requirement for an approved CR: 1. Navigate to the project and then **Settings ➜ ITSM Providers**. 2. Check the **Jira Service Management Integration ➜ Change Controlled** setting. 3. Select your JSM connection in the **Jira Service Management Connection** setting and click **SAVE**. :::figure ![JSM Integration Project settings](/docs/img/approvals/jira-service-management/images/jsm-project-settings.png) ::: ### Setting up runbooks for CR approval :::div{.warning} This feature is only available for version **2025.2.7878** and later ::: To enforce a runbook run to require an approved CR, the runbook needs to be included in the **Enabled Runbooks** setting and the **Change Controlled** setting also needs to be enabled in **both** the project and the environment the runbook is run in. To enable a runbook to enforce a requirement for an approved CR: 1. Navigate to the project and then **Settings ➜ ITSM Providers**. 2. Check the **Jira Service Management Integration ➜ Change Controlled** setting. 3.
Select your JSM connection in the **Jira Service Management Connection** setting. 4. Select the runbooks you want to require an approved CR in the **Enabled Runbooks** setting, and then press **SAVE**. :::figure ![JSM Integration Runbooks settings](/docs/img/approvals/jira-service-management/images/jsm-runbooks-settings.png) ::: ### Default behavior Deployments and runbook runs resulting in a CR creation will produce an issue with a Request Type of **Request a change**. ### Supplying the CR number to a deployment If you add a variable to your project named `Octopus.JiraServiceManagement.ChangeRequest.Number`, then an Issue will not be created, and instead, the supplied number will be used during the approval check. This variable can also be [scoped](/docs/projects/variables/getting-started/#scoping-variables) or configured as a [Prompted variable](/docs/projects/variables/prompted-variables). From **2025.2** on this can be set under the `Jira Service Management Issue settings` section on the deployment or runbook run creation page. Setting the Issue number at the deployment or runbook run level will override any predefined variable. ### Setting up environments for CR approval To enable an environment to enforce a requirement for an approved CR, navigate to **Infrastructure ➜ Environments**, edit the environment via the overflow menu and check the **Jira Service Management Integration ➜ Change Controlled** setting, and then press **SAVE**. :::figure ![JSM Integration Environment settings](/docs/img/approvals/jira-service-management/images/jsm-environment-settings.png) ::: ## How it works Deployments where both the project and environment have **Change Controlled** enabled will query JSM for an approved Issue before execution can begin.
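The Change Controlled rules from the Deployments and Runbooks tables above can be summarized as a small truth-table sketch; the function and parameter names are illustrative, not an Octopus API:

```python
from typing import Optional

def approval_required(project_cc: bool, environment_cc: bool,
                      runbook_enabled: Optional[bool] = None) -> bool:
    """Mirrors the Deployments and Runbooks tables: an approved CR is
    required only when both the project and the environment are Change
    Controlled, and - for runbook runs - the runbook is also in the
    Enabled Runbooks setting."""
    if runbook_enabled is None:  # a plain deployment
        return project_cc and environment_cc
    return project_cc and environment_cc and runbook_enabled

print(approval_required(True, True))          # True: approval required
print(approval_required(False, True))         # False: no approval required
print(approval_required(True, True, False))   # False: runbook not enabled
```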
When a **Change Controlled** deployment is evaluated for approval, the following checks are performed: - If a specific CR number is available, via a variable named `Octopus.JiraServiceManagement.ChangeRequest.Number`, then only this CR will be checked. - If there is an existing CR with the specifically formatted **Short Description** (see [Title text matching](#title-text-matching) for more information), then this CR will be evaluated. - Otherwise, a new CR is created. - This will be a `Request a change` type Issue. - An Issue created by Octopus will have a **Short Description** in the format outlined in [Title text matching](#title-text-matching). When re-deploying a previous deployment, the same Issue will be used if it is still open. If it is closed the above process will be followed again. Once an Issue has been found, the deployment will only proceed if the **State** of the CR is `Implementing`. If the **State** is either `Preview`, `Planning`, `Authorize`, or `Awaiting Implementation`, the deployment will wait. Any other **State** will cause the deployment task to fail. For deployments using Highly Available (HA) Octopus, the logs will be written to the server logs instead of the task logs. :::div{.info} The only supported states are those defined in the default Issue lifecycle ::: The number of the Issue created or found will appear in the Task Summary tab of the executing Octopus deployment task. Clicking on the CR number in the message will navigate you to the CR in JSM. ![Deployment Task Summary awaiting JSM approval](/docs/img/approvals/jira-service-management/images/jsm-pending-issue-task-message.png) ### Title text matching Octopus supports matching a CR by setting the **Summary** of the CR to a well-known format: `Octopus: Deploy "{project name}" version {release version number} to "{environment name}"` e.g. `Octopus: Deploy "Web Site" version 1.0.1-hotfix-001 to "Dev"` :::div{.hint} The title must match the format **exactly**, including the double-quotes.
::: ### Populating CR fields through Octopus :::div{.warning} This feature is only available for version 2025.4.9247 and later ::: To control the content of the CRs, the variable `Octopus.JiraServiceManagement.Field[jsm_field]` can be set at the project level. These are contributed to the CR creation body as a dictionary, allowing any field to be set. For example, to set a custom `Summary` or `Due Date`: | Field | Variable | Example Value | | -------- | -------------------------------------------- | ------------------------------------------------------------ | | Summary | Octopus.JiraServiceManagement.Field[summary] | Custom Summary with #{SomeVariable} #{Octopus.Deployment.Id} | | Due Date | Octopus.JiraServiceManagement.Field[duedate] | 12-12-2025 | :::div{.hint} Setting a `Summary` will override the auto-generated Octopus summary. Because of [Title text matching](#title-text-matching), a non-unique summary may match an existing Issue and automatically progress the deployment, so ensure the resolved summary is unique, for example by including variables like the deployment or environment Id. ::: Custom fields can also be set once [your custom field ID is obtained for each field](https://confluence.atlassian.com/jirakb/get-custom-field-ids-for-jira-and-jira-service-management-744522503.html).
For example, to set a `Labels custom field`, `Number custom field`, or `Multi-select custom field`: | Field | Variable | Example Value | | ------------ | ----------------------------------------------------- | ---------------------------------------------------------------- | | Labels | Octopus.JiraServiceManagement.Field[customfield_1023] | ["yourUserDefinedLabel","anotherLabel"] | | Number | Octopus.JiraServiceManagement.Field[customfield_1024] | 235 | | Multi-select | Octopus.JiraServiceManagement.Field[customfield_1025] | [{"value":"yourFirstSelection"},{"value":"yourSecondSelection"}] | When adding `Multi-line text` custom fields, add line breaks using the plainText value editor dialog: ![line breaks created by hitting return when value editor is open](image.png) For a full list of available fields and values refer to the [JIRA docs](https://docs.atlassian.com/jira-servicedesk/REST/3.6.2/#fieldformats). ### Respecting change windows In addition to a change request being approved, a change must also be in its scheduled change window in order for the deployment to execute. The change window is controlled by the `Planned start` and `Planned end` on the linked Issue. :::div{.info} The following list assumes the linked change is in an **approved** state. ::: - If no `Planned start` and `Planned end` are specified there is no change window and the deployment will execute. - If only a `Planned start` is set the deployment will execute on or after the defined date. - If only a `Planned end` is set the deployment will execute on or before the defined date. - If `Planned start` and `Planned end` are specified the deployment will execute on or between the defined dates.
**If at any time a `Planned end` is exceeded and the linked change request is not approved, the deployment will be terminated.** ## Available Variables in a Deployment or Runbook :::div{.info} The following variables are only available in version 2025.4 and later ::: | Variable | Notes | | ---------------------------------------------------- | -------------------------------------------------------------- | | `Octopus.JiraServiceManagement.ChangeRequest.Number` | The number of the matched or created change request | | `Octopus.JiraServiceManagement.ChangeRequest.Id` | The system identifier of the matched or created change request | | `Octopus.JiraServiceManagement.Connection.Id` | | | `Octopus.JiraServiceManagement.Connection.Name` | | | `Octopus.JiraServiceManagement.Connection.BaseUrl` | | | `Octopus.JiraServiceManagement.Connection.Username` | | | `Octopus.JiraServiceManagement.Connection.Token` | | ## Known issues and limitations - Once an Issue is deemed to be related to a deployment, then only this Issue will be evaluated for the deployment to proceed. If the Issue is incorrect, you will need to cancel the deployment, close the CR and try the deployment again. - Each project only supports a single JSM connection. ## Older versions - Prior to version **2025.2.7878** runbooks did not support JSM approvals. # ServiceNow Integration Source: https://octopus.com/docs/approvals/servicenow.md :::div{.hint} The ServiceNow Integration feature is available from Octopus **2022.3** onwards and requires an [enterprise subscription](https://octopus.com/pricing). [Contact us](https://octopus.com/company/contact) to request a trial. ::: ## Overview The Octopus Deploy/ServiceNow integration allows you to block the execution of specifically configured deployments or runbooks unless they have a corresponding approved ServiceNow **Change Request** (CR). 
To enable this behavior, both the Octopus Project and the Environment you are deploying to, or running an enabled runbook in, must be configured, and the ServiceNow connection must be set up, before the execution can be managed. ### Deployments | Project | Environment | Outcome | |--|--|--| | Change controlled| Change controlled| Approval required for deployment | | **_Not_** Change controlled| Change controlled| No approval required | | Change controlled| **_Not_** Change controlled| No approval required | ### Runbooks | Project | Environment | Runbook | Outcome | |--|--|--|--| | Change controlled | Change controlled | Enabled | Approval required | | Change controlled | Change controlled | **_Not_** Enabled | No approval required | | **_Not_** Change controlled | Change controlled | Enabled | No approval required | | Change controlled | **_Not_** Change controlled | Enabled | No approval required | ## Getting started The ServiceNow integration requires Octopus **2022.3** or later and an Octopus enterprise subscription. Your ServiceNow instance must have the following modules installed and activated: - Change Management - Change Management Standard Change Catalog - Change Management State Model These are typically available as part of the ServiceNow ITSM product. Before you can use the Octopus Deploy/ServiceNow integration, you'll need to: 1. Configure ServiceNow OAuth credentials (for use by Octopus). 1. Request an enterprise license which is required to enable the ServiceNow feature. 1. Install the enterprise license (for Self-hosted customers only). 1. Configure a connection from Octopus to ServiceNow. 1. Configure which deployments require an approved CR. ### Configuring ServiceNow :::div{.hint} The instructions in this section will require a ServiceNow Administrator. ::: The Octopus Deploy / ServiceNow integration requires security configuration in your target ServiceNow instance.
Follow the [ServiceNow OAuth documentation](https://docs.servicenow.com/bundle/sandiego-platform-security/page/administer/security/task/t_SettingUpOAuth.html) to configure an OAuth endpoint for Octopus to use for authentication. Take note of the OAuth client id and client secret from the configuration. Next, the integration will require a user account in ServiceNow. The recommendation is to create a service account specifically for Octopus; once created, the user must be assigned the following two roles: - `sn_change_read` - `sn_change_write` Ensure that the new user has the `Web service access only` checkbox checked. Take note of the password assigned or generated for this user. ### Licensing For the ServiceNow approval checks to be performed as part of the deployment process, an [enterprise license](https://octopus.com/pricing) must be configured in your Octopus instance. This license must be requested from Octopus directly and cannot be managed through the self-service process. For Self-hosted customers, once you have received your enterprise license, you can install it by navigating to **Configuration ➜ License**. For Octopus Cloud customers, the license will be applied automatically for you. An enabled license will include a block similar to below: ```xml ... ``` ### Configuring ServiceNow connections :::div{.hint} The instructions in this section will require an Octopus Deploy Manager or Administrator ::: To connect your Octopus Deploy instance to ServiceNow, navigate to **Configuration ➜ Settings ➜ ServiceNow Integration**. Check the **Enabled** option. ![ServiceNow Integration Settings page](/docs/img/approvals/servicenow/images/servicenow-connections-1.png) Click on **ADD CONNECTION** and fill out the details. The ServiceNow Base Url should be the root URL and include the protocol, e.g.
`https://` :::figure ![ServiceNow Integration Add Connection](/docs/img/approvals/servicenow/images/servicenow-connections-2.png) ::: Press **TEST** to ensure that the connection details are working. Multiple ServiceNow connections are supported; however, each project can only use one ServiceNow connection. ### Configuring Work Notes :::div{.warning} This feature is only available for version 2022.3.1274 and later ::: :::div{.hint} The instructions in this section will require an Octopus Deploy Manager or Administrator ::: If enabled, this feature will result in a linked change request having one or more Work Notes added during the deployment lifecycle which record details about the deployment and its execution status. To enable this feature navigate to **Configuration ➜ Settings ➜ ServiceNow Integration**, click the **Work Notes Enabled** checkbox shown below, then click **Save**. :::figure ![ServiceNow Integration Enable Work Notes](/docs/img/approvals/servicenow/images/servicenow-worknotes-settings.png) ::: ## Configuring approvals ### Setting up deployments for CR approval To enforce a deployment to require an approved CR, the **Change Controlled** setting needs to be enabled in **both** the project and the environment it is being deployed to. To enable a project to enforce a requirement for an approved CR: 1. Navigate to the project and then **Settings ➜ ITSM Providers**. 2. Check the **ServiceNow Integration ➜ Change Controlled** setting. 3. Select your ServiceNow connection in the **ServiceNow Connection** setting, and then press **SAVE**.
:::figure ![ServiceNow Integration Project settings](/docs/img/approvals/servicenow/images/servicenow-cd-project-settings.png) ::: ### Setting up runbooks for CR approval :::div{.warning} This feature is only available for version **2025.2.7878** and later ::: To enforce a runbook run to require an approved CR, the **Change Controlled** setting needs to be enabled in **both** the project and the environment the runbook is run in, and additionally the runbook needs to be included in the **Enabled Runbooks** setting. To enable a runbook to enforce a requirement for an approved CR: 1. Navigate to the project and then **Settings ➜ ITSM Providers**. 2. Check the **ServiceNow Integration ➜ Change Controlled** setting. 3. Select your ServiceNow connection in the **ServiceNow Connection** setting. 4. Select the runbooks you want to require an approved CR in the **Enabled Runbooks** setting, and then press **SAVE**. :::figure ![ServiceNow Integration Runbooks settings](/docs/img/approvals/servicenow/images/servicenow-cd-runbooks-settings.png) ::: ### Standard, Normal, and Emergency Changes By default, deployments and runbook runs resulting in CR creation will produce a `Normal` change (i.e. one requiring explicit approval). Setting the **Standard Change Template Name** setting under **ITSM Providers** to the name of an active, approved **Standard Change Template** (as found in the Standard Change Catalog) will instead result in deployments and runbook runs of the project creating a `Standard` (i.e. low-risk, pre-approved) change. From **2024.2** you can create an `Emergency` change by selecting the Emergency Change setting on the deployment or runbook run creation page.
:::figure ![ServiceNow Integration Emergency Change setting](/docs/img/approvals/servicenow/images/servicenow-emergency-change.png) ::: ### Supplying the CR number to a deployment If you add a variable to your project named `Octopus.ServiceNow.ChangeRequest.Number`, then a CR will not be created, and instead, the supplied CR number will be used during the approval check. This variable can also be [scoped](/docs/projects/variables/getting-started/#scoping-variables). From **2024.2** on this can be set under the `ServiceNow Change Request settings` section on the deployment or runbook run creation page. Setting the CR number at the deployment or runbook run level will override any predefined variable. ### Setting up environments for CR approval To enable an environment to enforce a requirement for an approved CR, navigate to **Infrastructure ➜ Environments**, edit the environment via the overflow menu and check the **Change Controlled** setting, and then press **SAVE**. :::figure ![ServiceNow Integration Environment settings](/docs/img/approvals/servicenow/images/servicenow-environment-settings.png) ::: ### Continuous Delivery (CD) audit record :::div{.warning} This feature is only available for version 2022.3.7086 and later ::: This feature allows a CD workflow to use standard changes as audit records at the project level. When enabled a standard change will be created and moved to the `Implement` state, the deployment will execute and then the linked change will be moved to the `Review` or `Closed` state. CD audit record functionality is enabled under **ITSM Providers**. First set a valid **Change Template Name**, then set the **Automatic Transition** selection to your desired completion state, and click **Save** as per the following screenshot.
:::figure ![ServiceNow CD Audit Record project settings](/docs/img/approvals/servicenow/images/servicenow-cd-project-settings.png) ::: ## How it works Deployments where both the project and environment have **Change Controlled** enabled will query ServiceNow for an approved CR before execution can begin. When a **Change Controlled** deployment is evaluated for approval, the following checks are performed: - If a specific CR number is available, via a variable named `Octopus.ServiceNow.ChangeRequest.Number`, then only this CR will be checked. - If there is an existing CR with the specifically formatted **Short Description** (see [Title text matching](#title-text-matching) for more information), then this CR will be evaluated. - Otherwise, a new CR is created. - This will be a `Normal` change, or a `Standard` change if the project has a `Change Template Name` set. - A CR created by Octopus will have a **Short Description** in the format outlined in [Title text matching](#title-text-matching) unless [overridden by a variable](#populating-cr-fields-through-octopus). When re-deploying a previous deployment, the same CR will be used if it is still open. If it is closed the above process will be followed again. Once a CR has been found, the deployment will only proceed if the **State** of the CR is `Implement`. If the **State** is either `New`, `Assess`, `Authorize`, or `Scheduled`, the deployment will wait. Any other **State** will cause the deployment task to fail. :::div{.info} The only supported states are those defined in the default CR lifecycle ::: If the deployment is scheduled to execute in the future, then a CR will be created at the scheduled deployment time, and not when the deployment was requested. The number of the CR created or found will appear in the Task Summary tab of the executing Octopus deployment task. Clicking on the CR number in the message will navigate you to the CR in ServiceNow.
:::figure ![Deployment Task Summary awaiting ServiceNow approval](/docs/img/approvals/servicenow/images/servicenow-pending-cr-task-message.png) ::: ### Title text matching Octopus supports matching a CR by setting the **Short Description** of the CR to a well-known format: `Octopus: Deploy "{project name}" version {release version number} to "{environment name}"` e.g. `Octopus: Deploy "Web Site" version 1.0.1-hotfix-001 to "Dev"` :::div{.hint} The title must match the format **exactly**, including the double-quotes. ::: ### Populating CR fields through Octopus :::div{.warning} This feature is only available for version 2024.2.6455 and later ::: To control the content of the CRs, the variable `Octopus.ServiceNow.Field[snow_field]` can be set at the project level. These are contributed to the CR creation body as a dictionary, allowing any field to be set. For example, to set the `Assigned To` or `Short Description` fields you can use the following: | Field | Variable | Example Value| |--|--|--| |Assigned To|Octopus.ServiceNow.Field[assigned_to]|beth.anglin| |Short Description|Octopus.ServiceNow.Field[short_description]|Custom Short Description with #{SomeVariable} #{Octopus.Deployment.Id}| :::div{.hint} Setting a `Short Description` will override the auto-generated Octopus description. Because of [Title text matching](#title-text-matching), a non-unique description may match an existing CR and automatically progress the deployment, so ensure the resolved description is unique, for example by including variables like the deployment or environment Id. ::: :::div{.hint} The expected ServiceNow value doesn't always align with the displayed value. In the case of `Assigned To` the value displayed is `Beth Anglin`, but the expected value is the `User ID`, in this case `beth.anglin`. ::: For a full list of available fields and values refer to the [ServiceNow docs](https://developer.servicenow.com/dev.do#!/reference/api/utah/rest/change-management-api).
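Because the title match must be exact, the well-known **Short Description** format from Title text matching above can be reproduced programmatically when pre-creating CRs; a sketch (the helper name is illustrative):

```python
def cr_short_description(project: str, version: str, environment: str) -> str:
    # Format from "Title text matching" - the double-quotes around the
    # project and environment names are significant; the match is exact.
    return f'Octopus: Deploy "{project}" version {version} to "{environment}"'

print(cr_short_description("Web Site", "1.0.1-hotfix-001", "Dev"))
# Octopus: Deploy "Web Site" version 1.0.1-hotfix-001 to "Dev"
```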
### Respecting change windows :::div{.warning} This feature is only available for version 2022.3.3026 and later ::: In addition to a change request being approved, a change must also be in its scheduled change window in order for the deployment to execute. The change window is controlled by the `Planned start date` and `Planned end date` on the linked change request. :::div{.info} The following list assumes the linked change is in an **approved** state. ::: - If no `Planned start date` and `Planned end date` are specified there is no change window and the deployment will execute. - If only a `Planned start date` is set the deployment will execute on or after the defined date. - If only a `Planned end date` is set the deployment will execute on or before the defined date. - If `Planned start date` and `Planned end date` are specified the deployment will execute on or between the defined dates. **If at any time a `Planned end date` is exceeded and the linked change request is not approved, the deployment will be terminated.** ## Available Variables in a Deployment or Runbook :::div{.info} The following variables are only available in version 2025.4 and later ::: | Variable | Notes | |--|--| | `Octopus.ServiceNow.ChangeRequest.Number` | The number of the matched or created change request | | `Octopus.ServiceNow.ChangeRequest.SysId` | The system identifier of the matched or created change request | | `Octopus.ServiceNow.Connection.Id` | | | `Octopus.ServiceNow.Connection.Name` | | | `Octopus.ServiceNow.Connection.BaseUrl` | | | `Octopus.ServiceNow.Connection.OAuthClientId` | | | `Octopus.ServiceNow.Connection.OAuthClientSecret` | | | `Octopus.ServiceNow.Connection.Username` | | | `Octopus.ServiceNow.Connection.Password` | | ## Known Issues and limitations - Once a CR is deemed to be related to a deployment, then only this CR will be evaluated for the deployment to proceed.
If the CR is incorrect, you will need to cancel the deployment, close the CR and try the deployment again. - Each project only supports a single ServiceNow connection. - Each project only supports supplying the same **Change Template Name** across all environments in the [Lifecycle](/docs/releases/lifecycles) attached to the project or channel. ## Troubleshooting Errors occurring during deployment approval checks will appear in the "Task Failed" icon's tooltip. From **2024.2** on errors related to creating a change request are available through the task log. Additional information will also be available in the "System Diagnostic Report". If you are seeing errors in Octopus during deployments, ensure that the ServiceNow user account is authorized to call the required endpoints. The ServiceNow integration uses the following REST endpoints: | Purpose | HTTP Method | Path | Notes | |--------------------------------------|-------------|-------------------------------------------------|-------| | Authorize | `POST` | `/oauth_token.do` | | | Search for changes | `GET` | `/api/sn_chg_rest/change` | | | Create change | `POST` | `/api/sn_chg_rest/change/normal` | | | Search for Standard Change templates | `GET` | `/api/sn_chg_rest/change/standard/template` | Requires project **Change Template Name** configuration | | Create Standard Change from template | `POST` | `/api/sn_chg_rest/change/standard/{templateId}` | Requires project **Change Template Name** configuration | | Approve Standard Change | `PATCH` | `/api/sn_chg_rest/change/{changeId}` | Requires project **Automatic Transition** configuration | | Add work notes | `PATCH` | `/api/sn_chg_rest/change/{changeId}` | Requires **Work Notes Enabled** **ServiceNow** global configuration | ## Older versions - Prior to version **2025.2.7878** ServiceNow approvals for runbooks were not supported.
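The change-window rules from "Respecting change windows" above reduce to a simple date comparison, with a missing bound meaning an open-ended window. A sketch, assuming the linked change is already approved (names are illustrative):

```python
from datetime import datetime
from typing import Optional

def within_change_window(now: datetime,
                         planned_start: Optional[datetime],
                         planned_end: Optional[datetime]) -> bool:
    """Sketch of the change-window rules for an approved change:
    a missing bound means the window is open-ended on that side."""
    if planned_start is not None and now < planned_start:
        return False  # before the window: the deployment waits
    if planned_end is not None and now > planned_end:
        return False  # past the window: the deployment is terminated
    return True

now = datetime(2025, 6, 1, 12, 0)
print(within_change_window(now, None, None))                        # True: no window defined
print(within_change_window(now, datetime(2025, 6, 2, 0, 0), None))  # False: before Planned start date
```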
# Argo CD Authentication Source: https://octopus.com/docs/argo-cd/instances/argo-user.md Octopus Deploy fetches application, cluster, and log data from your Argo CD instance. This data is used in the Octopus UI to provide a rich integration, and also during step execution to determine which applications are to be updated. To request this data, Octopus must authenticate with Argo CD as a user with appropriate permissions. While a new token could be generated for an existing user, it is recommended that a new user be created within Argo CD to represent the Octopus interactions. To do this, the following must be performed: 1. Create a new user in Argo CD 2. Create RBAC policies to allow the new user read access to required resources 3. Generate an authentication token for the new user ## Create a new User To create a new user in Argo CD, you must update the `argocd-cm` configmap (typically in the `argocd` namespace). The following shows a configmap with a new user called `octopus`, which can generate an API key but cannot log in via the Argo CD web UI. ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-cm namespace: argocd labels: app.kubernetes.io/name: argocd-cm app.kubernetes.io/part-of: argocd data: # add an additional local user with the apiKey capability (no web UI login) # apiKey - allows generating API keys accounts.octopus: apiKey accounts.octopus.enabled: "true" ``` For more information, see the [Argo User docs](https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/). The newly created account will appear in the Argo CD web UI under **Settings ➜ Accounts**. Alternatively, from the command line, the Argo CD CLI can be executed to confirm the user creation was successful: ```bash argocd account list ``` Ensure the terminal output includes the `octopus` user with the apiKey capability. ## Add Required Permissions With the user created, an RBAC policy must be created allowing the new user to access the required data.
The RBAC policies are stored within the `argocd-rbac-cm` configmap. The following shows an `octopus` user which has read-only access to all application, cluster, and log data, and sync permissions for applications. ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-rbac-cm namespace: argocd data: policy.csv: | p, octopus, applications, get, *, allow p, octopus, applications, sync, *, allow p, octopus, clusters, get, *, allow p, octopus, logs, get, */*, allow ``` If the permissions are not correctly set, Octopus will be able to connect to Argo CD but will report an empty Application list for the connected Argo CD instance (as Octopus has insufficient permissions to read the list). For more information, see the [Argo RBAC docs](https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/). ## Generate Authentication Token There are two methods for creating a new authentication token in Argo CD: 1. Via the web UI under **Settings ➜ Accounts ➜ octopus** 2. Via the `Argo CD CLI` tool. To generate the authentication token for Octopus via the `Argo CD CLI` tool: 1. Log in as a user with permission to create API keys: ```bash argocd login <server> ``` You will be prompted for a username and password - select a user with the apiKey-creation capability. 2. Create the API token for the Octopus user by executing: ```bash argocd account generate-token --account octopus ``` The authentication token will be echoed to the terminal and must be copied into the Gateway's installation mechanism (either the Octopus UI or the Helm installation). For more information, see the [Argo CD CLI docs](https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd_account_generate-token/).
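As a mental model for how the `policy.csv` entries above grant access, the sketch below evaluates policy lines of the form `p, subject, resource, action, object, effect` against a request. This is a simplified illustration only: Argo CD's real RBAC evaluation is performed by Casbin and also supports roles (`g, ...` lines), deny rules, and configurable match modes. In this sketch, `*` does not match across `/`, which is why two-segment log objects such as `project/application` need the `*/*` pattern.

```javascript
// Simplified RBAC sketch; Argo CD's real evaluation uses Casbin.
// Convert a glob pattern to a RegExp where "*" matches within a single
// path segment (it does not cross "/").
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, "[^/]*") + "$");
}

// Returns true when any "p" line grants (subject, resource, action, object).
function isAllowed(policyCsv, subject, resource, action, object) {
  return policyCsv
    .trim()
    .split("\n")
    .map((line) => line.split(",").map((field) => field.trim()))
    .some(
      ([type, sub, res, act, obj, effect]) =>
        type === "p" &&
        sub === subject &&
        res === resource &&
        act === action &&
        effect === "allow" &&
        globToRegExp(obj).test(object)
    );
}

const policy = `p, octopus, applications, get, *, allow
p, octopus, applications, sync, *, allow
p, octopus, clusters, get, *, allow
p, octopus, logs, get, */*, allow`;

console.log(isAllowed(policy, "octopus", "logs", "get", "default/guestbook")); // → true
console.log(isAllowed(policy, "octopus", "applications", "delete", "guestbook")); // → false
```

The second check returns `false` because no policy line grants the `delete` action, matching the limited access verified in the next section.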
## Verify Permissions To ensure the `octopus` user has the correct permissions, the following Argo CD CLI commands can be executed: ```bash argocd account can-i --auth-token <token> get clusters '*' ``` ```bash argocd account can-i --auth-token <token> get applications '*' ``` ```bash argocd account can-i --auth-token <token> get logs '*/*' ``` These commands should all respond `yes`. To confirm the account's access is limited, execute: ```bash argocd account can-i --auth-token <token> delete applications '*' ``` This command should respond `no`. # Automated Installation Source: https://octopus.com/docs/argo-cd/instances/automated-installation.md Because the Octopus Argo CD Gateway is published as a Helm chart, several options exist for installing it through automated means: - Scripted install using the Helm CLI - Terraform - Argo CD Application Full documentation for all available Helm values is available on [GitHub](https://github.com/OctopusDeploy/octopus-argocd-gateway-chart-docs). These examples, and the Helm command provided in the Octopus Server portal, describe the minimum configuration required to install an Argo CD Gateway. ## Scripted helm The Octopus Server portal offers a process to aid in the creation of the required Helm command to install the Gateway chart.
However, it can also be scripted using a command similar to the following: ```bash helm upgrade --install --atomic \ --create-namespace --namespace octo-argo-gateway-release-name \ --version "*.*" \ --set registration.octopus.name="" \ --set registration.octopus.serverApiUrl="https://your-instance.octopus.app" \ --set registration.octopus.serverAccessToken="API-XXXXXXXXXXXXXXXX" \ --set registration.octopus.environments="{dev,staging,production}" \ --set registration.octopus.spaceId="Spaces-1" \ --set gateway.octopus.serverGrpcUrl="grpc://your-instance.octopus.app:8443" \ --set gateway.argocd.serverGrpcUrl="grpc://argocd-server.argocd.svc.cluster.local" \ --set gateway.argocd.authenticationToken="" \ octo-argo-gateway-release-name \ oci://registry-1.docker.io/octopusdeploy/octopus-argocd-gateway-chart ``` ## Terraform The Gateway Helm chart can be installed via Terraform. For a minimal install, the following is required. Update the version line to the most recent tag found on [Docker Hub](https://hub.docker.com/r/octopusdeploy/octopus-argocd-gateway-chart). ```hcl locals { gatewayName = "" octopus_api_key = "API-XXXXXXXXXXXXXXXX" octopus_address = "https://your-instance.octopus.app" octopus_grpc_address = "https://your-instance.octopus.app:8443" argo_auth_token = "" } resource "helm_release" "argo_gateway" { name = "octopus-argo-gateway" repository = "oci://registry-1.docker.io" chart = "octopusdeploy/octopus-argocd-gateway-chart" version = "1.*.*" atomic = true create_namespace = true namespace = "octopus-argo-gateway-your-namespace" timeout = 60 set = [ { name = "registration.octopus.name", value = local.gatewayName }, { name = "registration.octopus.serverApiUrl" value = local.octopus_address }, { name = "registration.octopus.serverAccessToken" value = local.octopus_api_key }, { name = "registration.octopus.spaceId" value = "Spaces-1" }, { name = "gateway.octopus.serverGrpcUrl" value = local.octopus_grpc_address }, { name = "gateway.argocd.serverGrpcUrl" value =
"grpc://argocd-server.argocd.svc.cluster.local" }, { name = "gateway.argocd.insecure" value = "true" }, { name = "gateway.argocd.plaintext" value = "false" }, { name = "gateway.argocd.authenticationToken" value = local.argo_auth_token } ] set_list = [{ name = "registration.octopus.environments" value = [octopusdeploy_environment.dev_env.name, octopusdeploy_environment.prod_env.id] }] } ``` ## Installing as an Argo CD Application The Octopus-Argo Gateway's helm chart can be installed via an Argo CD Application. The application YAML required to install the helm chart is as follows (replacing values as per previous examples): 1. Create the namespace ```shell kubectl create ns octopus-argo-gateway-your-namespace ``` 2. Generate Argo CD Authentication Token 2.1. Follow the instructions on the [Argo CD Authentication](/docs/argo-cd/instances/argo-user) guide 2.2. Save the token in a secret ```shell kubectl create secret generic argocd-auth-token -n octopus-argo-gateway-your-namespace --from-literal=ARGOCD_AUTH_TOKEN= ``` 3. Generate Octopus Deploy Api-Key 3.1. Follow the instreuctions on the [How to Create an API Key](/docs/octopus-rest-api/how-to-create-an-api-key) guide 3.2. Save the token in a secret ```shell kubectl create secret generic octopus-server-access-token -n octopus-argo-gateway-your-namespace --from-literal=OCTOPUS_SERVER_ACCESS_TOKEN= ``` 4. 
Apply the Argo CD application (or commit this manifest to your git-ops repository already synced by Argo CD) ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: finalizers: - resources-finalizer.argocd.argoproj.io name: octopus-argo-gateway spec: project: default source: repoURL: registry-1.docker.io/octopusdeploy chart: octopus-argocd-gateway-chart targetRevision: 1.23.0 helm: valuesObject: registration: octopus: name: serverApiUrl: https://your-instance.octopus.app serverAccessTokenSecretName: octopus-server-access-token serverAccessTokenSecretKey: OCTOPUS_SERVER_ACCESS_TOKEN spaceId: Spaces-1 gateway: octopus: serverGrpcUrl: grpc://your-instance.octopus.app:8443 argocd: serverGrpcUrl: grpc://argocd-server.argocd.svc.cluster.local authenticationTokenSecretName: argocd-auth-token authenticationTokenSecretKey: ARGOCD_AUTH_TOKEN autoUpdate: # should be disabled, otherwise the auto-update job will keep trying to update the instance, while argo cd syncs it back to original state enabled: false destination: server: https://kubernetes.default.svc namespace: octopus-argo-gateway-your-namespace ``` # AWS Managed Argo CD Source: https://octopus.com/docs/argo-cd/instances/aws-managed-argo-cd.md The Argo CD Gateway can be installed into an AWS EKS cluster and connect to an Argo CD instance managed by the Argo CD Capability. ## Differences from a standard Argo CD instance AWS managed Argo CD instances differ from standard self-hosted installations in the following ways: ### External URL Standard installations connect to Argo CD using the in-cluster Kubernetes service DNS name (e.g. `argocd-server.argocd.svc.cluster.local`). AWS managed Argo CD instances are not accessible via in-cluster DNS, so the publicly accessible EKS capabilities URL must be used instead. ### Valid TLS certificate AWS managed Argo CD instances are served with a publicly trusted TLS certificate. 
Unlike self-hosted installations that may use self-signed certificates, the **Argo CD instance uses self-signed certificates** option should remain unchecked to keep certificate verification enabled. ### gRPC-Web AWS EKS Argo CD instances are exposed through a load balancer that does not support native gRPC (HTTP/2). The gateway must be configured to use gRPC-Web, which encapsulates gRPC communication over HTTP/1.1, by setting `gateway.argocd.grpcWeb="true"` or `gateway.argocd.grpcWebRootPath="/argo/api"`. ## Installation The installation process follows the [standard process](/docs/argo-cd/instances#installing-the-octopus-argo-cd-gateway), with a few adjustments required for AWS managed Argo CD instances. 1. Replace the default value for the Argo CD service DNS name with the publicly accessible URL for the Argo CD instance, without the protocol prefix. For example: `xxxxxxxx.eks-capabilities.ap-southeast-2.amazonaws.com` 2. Uncheck the **Argo CD instance uses self-signed certificates** option 3. 
Copy the generated Helm command and append the following value: `--set gateway.argocd.grpcWeb="true"`. If your Argo CD instance's API is not hosted at the root path, set the following value instead: `--set gateway.argocd.grpcWebRootPath="/argo/api"` The resulting Helm command will look similar to the following: ```bash helm install --atomic \ --create-namespace --namespace octo-argo-gateway- \ --version "*.*" \ --set registration.octopus.name="" \ --set registration.octopus.serverApiUrl="https://your-instance.octopus.app/" \ --set registration.octopus.serverAccessToken="API-XXXXXXXXXXXXXXXX" \ --set registration.octopus.spaceId="Spaces-1" \ --set gateway.octopus.serverGrpcUrl="grpc://your-instance.octopus.app:8443" \ --set gateway.argocd.serverGrpcUrl="grpc://xxxxxxxx.eks-capabilities..amazonaws.com" \ --set gateway.argocd.insecure="false" \ --set gateway.argocd.plaintext="false" \ --set gateway.argocd.authenticationToken="" \ --set gateway.argocd.grpcWeb="true" \ \ oci://registry-1.docker.io/octopusdeploy/octopus-argocd-gateway-chart ``` # Terraform Bootstrap Source: https://octopus.com/docs/argo-cd/instances/terraform-bootstrap.md When provisioning a new cluster, it is possible to install Argo CD while also provisioning the required token secrets for the upcoming Argo CD Gateway installation. Once Argo CD is installed, the Argo CD Gateway can be installed using an Argo CD Application as described in [Automated Installation](/docs/argo-cd/instances/automated-installation). Another approach is to install the Argo CD Gateway as part of the Terraform configuration, as described under the [Note](#gateway).
Here is a simplified example to make this happen: | File | Purpose | | - | - | | [providers.tf](#providers) | Terraform + kubernetes, helm, null, time providers | | [variables.tf](#variables) | All inputs — kubeconfig, Argo CD URLs, Octopus credentials, gateway config | | [argocd.tf](#argo-cd) | Installs Argo CD via Helm; creates an `octopus` account with the apiKey capability | | [argocd-token.tf](#argo-cd-token) | Generates the Argo CD API key via the CLI and stores it in a k8s secret | | [gateway.tf](#gateway) | Creates Octopus API key secret; optionally installs the gateway Helm chart | | [outputs.tf](#outputs) | Useful one-liners and resource references | | [terraform.tfvars.example](#terraform-tfvars) | Copy → terraform.tfvars and fill in | ## Providers ```hcl # providers.tf terraform { required_version = ">= 1.5.0" required_providers { kubernetes = { source = "hashicorp/kubernetes" version = "~> 2.27" } helm = { source = "hashicorp/helm" version = "~> 2.13" } null = { source = "hashicorp/null" version = "~> 3.2" } time = { source = "hashicorp/time" version = "~> 0.11" } } } provider "kubernetes" { config_path = var.kubeconfig_path config_context = var.kube_context } provider "helm" { kubernetes { config_path = var.kubeconfig_path config_context = var.kube_context } } ``` ## Variables ```hcl # variables.tf # ─── Kubernetes ─────────────────────────────────────────────────────────────── variable "kubeconfig_path" { description = "Path to the kubeconfig file." type = string default = "~/.kube/config" } variable "kube_context" { description = "Kubernetes context to use. Defaults to the current context." type = string default = null } # ─── Argo CD ────────────────────────────────────────────────────────────────── variable "argocd_namespace" { description = "Namespace to install Argo CD into." type = string default = "argocd" } variable "argocd_chart_version" { description = "Argo CD Helm chart version (from https://argoproj.github.io/argo-helm)."
type = string default = "9.4.6" } variable "argocd_web_ui_url" { description = "Argo CD Web UI URL used for gateway registration (e.g. https://argocd.example.com)." type = string } variable "argocd_insecure" { description = "Skip TLS verification on the gRPC connection from the gateway to Argo CD." type = bool default = false } # ─── Octopus Deploy ─────────────────────────────────────────────────────────── variable "octopus_api_url" { description = "Octopus Deploy HTTP API URL used for registration (e.g. https://my-instance.octopus.app)." type = string } variable "octopus_grpc_url" { description = "Octopus Deploy gRPC URL including port (e.g. my-instance.octopus.app:443)." type = string } variable "octopus_api_key" { description = "Octopus Deploy API key used to register the gateway." type = string sensitive = true } variable "octopus_space_id" { description = "Octopus Deploy Space ID the gateway registers into." type = string default = "Spaces-1" } variable "octopus_environments" { description = "List of Octopus Deploy environment slugs or IDs to associate with the gateway." type = list(string) default = [] } variable "octopus_grpc_plaintext" { description = "Disable TLS on the Octopus gRPC connection. Only for development/local setups." type = bool default = false } # ─── Gateway ────────────────────────────────────────────────────────────────── variable "gateway_namespace" { description = "Namespace to install the Octopus Argo CD Gateway into." type = string default = "octopus-argocd-gateway" } variable "gateway_name" { description = "Name of the Argo CD Gateway" type = string } variable "gateway_chart_version" { description = "Helm chart version for the Argo CD Gateway" type = string } ``` ## Argo CD ```hcl # argocd.tf locals { # Derived from the Helm release name and namespace — no user input required. # The argo-cd chart names its server service as "-server".
argocd_grpc_url = "${helm_release.argocd.name}-server.${var.argocd_namespace}.svc.cluster.local:443" } resource "kubernetes_namespace" "argocd" { metadata { name = var.argocd_namespace } } # Install Argo CD via the official Helm chart. # Creates a dedicated "octopus" service account with apiKey capability and the # permissions required by Octopus Deploy (applications, clusters, logs). # Admin retains login-only access so the bootstrap script can generate the octopus token. resource "helm_release" "argocd" { name = "argocd" repository = null chart = "oci://ghcr.io/argoproj/argo-helm/argo-cd" version = var.argocd_chart_version namespace = kubernetes_namespace.argocd.metadata[0].name values = [ yamlencode({ configs = { cm = { # Dedicated service account for Octopus Deploy — API key only, no interactive login. "accounts.octopus" = "apiKey" } rbac = { "policy.default" = "role:readonly" "policy.csv" = <<-EOT g, admin, role:admin p, octopus, applications, get, *, allow p, octopus, applications, sync, *, allow p, octopus, clusters, get, *, allow p, octopus, logs, get, */*, allow EOT } } }) ] # Wait until all Argo CD pods are healthy before continuing. timeout = 600 wait = true } # Give the Argo CD server a moment to fully initialise its API # (the rollout-status check alone isn't always sufficient). resource "time_sleep" "wait_for_argocd" { depends_on = [helm_release.argocd] create_duration = "30s" } ``` ## Argo CD Token ```hcl # argocd-token.tf locals { # Name of the Kubernetes secret that will hold the generated Argo CD token. # The secret is created in the gateway namespace so the gateway pod can mount it. argocd_token_secret_name = "argocd-gateway-token" } # Use a null_resource + local-exec to: # 1. Wait for the Argo CD server deployment to be fully ready. # 2. Port-forward the Argo CD server locally. # 3. Log in with the argocd CLI using the auto-generated admin password. # 4. Generate an API key for the octopus account. # 5.
Store that key in a Kubernetes secret in the gateway namespace. # # Prerequisites (must be available on the machine running `terraform apply`): # - kubectl (configured to reach the target cluster) # - argocd (https://argo-cd.readthedocs.io/en/stable/cli_installation/) # - nc / netcat resource "null_resource" "argocd_token" { depends_on = [ time_sleep.wait_for_argocd, kubernetes_namespace.gateway, ] # Re-run whenever Argo CD is reinstalled or the gateway namespace changes. triggers = { argocd_release_id = helm_release.argocd.id gateway_namespace = var.gateway_namespace } provisioner "local-exec" { interpreter = ["bash", "-c"] command = <<-EOT set -euo pipefail echo ">>> Waiting for argocd-server deployment to be ready..." kubectl rollout status deployment/argocd-server \ --namespace "${var.argocd_namespace}" \ --timeout=300s echo ">>> Fetching initial admin password..." ARGOCD_PASSWORD=$(kubectl get secret argocd-initial-admin-secret \ --namespace "${var.argocd_namespace}" \ -o jsonpath='{.data.password}' | base64 --decode) echo ">>> Starting port-forward on localhost:18080 -> argocd-server:443..." # Use port 18080 to avoid conflicts with any local service on 8080. kubectl port-forward svc/argocd-server \ --namespace "${var.argocd_namespace}" \ 18080:443 & PF_PID=$! trap 'echo ">>> Cleaning up port-forward (PID $PF_PID)"; kill "$PF_PID" 2>/dev/null || true' EXIT echo ">>> Waiting for port-forward to become available..." for i in $(seq 1 20); do if nc -z localhost 18080 2>/dev/null; then echo " Ready after $i attempt(s)." break fi echo " Attempt $i/20 — retrying in 3s..." sleep 3 done echo ">>> Logging in to Argo CD..." argocd login localhost:18080 \ --username admin \ --password "$ARGOCD_PASSWORD" \ --insecure \ --grpc-web echo ">>> Generating API token for the octopus account..." 
ARGOCD_TOKEN=$(argocd account generate-token \ --account octopus \ --insecure \ --grpc-web) echo ">>> Storing token in Kubernetes secret '${local.argocd_token_secret_name}' (namespace: ${var.gateway_namespace})..." kubectl create secret generic "${local.argocd_token_secret_name}" \ --namespace "${var.gateway_namespace}" \ --from-literal=ARGOCD_AUTH_TOKEN="$ARGOCD_TOKEN" \ --dry-run=client -o yaml | kubectl apply -f - echo ">>> Done. Argo CD API token is ready." EOT } } ``` ## Gateway ```hcl # gateway.tf resource "kubernetes_namespace" "gateway" { metadata { name = var.gateway_namespace } } # Store the Octopus API key as a Kubernetes secret so it is never passed # as a plain-text Helm value. The chart reads it via serverAccessTokenSecretName. resource "kubernetes_secret" "octopus_api_key" { metadata { name = "octopus-server-access-token" namespace = kubernetes_namespace.gateway.metadata[0].name } data = { OCTOPUS_SERVER_ACCESS_TOKEN = var.octopus_api_key } type = "Opaque" } ``` :::div{.hint} **Note** To deploy the Argo CD Gateway using Helm directly, you can reuse the helm provider: ```hcl # gateway.tf resource "kubernetes_namespace" "gateway" { metadata { name = var.gateway_namespace } } # Store the Octopus API key as a Kubernetes secret so it is never passed # as a plain-text Helm value. The chart reads it via serverAccessTokenSecretName. resource "kubernetes_secret" "octopus_api_key" { metadata { name = "octopus-server-access-token" namespace = kubernetes_namespace.gateway.metadata[0].name } data = { OCTOPUS_SERVER_ACCESS_TOKEN = var.octopus_api_key } type = "Opaque" } # Install the Octopus Argo CD Gateway. # The chart is referenced from the published OCI Helm registry. # Both the Argo CD token and the Octopus API key are supplied via existing # Kubernetes secrets rather than inline values to avoid storing credentials # in Terraform state or Helm release history.
resource "helm_release" "gateway" { name = "octopus-argocd-gateway" repository = null chart = "oci://registry-1.docker.io/octopusdeploy/octopus-argocd-gateway-chart" version = var.gateway_chart_version namespace = kubernetes_namespace.gateway.metadata[0].name depends_on = [ # The Argo CD token secret must exist before the gateway pod starts. null_resource.argocd_token, kubernetes_secret.octopus_api_key, ] values = [ yamlencode({ gateway = { argocd = { # gRPC URL derived automatically from the Argo CD Helm release. serverGrpcUrl = local.argocd_grpc_url # Skip TLS verification if Argo CD is using a self-signed cert. insecure = var.argocd_insecure # Reference the secret created by null_resource.argocd_token. # The chart looks for the key ARGOCD_AUTH_TOKEN inside this secret. authenticationTokenSecretName = local.argocd_token_secret_name authenticationTokenSecretKey = "ARGOCD_AUTH_TOKEN" } octopus = { serverGrpcUrl = var.octopus_grpc_url plaintext = var.octopus_grpc_plaintext } } registration = { octopus = { name = var.gateway_name serverApiUrl = var.octopus_api_url spaceId = var.octopus_space_id environments = var.octopus_environments # Reference the Octopus API key secret created above. serverAccessTokenSecretName = kubernetes_secret.octopus_api_key.metadata[0].name serverAccessTokenSecretKey = "OCTOPUS_SERVER_ACCESS_TOKEN" } argocd = { webUiUrl = var.argocd_web_ui_url } } }) ] timeout = 300 wait = true } ``` ::: ## Outputs ```yaml # outputs.tf output "argocd_namespace" { description = "Namespace where Argo CD is installed." value = kubernetes_namespace.argocd.metadata[0].name } output "gateway_namespace" { description = "Namespace where the Octopus Argo CD Gateway is installed." value = kubernetes_namespace.gateway.metadata[0].name } output "argocd_token_secret" { description = "Kubernetes secret (namespace/name) that holds the generated Argo CD API token." 
value = "${var.gateway_namespace}/${local.argocd_token_secret_name}" } output "get_argocd_admin_password" { description = "One-liner to retrieve the Argo CD initial admin password." value = "kubectl get secret argocd-initial-admin-secret -n ${var.argocd_namespace} -o jsonpath='{.data.password}' | base64 --decode && echo" } output "get_argocd_token" { description = "One-liner to view the stored Argo CD API token." value = "kubectl get secret ${local.argocd_token_secret_name} -n ${var.gateway_namespace} -o jsonpath='{.data.ARGOCD_AUTH_TOKEN}' | base64 --decode && echo" } ``` ## Terraform tfvars ```yaml # terraform.tfvars.example # Copy this file to terraform.tfvars and fill in the values. # Never commit terraform.tfvars to source control — it contains secrets. # ─── Kubernetes ─────────────────────────────────────────────────────────────── kubeconfig_path = "~/.kube/config" kube_context = "my-cluster-context" # omit to use the current context # ─── Argo CD ────────────────────────────────────────────────────────────────── argocd_namespace = "argocd" argocd_chart_version = "9.4.6" # External Web UI URL — used during Octopus registration for the Argo CD link. argocd_web_ui_url = "https://argocd.example.com" # Set to true if Argo CD uses a self-signed certificate. argocd_insecure = false # ─── Octopus Deploy ─────────────────────────────────────────────────────────── octopus_api_url = "https://my-instance.octopus.app" octopus_grpc_url = "my-instance.octopus.app:8443" octopus_api_key = "API-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" # sensitive octopus_space_id = "Spaces-1" # List of environment slugs or IDs to associate with this gateway. octopus_environments = ["production", "staging"] # Set to true only when Octopus runs without TLS on its gRPC port (dev only). 
octopus_grpc_plaintext = false # ─── Gateway ────────────────────────────────────────────────────────────────── gateway_namespace = "octopus-argocd-gateway" # only used if deploying the octopus-argocd-gateway using the helm-provider gateway_name = "my-argocd-gateway" gateway_chart_version = "1.23.0" ``` # Validating CaC PRs Source: https://octopus.com/docs/best-practices/platform-engineering/validating-cac-prs.md One of the challenges when implementing the [shared responsibility (or eventual consistency) model](/docs/platform-engineering/levels-of-responsibility) is the potential for complex conflicts to be introduced to the downstream repositories. Without any controls on what changes can be made to a downstream project, it may become impractical to continue to push changes downstream. One way to constrain the changes introduced to downstream CaC Git repositories is to automatically validate changes during a pull request (PR). This allows the platform team to introduce minimum requirements that all downstream CaC projects must adhere to while also allowing internal customers to customize their projects. ## Parsing OCL CaC projects persist their configuration in the [Octopus Configuration Language (OCL)](/docs/projects/version-control/ocl-file-format). This format is parsed by the [`@octopusdeploy/ocl`](https://github.com/OctopusDeploy/ocl.ts) JavaScript library. The `@octopusdeploy/ocl` library offers a low-level parser that exposes individual OCL tokens. In addition, the library exposes a wrapper that allows the OCL data structure to be accessed via a read-only JavaScript object. This wrapped object can then be passed to any JavaScript library used to compare values or validate objects.
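To make the wrapper's shape concrete before looking at a full workflow, the sketch below validates a plain JavaScript object with the structure `parseOclWrapper` produces for a simple deployment process. The object literal is hand-written for illustration; real code would obtain the object by passing the OCL file contents to `parseOclWrapper`.

```javascript
// Hand-written stand-in for the object parseOclWrapper returns when given
// a deployment_process.ocl file; real code would call parseOclWrapper.
const deploymentProcess = {
  step: [
    {
      name: "Manual Intervention",
      action: [{ action_type: "Octopus.Manual" }],
    },
    {
      name: "Deploy Web App",
      action: [{ action_type: "Octopus.Script" }],
    },
  ],
};

// The same rule the expect-based workflow enforces: the process must not be
// empty, and its first step must be a manual intervention.
function firstStepIsManualIntervention(process) {
  return (
    Array.isArray(process.step) &&
    process.step.length > 0 &&
    process.step[0].name === "Manual Intervention" &&
    process.step[0].action[0].action_type === "Octopus.Manual"
  );
}

console.log(firstStepIsManualIntervention(deploymentProcess)); // → true
```

Accessing values such as `process.step[0].action[0].action_type` with dot notation is exactly how the validation scripts in the next section read properties from the parsed OCL.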
## Validating PRs with GitHub Actions The workflow shown below is an example that combines the `@octopusdeploy/ocl` and `expect` libraries to verify that the merge result of a CaC Git repository meets certain minimum requirements: ```yaml on: pull_request_target jobs: validate-ocl: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - uses: actions/setup-node@v3 with: node-version: '20.x' - run: npm install @octopusdeploy/ocl - run: npm install expect - uses: actions/github-script@v7 with: script: | const {parseOclWrapper} = require("@octopusdeploy/ocl") const fs = require("fs") const path =require("path") const {expect} = require("expect"); /** * This function performs the validation of the Octopus CaC OCL file * @param ocl The OCL file to parse */ function checkPr(ocl) { // Read the file const fileContents = fs.readFileSync(ocl, 'utf-8') // Parse the file const deploymentProcess = parseOclWrapper(fileContents) // Verify the contents expect(deploymentProcess.step).not.toHaveLength(0) expect(deploymentProcess.step[0].name).toBe("Manual Intervention") expect(deploymentProcess.step[0].action[0].action_type).toBe("Octopus.Manual") } try { checkPr('./deployment_process.ocl') } catch (error) { console.log(error.matcherResult.message) process.exit(1) } ``` Let's break this workflow down. The workflow is triggered on the `pull_request_target` event. This event runs workflows from the target branch, typically the `main` branch, meaning the pull request is validated according to the rules in the mainline branch. 
This prevents pull requests from bypassing checks by modifying the workflow file: ```yaml on: pull_request_target ``` We start by checking out the Git repository contents: ```yaml - uses: actions/checkout@v3 ``` The workflow requires Node.js to be installed: ```yaml - uses: actions/setup-node@v3 with: node-version: '20.x' ``` The required libraries are installed via `npm`: ```yaml - run: npm install @octopusdeploy/ocl - run: npm install expect ``` The verification script is executed with the `actions/github-script` action: ```yaml - uses: actions/github-script@v7 with: script: | ``` The libraries are exposed to the script with `require` statements: ```javascript const {parseOclWrapper} = require("@octopusdeploy/ocl") const fs = require("fs") const path =require("path") const {expect} = require("expect"); ``` The verification logic is defined in the function called `checkPr` whose parameter is the name of the OCL file to parse: ```javascript /** * This function performs the validation of the Octopus CaC OCL file * @param ocl The OCL file to parse */ function checkPr(ocl) { ``` The file contents are read to a string and passed to the `parseOclWrapper` function: ```javascript // Read the file const fileContents = fs.readFileSync(ocl, 'utf-8') // Parse the file const deploymentProcess = parseOclWrapper(fileContents) ``` The `deploymentProcess` variable references a read-only object that allows the data stored in the OCL file to be accessed with standard dot notation. 
Here we use the [`expect`](https://jestjs.io/docs/expect) library, often used with unit tests, to verify the properties of the OCL file:

```javascript
    // Verify the contents
    expect(deploymentProcess.step).not.toHaveLength(0)
    expect(deploymentProcess.step[0].name).toBe("Manual Intervention")
    expect(deploymentProcess.step[0].action[0].action_type).toBe("Octopus.Manual")
}
```

The final step is to call the `checkPr` function, catch any exceptions, and print them to the console:

```javascript
try {
    checkPr('./deployment_process.ocl')
} catch (error) {
    console.log(error.matcherResult.message)
    process.exit(1)
}
```

## Diagnosing validation errors

The output of your validation script depends on the libraries used. The `expect` library is a good choice because it provides detailed differences between the expected and actual values. The end result of a failed validation looks something like this, where the JSON representation of the OCL data is presented as a diff showing which properties differed between the expected and input objects:

![GitHub Actions failure screenshot](/docs/img/platform-engineering/github-action-failure-example.png)

## Tips and tricks

Because the validation process is plain JavaScript code, you are free to use any libraries and logic you need. The example below embeds a step OCL snippet as a string, parses the string, and uses the `toEqual` function to perform a deep comparison of the last step in the input OCL to the expected step:

```javascript
const {parseOclWrapper} = require("@octopusdeploy/ocl")
const fs = require("fs")
const path = require("path")
const {expect} = require("expect");

const LastStep = `
step "display-rest-api-id" {
    name = "Display REST API ID"

    action {
        action_type = "Octopus.Script"
        notes = "Displays the API Gateway ID created by the CloudFormation template."
        properties = {
            Octopus.Action.Script.ScriptBody = "echo \\"REST API ID: #{Octopus.Action[Create API Gateway].Output.AwsOutputs[RestApi]}\\""
            Octopus.Action.Script.ScriptSource = "Inline"
            Octopus.Action.Script.Syntax = "Bash"
        }

        worker_pool = "hosted-ubuntu"
    }
}`

/**
 * This function performs the validation of the Octopus CaC OCL file
 * @param ocl The OCL file to parse
 */
function checkPr(ocl) {
    // Read the file
    const fileContents = fs.readFileSync(ocl, 'utf-8')

    // Parse the file
    const deploymentProcess = parseOclWrapper(fileContents)

    // Parse the fixed step defined above
    const requiredStep = parseOclWrapper(LastStep)

    // Verify the contents
    expect(deploymentProcess.step[deploymentProcess.step.length - 1]).toEqual(requiredStep.step[0])
}

try {
    checkPr('./deployment_process.ocl')
} catch (error) {
    console.log(error.matcherResult.message)
    process.exit(1)
}
```

This example uses the [`lodash`](https://lodash.com/) library to clone the wrapper (because the wrapper is a read-only object) and remove the `name` property from both the template and actual OCL wrappers. This has the effect of comparing two OCL steps while disregarding any changes to the step name:

```javascript
const _ = require("lodash");
const {parseOclWrapper} = require("@octopusdeploy/ocl")
const fs = require("fs")
const path = require("path")
const {expect} = require("expect");

const LastStep = `
step "display-rest-api-id" {
    name = "Display REST API ID"

    action {
        action_type = "Octopus.Script"
        notes = "Displays the API Gateway ID created by the CloudFormation template."
        properties = {
            Octopus.Action.Script.ScriptBody = "echo \\"REST API ID: #{Octopus.Action[Create API Gateway].Output.AwsOutputs[RestApi]}\\""
            Octopus.Action.Script.ScriptSource = "Inline"
            Octopus.Action.Script.Syntax = "Bash"
        }

        worker_pool = "hosted-ubuntu"
    }
}`

/**
 * This function performs the validation of the Octopus CaC OCL file
 * @param ocl The OCL file to parse
 */
function checkPr(ocl) {
    // Read the file
    const fileContents = fs.readFileSync(ocl, 'utf-8')

    // Parse the file
    const deploymentProcess = parseOclWrapper(fileContents)

    // Parse the fixed step defined above
    const requiredStep = parseOclWrapper(LastStep)

    // Verify the contents, ignoring the step name on both sides
    const expectedWithoutName = _.cloneDeep(_.omit(requiredStep.step[0], ['name']))
    const sourceWithoutName = _.cloneDeep(_.omit(deploymentProcess.step[deploymentProcess.step.length - 1], ['name']))
    expect(sourceWithoutName).toEqual(expectedWithoutName)
}

try {
    checkPr('./deployment_process.ocl')
} catch (error) {
    console.log(error.matcherResult.message)
    process.exit(1)
}
```

# Installation Guidelines

Source: https://octopus.com/docs/best-practices/self-hosted-octopus/installation-guidelines.md

Octopus Deploy supports two hosting options:

1. Windows Server
2. Linux Container

There are three components to an Octopus Deploy instance:

- **Octopus Server nodes** These run the Octopus Server service. They serve user traffic and orchestrate deployments.
- **SQL Server Database** Most data used by the Octopus Server nodes is stored in this database.
- **Files or BLOB Storage** Some larger files - like [packages](/docs/packaging-applications/package-repositories), artifacts, and deployment task logs - aren't suitable to be stored in the database and are stored on the file system instead. This can be a local folder, a network file share, or a cloud provider's storage.

This document provides guidelines and recommendations for self-hosting Octopus Deploy.
## Supported Octopus Deploy Server Versions

Each self-hosted major.minor release of Octopus Deploy receives *critical patches and support* for a period of **six months**. For example, 2025.4 was released in December 2025 and will be supported through May 2026.

All new releases of Octopus Deploy run in Octopus Cloud first for at least one quarter. As a result, Octopus Cloud is always at least one version ahead of the self-hosted version. Because of that, we always recommend using the latest available release for your self-hosted installation of Octopus. Please see [Octopus.com/downloads](https://octopus.com/downloads) to download the latest version of Octopus Deploy.

For more details, please refer to our [blog post announcement from 2020](https://octopus.com/blog/releases-and-lts), when we introduced this release cadence.

## Host Octopus on Windows Server or as a Linux Container

Our recommendation is to host Octopus Deploy on Windows Server rather than the Octopus Server Linux Container unless you are okay with **all** of these conditions:

- You plan on using LDAP, Okta, Microsoft Entra ID, Google Workspace, or the built-in username and password to authenticate users. The current version of the Octopus Server Linux Container only supports Active Directory authentication via LDAP.
- You are okay running at least one [worker](/docs/infrastructure/workers) to handle tasks typically done by the Octopus Server. The Octopus Server Linux Container doesn't include PowerShell Core or Python.
- You are familiar with Docker concepts, specifically around debugging containers, volume mounting, and networking.
- You are comfortable with one of the underlying hosting technologies for Docker containers: Kubernetes, ACS, ECS, AKS, EKS, or Docker Swarm.
- You understand Octopus Deploy is a stateful, not a stateless, application, requiring additional monitoring.
:::div{.warning}
Due to how Octopus stores the paths to various BLOB data (task logs, artifacts, packages, imports, event exports, etc.), you cannot run a mix of Windows Servers and Octopus Linux containers connected to the same Octopus Deploy instance. A single instance should only be hosted using one method.
:::

We are confident in the Octopus Server Linux Container's reliability and performance. After all, Octopus Cloud runs on the Octopus Linux container in AKS clusters in Azure. For more information, please see [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container).

## Create a single production instance

One question we get asked a lot is "should we have a single instance to deploy to all environments, or an Octopus Deploy instance per environment?" Unless there is a business requirement, our recommendation is to have a single instance to deploy to all environments and use Octopus Deploy's RBAC controls to manage permissions. We recommend this to avoid the maintenance overhead involved with having an instance per environment.

Of the customers who opt for an instance per environment, we most often see an instance for the **development** and **test** environments with another instance for the **staging** and **production** environments. If you choose this instance configuration, you will need a process to:

- Clone all the variable sets and project variables, and notify you when a new scoped variable is added.
- Sync the deployment and runbook processes, but skip over steps assigned to **development** and **test**.
- Update any user step templates to the latest version.
- Ensure the same lifecycle names exist on both instances, even though the phases will differ.
- Copy releases but not deployments.
- Clone all the project channels.
- And more.

Using the Octopus Deploy API, all of that is possible; however, it will require diligence and maintenance on your part.
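To give a sense of the tooling involved, here is a minimal sketch of the first item in that list, written in plain JavaScript with hypothetical variable-set payloads shaped loosely like the Octopus REST API's variable set resource (`{ Variables: [{ Name: ... }] }`); it reports variables present on one instance but missing from the other:

```javascript
// Returns the names of variables present in sourceSet but absent from targetSet.
// The payload shape here is a simplified stand-in for the variable set
// resource returned by the Octopus REST API.
function findMissingVariables(sourceSet, targetSet) {
  const targetNames = new Set(targetSet.Variables.map((v) => v.Name));
  return sourceSet.Variables
    .filter((v) => !targetNames.has(v.Name))
    .map((v) => v.Name);
}

const devTestInstance = {
  Variables: [{ Name: "ConnectionString" }, { Name: "ApiKey" }],
};
const stagingProdInstance = {
  Variables: [{ Name: "ConnectionString" }],
};

// Logs the variable names missing from the staging/production instance
console.log(findMissingVariables(devTestInstance, stagingProdInstance));
```

A real implementation would fetch both payloads from each instance's API and would also have to compare scopes and values, which is where the ongoing maintenance cost lives.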
Unless there is a specific business requirement, such as security team requirements or regulatory requirements, we don't recommend taking that on.

## Run the Octopus Service, SQL Server, and files on separate infrastructure

It is possible to run the Octopus Deploy service, SQL Server, and file storage on the same Windows Server or container orchestrator. That is not recommended for production instances of Octopus Deploy. Aside from creating a single point of failure, it can lead to performance issues.

The database should run on a dedicated SQL Server, or on a managed SQL Server offering like AWS RDS or Azure SQL. The files should be stored on a NAS, SAN, or a managed file storage platform like Azure File Storage that supports SMB/CIFS. Shared services like these are managed by dedicated individuals, such as DBAs or Network Admins. They provide common day 2 maintenance such as backups, restores, and performance monitoring.

While it is possible to run SQL Server in a container, we do not recommend it for production use. Whenever possible, use a managed SQL Server like AWS RDS or Azure SQL, or use a SQL Server already managed by DBAs.

## Use a load balancer for the Octopus Deploy UI

The Octopus Deploy UI is a stateless React single-page application that leverages a RESTful API for its data. We recommend using a load balancer for Octopus Deploy as soon as possible. This will make it much easier to move to High Availability. In addition, it'll be easy to rebuild or change the underlying Windows Server or to move to Linux Containers in the future. Any standard load balancer, be it F5, NetScaler, or one provided by a cloud provider, will work. If you need a small load balancer, NGINX will provide all the functionality you'll need.

The recommendations for load balancers are:

- Start with round-robin or "least busy" mode.
- SSL offloading for all traffic over port 443 is fine (unless you plan on using polling Tentacles over web sockets).
- Use `/api/octopusservernodes/ping` to test service health.

:::div{.hint}
Octopus Deploy will return the name of the node in the `Octopus-Node` response header.
:::

If you plan on having external polling Tentacles connect to your instance through a load balancer / firewall, you will need to configure passthrough ports to each node. Our High Availability guides provide steps on how to do this.

## Compute Recommendations

The compute resources for your Octopus Deploy instance depend on the number of concurrent deployments and runbook runs. By default, Octopus Deploy processes five (5) deployments and runbook runs concurrently. That is known as the task cap, and it is [configurable](/docs/support/increase-the-octopus-server-task-cap). The higher the task cap, the more compute resources your instance will need.

:::div{.hint}
The Octopus Server will also process additional tasks, such as applying retention policies, health checks, processing triggers, syncing community library step templates, and syncing Active Directory users and groups. Generally, deployments and runbook runs are the most computationally expensive tasks, which is why we use them to determine how many compute resources you need.
:::

Use the following table as a starting point for your compute resources. You are responsible for monitoring the resources consumed and ensuring your Octopus Deploy infrastructure isn't under- or over-provisioned.
| Task Cap Per Node | Windows Compute Resources | Container Compute Resources | Database on Virtual Machines | Azure DTUs |
| ----------------- | ------------------------- | ---------------------------------- | ---------------------------- | ---------- |
| 5 - 10 | 2 Cores / 4 GB RAM | 150m - 1000m / 1500 Mi - 3000 Mi | 2 Cores / 4 GB RAM | 50 DTUs |
| 20 | 4 Cores / 8 GB RAM | 1000m - 2000m / 3000 Mi - 6000 Mi | 2 Cores / 8 GB RAM | 100 DTUs |
| 40 | 8 Cores / 16 GB RAM | 1250m - 2500m / 4000 Mi - 8000 Mi | 4 Cores / 16 GB RAM | 200 DTUs |
| 80 | 16 Cores / 32 GB RAM | 2000m - 4000m / 5000 Mi - 10000 Mi | 8 Cores / 32 GB RAM | 400 DTUs |
| 160 | 32 Cores / 64 GB RAM | 3500m - 7000m / 6000 Mi - 12000 Mi | 16 Cores / 64 GB RAM | 800 DTUs |

## Use High Availability at scale

While it is possible to configure a single-node instance to process 100+ deployments concurrently, it is not something we recommend. Once you go beyond 40 concurrent deployments, we recommend using Octopus Deploy's High Availability to scale horizontally. 40 to 80 concurrent deployments per Octopus Deploy node tends to be the "sweet spot" for the maximum number of concurrent deployments. Please see our [implementation guide for High Availability](/docs/best-practices/self-hosted-octopus/high-availability) for more details.
## Further reading

For further reading on installation requirements and guidelines for Octopus Deploy, please see:

- [Installation](/docs/installation)
- [Requirements](/docs/installation/requirements)
- [Permissions for Octopus Windows Service](/docs/installation/permissions-for-the-octopus-windows-service)
- [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container)
- [Configuring the Octopus Task Cap](/docs/support/increase-the-octopus-server-task-cap)

# Packaging a Service Fabric application

Source: https://octopus.com/docs/deployments/azure/service-fabric/packaging.md

The Service Fabric SDK contains PowerShell cmdlets for deploying an application from a given folder on disk. The Service Fabric application projects provide targets that can be accessed via MSBuild, or used directly from Visual Studio, to package the content of that folder. The scripts provided in these projects can also be used to deploy the resulting package, but they require access to the original source code tree to access the PublishProfiles and ApplicationParameters. This guide illustrates how the built-in targets can be extended to produce a package that can be deployed using Octopus Deploy.

## Service Fabric solution/project files

The Package target that is part of a Service Fabric application project is designed to produce a package folder containing the `ApplicationManifest.xml` file, plus a folder for each service. The content of this folder, however, is not enough to actually deploy a Service Fabric application. In order to perform a deployment, a PublishProfile and its corresponding ApplicationParameters file are required. When deploying straight from Visual Studio, the profile and parameters files are referenced from the source code, but when deploying through Octopus, they must be included in the NuGet/Zip package so they are available at deployment time.
## Packaging options

There are a couple of options available to bring all the required files together for the package. Illustrated below are two possible options. Both options are based off a build process that starts with the following MSBuild call (assumed to be executed from the solution's folder):

```
msbuild -t:Package MyFabricApplication\MyFabricApplication.sfproj
```

### Build step

The first option is to simply add another build step, using your build tool of choice, to copy the required PublishProfiles and ApplicationParameters files from the Service Fabric application folder to the _same_ folder that the above step outputs the package to.

```bash
xcopy /I MyFabricApplication\PublishProfiles\*.xml MyFabricApplication\pkg\Release\PublishProfiles
xcopy /I MyFabricApplication\ApplicationParameters\*.xml MyFabricApplication\pkg\Release\ApplicationParameters
```

### Custom build targets

Alternatively, you could create a custom MSBuild targets file that does the file copying for you. One advantage of this option is that it also executes if you use "right-click > Package" in Visual Studio. To do this, create a custom targets file, along the lines of the following, which hooks an `OctoSFPackage` target into the `PackageDependsOn` property and copies both folders into the package output:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <PackageDependsOn>
      $(PackageDependsOn);
      OctoSFPackage
    </PackageDependsOn>
  </PropertyGroup>
  <Target Name="OctoSFPackage">
    <ItemGroup>
      <PublishProfiles Include="$(MSBuildProjectDirectory)\PublishProfiles\*.xml" />
      <ApplicationParameters Include="$(MSBuildProjectDirectory)\ApplicationParameters\*.xml" />
    </ItemGroup>
    <!-- Copy the profile and parameter files into the package output folder -->
    <Copy SourceFiles="@(PublishProfiles)" DestinationFolder="$(PackageLocation)\PublishProfiles" SkipUnchangedFiles="false" />
    <Copy SourceFiles="@(ApplicationParameters)" DestinationFolder="$(PackageLocation)\ApplicationParameters" SkipUnchangedFiles="false" />
  </Target>
</Project>
```

If we assume that this file was saved as `OctoSFPackage.targets` in a `tools` folder below the solution's folder, you then need to add the following line as the last child element of the `Project` element of the sfproj file:

```xml
<Import Project="..\tools\OctoSFPackage.targets" />
```

Once this line is added to the sfproj file, the target will be executed whenever the Package target executes. The Package target gets executed when the MSBuild command above (which is what your build server would be calling) is run, or when you right-click the application project in Visual Studio and select Package.
## Package for Octopus with the Octopus CLI

Whichever option from above you select, the objective is to get the `PublishProfiles` and `ApplicationParameters` folders from the Service Fabric project into the same folder as its package output. The Octopus CLI can then be used to create a package that is compatible with the Octopus package feed. You can get the Octopus CLI from the [Octopus downloads](https://octopus.com/downloads) page.

PowerShell:

```powershell
octopus package zip create --id MyFabricApplication --version VERSION --base-path MyFabricApplication\pkg\Release --out-folder OUTPUT --include '**'
```

Bash:

```bash
octopus package zip create --id MyFabricApplication --version VERSION --base-path MyFabricApplication/pkg/Release --out-folder OUTPUT --include '**'
```
`VERSION` and `OUTPUT` are parameters provided by your build tool of choice; the exact syntax will depend on the tool.

## Final package structure

Once you have finished packaging, the package structure should look similar to the following, including an `ApplicationManifest.xml` file at the root, `ApplicationParameters` and `PublishProfiles` folders, plus folders for your services:

```
/ApplicationParameters/
/PublishProfiles/
/YourService1/
/YourService2/
/ApplicationManifest.xml
```

This structure includes the standard package output from Visual Studio (from a _Right-click > Publish_) plus the `ApplicationParameters` and `PublishProfiles` folders taken from the Service Fabric project.

# Add a certificate to Octopus

Source: https://octopus.com/docs/deployments/certificates/add-certificate.md

To add a certificate to Octopus, navigate to **Deploy ➜ Manage ➜ Certificates ➜ Add Certificate**.

:::figure
![Add certificate](/docs/img/deployments/certificates/images/add-certificate.png)
:::

When selecting your certificate file for upload, it must be one of the [supported file formats](/docs/deployments/certificates).

:::div{.hint}
**Security Recommendation: Scope your certificates to the appropriate environments**

If your certificate contains a production private key, it is strongly recommended to scope your certificate to the appropriate environments. This allows you to assign permissions based on environments, ensuring that only users with appropriate permissions in the scoped environments will be able to access the private key.
:::

# Run a script step

Source: https://octopus.com/docs/deployments/custom-scripts/run-a-script-step.md

Octopus also allows you to run standalone scripts as part of your deployment process. You can run a script on the Octopus Server, on [workers](/docs/infrastructure/workers/), or across the deployment targets with matching [target tags](/docs/infrastructure/deployment-targets/target-tags).
You can run scripts contained in a [package](/docs/deployments/packages/), in a Git repository, or ad-hoc scripts you've saved as part of the [step](/docs/projects/steps). You can use all the features we provide for [custom scripts](/docs/deployments/custom-scripts/), like [variables](/docs/deployments/custom-scripts/using-variables-in-scripts/), [passing parameters](/docs/deployments/custom-scripts/passing-parameters-to-scripts/), publishing [output variables](/docs/deployments/custom-scripts/output-variables), and [collecting artifacts](/docs/deployments/custom-scripts/#Customscripts-Collectingartifacts).

## Choosing where the script will run

When adding a script, you choose where, and in which context, the script will run. The options vary based on the infrastructure that's available to you. For instance, if you do not have any [workers](/docs/infrastructure/workers) configured, you will see the following options:

- Run on the Octopus Server
- Run on the Octopus Server on behalf of each deployment target
- Run on each deployment target (default)

If you do have workers configured, you will see the following options:

- Run once on a worker
- Run on a worker on behalf of each deployment target
- Run on each deployment target (default)

If you choose to run the step on a worker, you will also need to select which [worker pool](/docs/infrastructure/workers/worker-pools) Octopus should use for the step.

Choosing the right combination of **Target** and **Roles** enables some really interesting scenarios.
See below for some common examples:

| Target | Roles | Description | Variables | Example scenarios |
| ------ | ----- | ----------- | --------- | ----------------- |
| Deployment target | `web-server` `app-server` | The script will run on each deployment target with either of the `web-server` or `app-server` roles | The variables scoped to the deployment target will be available to the script. For example, `Octopus.Machine.Name` will be the deployment target's name | Apply server hardening or ensure standard pre-requisites are met on each deployment target |
| Octopus Server | | The script will run once on the Octopus Server | Scope variables to the Step in order to customize variables for this script | Calculate some output variables to be used by other steps or run a database upgrade process |
| Octopus Server | `web-server` | The script will run on the Octopus Server on behalf of the deployment targets with the `web-server` role. The script will execute once per deployment target | The variables scoped to the deployment target will be available to the script. For example, `Octopus.Machine.Name` will be the deployment target's name | Remove web servers from a load balancer as part of a [rolling deployment](/docs/deployments/patterns/rolling-deployments-with-octopus) where access to the load balancer API is restricted |

## Choosing where to source the script {#choosing-where-to-source-scripts}

You may also select the source of the script, either:

- An ad-hoc or inline script, saved as part of the step itself, or
- A script file in a Git repository, or
- A script file inside a package (shown below).
:::figure
![](/docs/img/deployments/custom-scripts/images/script-file-in-package.png)
:::

:::div{.success}
**Scripts from packages or Git repositories, versioning and source control**

Using scripts from inside a package or a Git repository is a great way to version and source control your scripts. (You can be assured the correct version of your script will be run when deploying each version of your application.) Both methods have benefits and suit different applications: choose the method best suited to your situation.
:::

:::div{.hint}
If you are storing your project configuration in a Git repository using the [Configuration as Code feature](/docs/projects/version-control), you can source files from the same Git repository as your deployment process by selecting Project as the Git repository source. When creating a release, the commit hash used for your deployment process will also be used to source the files. You can find more information about this feature in this [blog post on using Git resources directly in deployments](https://octopus.com/blog/git-resources-in-deployments).
:::

:::div{.hint}
When sourcing a script from a file inside a package, you cannot choose to run the step before packages are acquired.
:::

## Passing parameters to scripts

When you call external scripts (sourced from a file inside a package or Git repository), you can pass parameters to your script. This means you can write "vanilla" scripts that are unaware of Octopus, and test them in your local development environment. Read about [passing parameters to scripts](/docs/deployments/custom-scripts/passing-parameters-to-scripts).

:::figure
![](/docs/img/deployments/custom-scripts/images/5865636.png)
:::

## Referencing packages

In addition to being able to [source the custom script from a package](#choosing-where-to-source-scripts), it is often desirable to reference other packages.
Scenarios where this can be useful include:

- Executing a utility contained in a package
- Deploying a package in a manner for which there is no built-in step available; for example, pushing a package to a Content Management System
- Performing tasks which require multiple packages. For example:
  - Executing `NuGet.exe` to push another package (e.g. `Acme.Web`)
  - Referencing multiple container images and performing `docker compose`

:::figure
![Script Step Package References](/docs/img/deployments/custom-scripts/images/script-step-package-references.png)
:::

Package references can be added regardless of whether the script is sourced inline, from a Git repository, or from a package.

### Package reference fields

When adding a package reference, you must supply:

#### Package ID

The ID of the package to be referenced, or a variable expression.

#### Feed

The feed the package is sourced from, or a variable expression.

#### Name {#package-reference-fields-name}

A unique identifier for the package reference. In general, the Package ID is a good choice for the name. The reasons the Package ID may not be suitable as the name include:

- The Package ID may be bound to a variable expression (e.g. `#{Acme.Package.Id}`). Some of the places the name is used are not suitable for variable expressions.
- In rare situations it may be desirable to reference multiple versions of the same package. In this case they would need to be given different names.

#### Extract

Whether the package should be extracted. See [below](#referencing-packages-package-files) for information on the package file locations. This will not be displayed for certain package types (i.e. container images). This may also be bound to a variable expression.
:::figure
![Script Step Package References](/docs/img/deployments/custom-scripts/images/script-step-package-reference-add.png)
:::

### Accessing package references from a custom script

Having added one or more package references, it's reasonable to assume you wish to do something with them in your custom script.

#### Package variables

Package references contribute variables which can be used just as any other variable. These variables are (assuming a package reference named `Acme`):

| Variable name and description | Example |
| ----------------------------- | ------- |
| `Octopus.Action.Package[Acme].PackageId` <br/> The package ID | *Acme* |
| `Octopus.Action.Package[Acme].FeedId` <br/> The feed ID | *feeds-123* |
| `Octopus.Action.Package[Acme].PackageVersion` <br/> The version of the package included in the release | *1.4.0* |
| `Octopus.Action.Package[Acme].ExtractedPath` <br/> The absolute path to the extracted directory (if the package is configured to be extracted) | *C:\Octopus\Work\20210821060923-7117-31\Acme* |
| `Octopus.Action.Package[Acme].PackageFilePath` <br/> The absolute path to the package file (if the package has been configured to not be extracted) | *C:\Octopus\Work\20210821060923-7117-31\Acme.zip* |
| `Octopus.Action.Package[Acme].PackageFileName` <br/> The name of the package file (if the package has been configured to not be extracted) | *Acme.zip* |

The following PowerShell script example shows how to find the extracted path for a referenced package named `Acme`:

```powershell
$ExtractedPath = $OctopusParameters["Octopus.Action.Package[Acme].ExtractedPath"]
Write-Host "PWD: $PWD"
Write-Host "ExtractedPath: $ExtractedPath"
```

#### Package files {#referencing-packages-package-files}

If the package reference was configured to be extracted, then the package will be extracted to a subdirectory in the working directory of the script. This directory will be named the same as the package reference. For example, a package reference named `Acme` would be extracted to a directory similar to `C:\Octopus\Work\20180821060923-7117-31\Acme` (this is obviously a Windows directory; a script executing on a Linux target may have a path such as `/home/ubuntu/.octopus/Work/20180821062148-7121-35/Acme`).

If the package reference was _not_ configured to be extracted, then the un-extracted package file will be placed in the working directory. The file will be named as the package reference name, with the same extension as the original package file. For example, for a package reference named `Acme`, which resolved to a zip package, the file would be copied to a path such as `C:\Octopus\Work\20180821060923-7117-31\Acme.zip` (for Linux: `/home/ubuntu/.octopus/Work/20180821062148-7121-35/Acme.zip`).

These locations were designed to be convenient for use from custom scripts, as the relative path can be predicted, e.g. `./Acme` or `./Acme.zip`. If the absolute path is required, the variables above may be used.

#### Docker image package variables

In the scenario where your package reference is a Docker image, some additional variables will be contributed.
These variables are (assuming a package reference named `Acme`):

| Variable name and description | Example |
| ----------------------------- | ------- |
| `Octopus.Action.Package[Acme].Image` <br/> The fully qualified image name | *index.docker.io/Acme:1.4.0* |
| `Octopus.Action.Package[Acme].Registry` <br/> The URI of the registry from the feed where the image was acquired | *index.docker.io* |
| `Octopus.Action.Package[Acme].Version` <br/> The version of the image included in the release | *1.4.0* |
| `Octopus.Action.Package[Acme].Feed.UserName` <br/> The username from the feed where the image was acquired (if the feed is configured to use credentials) | *Alice* |
| `Octopus.Action.Package[Acme].Feed.Password` <br/> The password from the feed where the image was acquired (if the feed is configured to use credentials) | *Password01!* |

## Older versions

Sourcing scripts from your project's Git repository was added in Octopus **2024.1**. In earlier versions of Octopus, the Git repository source is not available.

# Manual approvals

Source: https://octopus.com/docs/deployments/databases/common-patterns/manual-approvals.md

Building trust is critical when automating database deployments. You are working on a process that changes your database, and unlike code, you cannot merely destroy and recreate a database. Most database tooling Octopus Deploy integrates with provides the ability to generate a *what-if* report that is used for approvals. It should show the SQL statements the tool is about to run, as seeing the actual SQL statements contributes to that building of trust. Additional information, such as release notes, also helps build trust.

This section walks through the various features as well as the deployment process. The high-level overview of the process is:

1. Use database deployment tooling to generate the *what-if* report and create an [artifact](/docs/projects/deployment-process/artifacts).
2. Send notifications to approvers.
3. Pause the deployment using a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals). Approvers sign in to Octopus Deploy, download the *what-if* report, review it, and give their approval.
4. Use database deployment tooling to deploy the database changes.
5. Once the deployment is complete, a notification of the deployment status is sent to the team.
6. In production, failures are sent to the DBAs.

:::figure
![An image of the manual approval deployment process](/docs/img/deployments/databases/common-patterns/images/manual_approval_deployment_process.png)
:::

Each step in this process requires several decisions. Each company we work with has its own set of business rules and regulations it must follow.
Below are recommendations to help get you going.

## Generating the what-if report

How the report is generated depends on the database tooling you chose. Below are links to the documentation for some of the most popular tools:

- [DbUp Generate HTML Report](https://github.com/DbUp/DbUp/blob/master/docs/more-info/html-report/)
- [Flyway Dry Runs](https://flywaydb.org/documentation/dryruns)
- [RoundhousE Dry Run](https://github.com/chucknorris/roundhouse/wiki/ConfigurationOptions)
- [SSDT/DacPac Deploy Report](https://docs.microsoft.com/en-us/sql/tools/sqlpackage?view=sql-server-ver15#deployreport-parameters-and-properties)
- [Redgate SQL Change Automation Create Database Release](https://documentation.red-gate.com/sca4/deploying-database-changes/automated-deployments-with-sql-change-automation-projects/deploying-sql-change-automation-projects)
  - Please note: [Redgate's step template](https://library.octopus.com/step-templates/c20b70dc-69aa-42a1-85db-6d37341b63e3/actiontemplate-redgate-create-database-release) automatically creates artifacts for you.
- [Redgate Oracle Deployment Suite](https://octopus.com/blog/database-deployment-automation-for-oracle-using-octopus-and-redgate-tools)

The goal is to create a single file that can be uploaded as an [artifact](/docs/projects/deployment-process/artifacts) for the approvers to review.

:::figure
![An artifact in Octopus Deploy](/docs/img/deployments/databases/common-patterns/images/manual_approval_artifacts.png)
:::

## Manual interventions

This document intentionally uses the word `approvers` instead of `DBAs` because, in our experience, especially as everyone is learning the tooling and process, there will be different approvers for each environment. Having a script run `Drop Table` unintentionally, even in `Development`, can ruin a day.
To prevent a bad script from being run, the deployment process is paused using a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) for someone to look for scripts that might cause significant harm to the database. For lower environments, for instance, `Development`, `Test`, or `QA`, the approver might be a developer, lead developer, or database developer. For production-level environments, `Staging`, `Pre-Prod`, or `Production`, the approvers are typically DBAs. :::div{.hint} We recommend you follow a crawl-walk-run approach for each team adopting database deployments. ::: The crawl phase is when a team starts adopting automated database deployments. During that time, there should be, at a minimum, two manual interventions: one for the lower environments, which developers on the team approve, and another for production environments, which DBAs approve. All of these approvals allow everyone to gain experience with the process and tooling. That, in turn, builds trust. :::figure ![A deployment process with two manual interventions](/docs/img/deployments/databases/common-patterns/images/manual_approval_two_manual_interventions.png) ::: The walk phase is when the team has some experience, and they feel confident they aren't going to check in anything that causes harm to the database. It is common for DBAs to be on a separate team. They talk to the development teams, but they are not involved with the day-to-day goings-on. However, they still want manual interventions for any environments they are responsible for to ensure scripts won't cause outages. :::figure ![A deployment process with one manual intervention](/docs/img/deployments/databases/common-patterns/images/manual_approval_one_manual_intervention.png) ::: The run phase can be found in [this documentation](/docs/deployments/databases/common-patterns/automatic-approvals). The run phase is when all the approvers trust the tooling and the process.
The approvers only want to be notified if specific commands, such as `drop table`, appear in a script. ## Involving DBAs earlier in the process Often, DBAs review scripts too late in the process. Having them review a script in `Production` is typically only a sanity check. If a deadline has to be met or promises have been made, it is difficult for a DBA to stop a release. Unless the DBA can prove, without a shadow of a doubt, the script contains a show-stopping bug, that release is going to `Production`. For `Production` deployments, the DBA is there to ensure the database doesn't accidentally get deleted. But having a DBA approve every change to `Development` isn't feasible. They'd spend all day, every day, approving and reviewing changes. :::div{.hint} We recommend a DBA review and approve scripts toward the end of the QA test effort. ::: Typically, when QA feels good about a release, they will sign off on a promotion to a `Staging` or `Pre-Prod` environment. It makes more sense for a DBA to approve a release to `Staging` or `Pre-Prod` rather than `Production`. Approving during a non-production deployment gives the DBA more time and less stress. At the same time, it might make sense to review changes for both `Staging` and `Production`. We often see a new version deployed to `Staging` several times before going to `Production`, so a significant difference can build up between `Staging` and `Production`. In that case, add another *what-if* report step, but have it run in `Staging` and generate the report for `Production`. :::figure ![A deployment process with a delta report generated for production](/docs/img/deployments/databases/common-patterns/images/manual_approval_generate_delta_report_for_production.png) ::: You might have only `Test` and `Production`. In that case, you could add an `Approver` environment that generates the *what-if* report for `Production` and has a manual intervention. ## Notifications Notifications come in many forms.
With Octopus Deploy you have many options: - [Slack](https://library.octopus.com/step-templates/99e6f203-3061-4018-9e34-4a3a9c3c3179/actiontemplate-slack-send-simple-notification) - [Microsoft Teams](https://library.octopus.com/step-templates/110a8b1e-4da4-498a-9209-ef8929c31168/actiontemplate-microsoft-teams-post-a-message) - [Email](/docs/projects/built-in-step-templates/email-notifications) - [Custom Step Template](/docs/projects/custom-step-templates) Regardless of your notification preference, we recommend creating a variable set to store notification values. The variable set gives you the ability to create a standard set of messages any project can use. :::figure ![](/docs/img/deployments/databases/common-patterns/images/manual_approval_notifications.png) ::: For easier approvals, the notification messages should include a deep link back to the release. That little change provides a nice quality-of-life improvement. The deep link to the deployment summary is: `#{Notification.Base.Url}/app#/#{Octopus.Space.Id}/projects/#{Octopus.Project.Id}/releases/#{Octopus.Release.Number}/deployments/#{Octopus.Deployment.Id}?activeTab=taskSummary` That sample uses `Notification.Base.Url` instead of the system variable `Octopus.Web.BaseUrl`. That variable choice is intentional, as our documentation for that value states: > Note that this (`Octopus.Web.BaseUrl`) is based off the server's ListenPrefixes and works in simple configuration scenarios. If you have a load balancer or reverse proxy this value will likely not be suitable for use in referring to the server from a client perspective, e.g. in email templates etc. A separate variable, such as `Notification.Base.Url`, provides a lot more options. For example, you can set that to a publicly exposed URL the approvers can use to approve changes from home. ## Keeping the signal-to-noise ratio low Imagine a message is sent to the team for every deployment to `Development` and `Test`. At first, that seems like a good idea.
But as time goes on, the number of deployments per day will increase. They are now deploying 20 times a day to each environment. Those notifications went from being useful to being noise. As experience is gained and trust is built, the number of notifications should go down. Our recommendation is to start sending notifications for every deployment, both successes and failures. As time goes on, adjust that down to failures only. ## Example View a working example on our [samples instance](https://samples.octopus.app/app#/Spaces-106/projects/dbup-sql-server-cloud-region/deployments). # Database configuration Source: https://octopus.com/docs/deployments/databases/configuration.md Database deployments are often more complicated than deploying a web application or service because production databases are typically clusters or high-availability groups. They are often composed of more than one node hidden behind a VIP (virtual IP address). :::figure ![](/docs/img/deployments/databases/configuration/images/common-database-with-vip.png) ::: Database deployment tooling doesn't need to run an executable directly on the database server. Instead, it needs to run somewhere that can connect to the database over a specific port as a specific user to run scripts: - SQL Server: `1433` - Oracle: `1521` - MySQL: `3306` - PostgreSQL: `5432` The user account running the scripts needs permission to modify the database. # Install Tentacles and Workers for database deployments Source: https://octopus.com/docs/deployments/databases/configuration/tentacle-and-worker-installation.md Do not install Tentacles directly on your database servers; instead, use [Workers](/docs/infrastructure/workers) or install Tentacles on jump boxes for database deployments. High-availability groups or clusters have 1 to N nodes, and the nodes are kept in sync by replication. You only need to deploy to the primary node, and replication will apply the changes to all the nodes.
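Whichever machine runs the deployment, it must be able to reach the database over the port listed in the section above. A quick Python reachability check you could run from a Worker or jump box (the host name is a placeholder):

```python
import socket

def can_reach(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# Example: check SQL Server's default port on a (placeholder) database VIP
# can_reach("sql-vip.internal.example", 1433)
```

A check like this makes "the Worker can't see the database" failures obvious before any deployment tooling runs.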
Installing a Tentacle on each node will not work because Octopus Deploy will see multiple Tentacles and attempt to deploy to multiple nodes. SQL PaaS offerings, such as [AWS RDS](https://aws.amazon.com/rds/) and [Azure SQL](https://azure.microsoft.com/en-us/services/sql-database/), are hosted database servers that don't allow anything to be installed on them. Don't use the Tentacles installed on your web or application servers for database deployments. A recommended security practice is the principle of least privilege. The account used by the website to connect to the database server should have restricted permissions. For example, if the website uses stored procedures, the account would only have permission to execute those stored procedures. The account used for deployments, on the other hand, needs elevated permissions because it has to make schema changes. **Please note:** This document only covers the infrastructure side of database deployments. You still need to configure a [project](/docs/projects) in Octopus Deploy to handle the actual deployments. ## Workers We recommend using [Workers](/docs/infrastructure/workers) to handle all of your database deployments. Workers have several advantages: 1. You can run multiple deployments on them at the same time. 2. You can place multiple VMs into a worker pool. If a VM goes down during a deployment, another VM will step in and take its place. :::figure ![](/docs/img/deployments/databases/configuration/images/standard-database-worker-pool.png) ::: :::div{.hint} Workers were added in Octopus **2018.7.0.** ::: ### General Worker pool configuration We recommend having separate Worker pools per deployment type. Out of the box, the Worker in the default Worker pool is your Octopus Server, and we don't recommend running database deployments from your Octopus Server directly for the following reasons: 1. You often need to install additional tooling or SDKs unrelated to Octopus Deploy. 2.
The database deployment tools might need to run on Linux while Octopus Deploy is running on Windows. 3. It can slow down other deployments because the Octopus Server will allocate resources for database deployments in addition to everything else. :::div{.hint} A worker can be assigned to more than one pool. ::: :::figure ![](/docs/img/deployments/databases/configuration/images/worker-pools-per-usage.png) ::: To create a new worker pool, go to **Infrastructure ➜ Worker Pools** and then click on the **Add Worker Pool** button: :::figure ![](/docs/img/deployments/databases/configuration/images/add-worker-pool.png) ::: In the modal dialog, enter the name of the worker pool you wish to add: :::figure ![](/docs/img/deployments/databases/configuration/images/add-worker-pool-modal.png) ::: Once you click the **Save** button, you will be presented with the Worker Pool maintenance screen. Your options are: - Name: The name of the worker pool. - Default: Indicates if this is the default Worker pool. **Warning:** Changing this may lead to failed builds, as all tasks previously done in the old default pool will now be done on this pool. - Description: A brief description of the Worker pool. :::figure ![](/docs/img/deployments/databases/configuration/images/worker-pool-edit-dialog.png) ::: When you add a worker to the pool, you are given a choice of listening, polling, and in the case of Linux, SSH: :::figure ![](/docs/img/deployments/databases/configuration/images/add-worker-to-pool.png) ::: ### Using Worker pools in a deployment process After you've added a Worker pool, a new option will appear in the deployment process, giving you the option to run the step once on a Worker and to choose which Worker pool should be used. :::figure ![](/docs/img/deployments/databases/configuration/images/use-worker-in-deployment-process.png) ::: :::div{.hint} Certain steps do not let you pick a Worker pool. That list includes **Deploy to IIS**, **Deploy a Windows Service**, and **Deploy a Package**.
If you are using a step template that relies on that functionality, you need to use [jump boxes](#tentacles-on-a-jump-box). ::: ### Worker pool per environment after Octopus Deploy 2020.1 A common security practice is to leverage Active Directory service accounts, with each environment having its own service account. The account that deploys to **Development** is prevented from deploying to **Test**. The account that deploys to **Production** is prevented from deploying to **Development**. This is accomplished with integrated security and running the Octopus Tentacle [as a specific user account](/docs/infrastructure/deployment-targets/tentacle/windows/running-tentacle-under-a-specific-user-account). This approach needs a Worker pool per environment: :::figure ![](/docs/img/deployments/databases/configuration/images/worker-pool-per-environment.png) ::: To start, create a dedicated Worker pool for each environment: :::figure ![](/docs/img/deployments/databases/configuration/images/environment-specific-worker-pools.png) ::: In your project variables, or in your variable set, create a new variable. Click the **Change Type** option and select **Worker Pool**: :::figure ![](/docs/img/deployments/databases/configuration/images/worker-pool-variable-type.png) ::: Select the Worker pool: :::figure ![](/docs/img/deployments/databases/configuration/images/worker-pool-variable-type-selection.png) ::: With that option, you can scope Worker pools to specific environments: :::figure ![](/docs/img/deployments/databases/configuration/images/worker-pool-variable-per-environment.png) ::: In the deployment process, a new option appears under Worker pool: **Runs on a worker from a pool selected via a variable**.
Update the desired steps to use that variable: :::figure ![](/docs/img/deployments/databases/configuration/images/use-worker-pool-variable.png) ::: ### Worker pool per environment before Octopus Deploy 2020.1 If you are using a version of Octopus Deploy prior to 2020.1, the process is slightly different. To start, create a dedicated Worker pool for each environment: :::figure ![](/docs/img/deployments/databases/configuration/images/environment-specific-worker-pools.png) ::: Next, create cloud region deployment targets (a cloud region is a group of deployment targets). :::div{.hint} Cloud region deployment targets do not count against your license. ::: Create a cloud region for each environment. In this example, a new [target tag](/docs/infrastructure/deployment-targets/target-tags) called `DbWorker` was created for these cloud regions. This will help differentiate these new deployment targets. Make a note of the Worker pool for that cloud region, and select the one that matches your environment of choice: :::figure ![](/docs/img/deployments/databases/configuration/images/create-cloud-region.png) ::: When done, you will have a cloud region per environment: :::figure ![](/docs/img/deployments/databases/configuration/images/environment-cloud-regions.png) ::: The execution location will now be a target tag, which is why the `DbWorker` tag was created. That tells the deployment to use the new cloud region. The cloud region will use the Worker pool: :::figure ![](/docs/img/deployments/databases/configuration/images/cloud-region-execution-location.png) ::: That step needs to be repeated for each step in the process: :::figure ![](/docs/img/deployments/databases/configuration/images/process-with-cloud-region-targets.png) ::: When a release is performed, it will use the environment-specific Worker pool. 
In the example below, a new release to the **Test** environment was done using the **Test Database Worker Region**: :::figure ![](/docs/img/deployments/databases/configuration/images/release-with-cloud-region.png) ::: ### Database deployments with Tentacles on a jump box {#tentacles-on-a-jump-box} If you are using an older version of Octopus Deploy, or your license limits you to one worker, then you need to install Tentacles on a jump box. The jump box sits between Octopus Deploy and the Database Server VIP. The Tentacle is running as a [service account](/docs/infrastructure/deployment-targets/tentacle/windows/running-tentacle-under-a-specific-user-account) with the necessary permissions to make schema changes. The tooling you choose for database deployments is installed on the jump box: :::figure ![](/docs/img/deployments/databases/configuration/images/database-with-jump-box.png) ::: In the event of multiple domains, a jump box is needed per domain. This can happen when there is a domain in local infrastructure and another in a cloud provider such as Azure. As long as port 10933 is open (for a listening Tentacle) or port 443 (for a polling Tentacle), Octopus will be able to communicate with the jump box. :::figure ![](/docs/img/deployments/databases/configuration/images/database-jump-box-multiple-domains.png) ::: It is possible to install many Tentacles on a single server. Please see [managing multiple instances](/docs/administration/managing-infrastructure/managing-multiple-instances) for more information. ![](/docs/img/deployments/databases/configuration/images/database-jump-box-multiple-tentacles.png) # SQL Server deployments Source: https://octopus.com/docs/deployments/databases/sql-server.md There are a number of tools Octopus Deploy integrates with to deploy to SQL Server, and it can be a bit overwhelming to get started. This section will help you get started with automating deployments to SQL Server.
We have written a number of "iteration zero" blog posts that examine the benefits and approaches to automating database deployments: - [Why consider database deployment automation?](https://octopus.com/blog/why-consider-database-deployment-automation) - [Database deployment automation approaches](https://octopus.com/blog/database-deployment-automation-approaches) - [How to design an automated database deployment process](https://octopus.com/blog/designing-db-deployment-process) - [Automated database deployment process: case study](https://octopus.com/blog/use-case-for-designing-db-deployment-process) - [Implementing an automated database deployment process](https://octopus.com/blog/implementing-db-deployment-process) - [Pitfalls with rollbacks and automated database deployments](https://octopus.com/blog/database-rollbacks-pitfalls) ## Common deployment process patterns There is a learning curve with adopting automated database deployments, and that can lead to quite a bit of trepidation; after all, databases are the lifeblood of most applications. There are some common deployment patterns you can adopt to build trust and level up tooling knowledge quickly. Learn more about [common patterns](/docs/deployments/databases/common-patterns). ## Permissions The database account used in the database deployment process needs enough permissions to make appropriate changes, but it should not have so much control that it could damage an entire server. Learn more about [user permissions for SQL Server](/docs/deployments/databases/sql-server/permissions). ## Guides We have written a number of guides and blog posts on the various tooling Octopus Deploy interacts with.
- [Docs: Deploying to SQL Server with Redgate SQL Change Automation](/docs/deployments/databases/sql-server/redgate) - [Docs: Deploying to SQL Server with a DacPac](/docs/deployments/databases/sql-server/dacpac) - [Blog: Using DbUp and Workers to Automate Database Deployments](https://octopus.com/blog/dbup-database-deployments) - [Blog: Deploying to SQL Server with Entity Framework Core](https://octopus.com/blog/will-it-deploy-episode-03) - [Blog: Ad hoc SQL data scripts](https://octopus.com/blog/database-deployment-automation-adhoc-scripts-with-runbooks) See working examples on our [samples instance](https://samples.octopus.app/app#/Spaces-106). ## Learn more - [Blog: Automated blue/green database deployments](https://octopus.com/blog/databases-with-blue-green-deployments) - [Blog: Using ad-hoc scripts in your database deployment automation pipeline](https://octopus.com/blog/database-deployment-automation-adhoc-scripts) # Google cloud CLI scripts Source: https://octopus.com/docs/deployments/google-cloud/run-gcloud-script.md Octopus Deploy can help you run scripts on targets within Google Cloud Platform. These scripts typically rely on tools being available when they execute. It is best that you control the version of these tools; your scripts will rely on a specific compatible version to function correctly. The easiest way to achieve this is to use an [execution container](/docs/projects/steps/execution-containers-for-workers) for your script step. If this is not an option in your scenario, we recommend that you provision your own tools on your worker. When executing a script against GCP, Octopus Deploy will automatically use your provided Google cloud account details to authenticate you to the target instance, or you can choose to use the service account associated with the target instance. This functionality requires the Google cloud (gcloud) CLI to be installed on the worker.
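Because the gcloud CLI must be present on the worker, a script step can fail fast with a clear message when it is missing. A minimal Python pre-flight sketch (not an Octopus feature, just a defensive pattern you could put at the top of a script step):

```python
import shutil
import sys

def require_tool(name):
    """Return the tool's path, or print a clear error and return None."""
    path = shutil.which(name)
    if path is None:
        sys.stderr.write(
            f"Required tool '{name}' was not found on the PATH. "
            "Install it on the worker or use an execution container.\n")
    return path

# Call require_tool("gcloud") before invoking any gcloud commands.
```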
## Run a gcloud script step {#RunningGcloudScript} :::div{.hint} The **Run gcloud in a Script** step was added in Octopus **2021.2**. ::: Octopus Deploy provides a `Run gcloud in a Script` step type for executing scripts in the context of a Google Cloud Platform instance. For information about adding a step to the deployment process, see the [add step](/docs/projects/steps) section. ![](/docs/img/deployments/google-cloud/run-gcloud-script/google-cloud-script-step-body.png) ## Learn more - How to create [Google cloud accounts](/docs/infrastructure/accounts/google-cloud) # Blue-green deployments in Octopus using Environments Source: https://octopus.com/docs/deployments/patterns/blue-green-deployments-with-octopus.md To implement [blue/green deployments](https://octopus.com/devops/software-deployments/blue-green-deployment/) in Octopus using [Environments](/docs/infrastructure/environments), create two environments, one for blue and one for green: :::figure ![](/docs/img/deployments/patterns/blue-green-deployments/images/blue-green-create-envs.png) ::: When deploying, you can then choose which environment to deploy to, either blue or green. The dashboard will show which release is in each environment. :::figure ![](/docs/img/deployments/patterns/blue-green-deployments/images/blue-green-dashboard.png) ::: You will need to configure your [lifecycle](/docs/releases/lifecycles) accordingly. You would typically have both your blue and green environments in a shared "Production/Staging" phase. :::figure ![](/docs/img/deployments/patterns/blue-green-deployments/images/blue-green-lifecycle.png) ::: ## Learn more - [View Blue/Green deployment examples on our samples instance](https://oc.to/PatternBlueGreenSamplesSpace). - [Change load-balancer group Runbook example](/docs/runbooks/runbook-examples/aws/change-load-balancer-group). - [Blue/Green deployment knowledge base articles](https://oc.to/BlueGreenTaggedKBArticles).
- [Ask Octopus Episode: Blue/Green Deployments](https://www.youtube.com/watch?v=qFqoVwVzeo0) - [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1). # Rollbacks Source: https://octopus.com/docs/deployments/patterns/rollbacks.md Being able to roll back to a known good state of code is often just as important as deploying software. In our experience, rolling back to a previous release is rarely as simple as "re-deploying the last successful deployment." This section will walk you through the patterns and pitfalls you'll encounter when configuring a rollback process. ## Built-in rollback support Octopus Deploy supports rollbacks out of the box. It always keeps the two most recent successful releases in any given environment, making it easy to roll back to the previous version. In addition, you can configure [retention policies](/docs/administration/retention-policies) to keep more releases on your target machines. For example, imagine you just deployed `1.1.21` to your **QA** servers. For whatever reason, that version does not work. You can re-deploy the previous version, `1.1.20`, to **QA** by going to that release and clicking on the **REDEPLOY** button. That scenario is supported by default; you won't have to configure anything else. :::div{.hint} Doing that will re-run the previous deployment process as it existed at release creation. It will re-extract any packages, re-run all the configuration transforms, re-run any manual intervention steps, etc. If it took an hour before, it will most likely take an hour again on re-deployment. ::: ## Ideal rollback scenarios It would be impossible to list every scenario in which a rollback will be successful, as each application is different. That being said, we have found rollbacks are most likely to succeed when one or more of the following is true. - Styling or markup-only changes. - Code changes with no public interface or contract changes. - Zero to minimal coupling with other applications.
- Zero to minimal coupled database changes (new index, tweaked view, stored procedure performance improvement). - Number of changes since the last release is low. Rollbacks are much more complicated (if not impossible) when you have tightly coupled database and code changes, are doing a once-a-quarter release with hundreds of changes, or the changes are tightly coupled with other applications. In those scenarios, we recommend **rolling forward**. ## Designing a rollback process Having the ability to roll back, even if rarely used, is a valuable option. What you don't want is to make up your rollback process in the middle of an emergency. If you want to have the ability to roll back, start thinking about what that process should look like now. Below are some questions to help get you started. - Who will trigger the rollback? Will it be automated or manual? - What platform are you using (Windows, Linux, Azure Web Apps, K8s, etc.)? Does it support multiple paths or versions? - When a rollback occurs, do you want to do a complete re-deployment of your application (including variable transforms)? - If you have manual interventions, should they run? - If you have database deployments, should they run? - Are there any other steps that should be skipped? - Should _any_ specific steps _only_ run during a rollback? For example, consider a project that has the following steps: 1. Run a runbook to create the database if not exists. 1. Deploy the database changes. 1. Deploy a service. 1. Deploy a website. 1. Pause deployment for manual verification of the application. 1. Notify stakeholders of deployment. Re-running that deployment process as-is for a rollback could lead to data loss (depending on the database deployment tool). That same process during a rollback might be: 1. ~~Run a runbook to create the database if not exists.~~ 1. ~~Deploy the database changes.~~ 1. Deploy a service. 1. Deploy a website. 1. Pause deployment for manual verification of the application. 1.
Notify stakeholders of deployment. ### Calculating deployment mode When a release is deployed to an environment, there are three possible "Deployment Mode" scenarios. - **Deploy**: The first time the release is deployed to the environment. For example, `2021.2.1` is deployed. - **Rollback**: The previous version is re-deployed to the environment. For example, `2021.2.1` is rolled back to `2021.1.10`. - **Redeployment**: The same release is re-deployed to the environment. For example, `2021.2.1` is re-deployed to the same environment because a new web server was added. Calculating deployment mode is done by comparing the system variable `Octopus.Release.CurrentForEnvironment.Number` with the system variable `Octopus.Release.Number`. - When `Octopus.Release.CurrentForEnvironment.Number` is less than `Octopus.Release.Number` then the deployment mode is **Deploy**. - When `Octopus.Release.CurrentForEnvironment.Number` is greater than `Octopus.Release.Number` then the deployment mode is **Rollback**. - When `Octopus.Release.CurrentForEnvironment.Number` is equal to `Octopus.Release.Number` then the deployment mode is **Redeployment**. We have created the step template [Calculate Deployment Mode](https://library.octopus.com/step-templates/d166457a-1421-4731-b143-dd6766fb95d5/actiontemplate-calculate-deployment-mode) to do that for you. ### Enabling and disabling steps based on deployment mode Once you know the deployment mode, you can enable or disable steps using [output variables](/docs/projects/variables/output-variables) and [variable run conditions](/docs/projects/steps/conditions/#variable-expressions). You can have steps run only on **Rollback**, only on **Deploy**, only on **Deploy** or **Redeployment**, or any other combination.
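The comparison rules above can be sketched in a few lines of Python (the version values are illustrative; in a real process, use the Calculate Deployment Mode step template rather than rolling your own):

```python
def deployment_mode(current_for_environment, release_number):
    """Compare Octopus.Release.CurrentForEnvironment.Number with
    Octopus.Release.Number to classify the deployment."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    current, incoming = parse(current_for_environment), parse(release_number)
    if current < incoming:
        return "Deploy"
    if current > incoming:
        return "Rollback"
    return "Redeployment"

print(deployment_mode("2021.1.10", "2021.2.1"))  # Deploy
print(deployment_mode("2021.2.1", "2021.1.10"))  # Rollback
print(deployment_mode("2021.2.1", "2021.2.1"))   # Redeployment
```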
The step template [Calculate Deployment Mode](https://library.octopus.com/step-templates/d166457a-1421-4731-b143-dd6766fb95d5/actiontemplate-calculate-deployment-mode) includes a number of [output variables](/docs/projects/variables/output-variables). - **DeploymentMode**: Will be `Deploy`, `Rollback`, or `Redeploy`. - **Trigger**: This indicates if the deployment was caused by a deployment target trigger or a scheduled trigger. It will be `True` or `False`. - **VersionChange**: Will be `Identical`, `Major`, `Minor`, `Build`, or `Revision`. It also includes a number of output variables to use in variable run conditions. - **RunOnDeploy**: Only run the step when the DeploymentMode is **Deploy**. - **RunOnRollback**: Only run the step when the DeploymentMode is **Rollback**. - **RunOnRedeploy**: Only run the step when the DeploymentMode is **Redeploy**. - **RunOnDeployOrRollback**: Only run the step when the DeploymentMode is **Deploy** or **Rollback**. - **RunOnDeployOrRedeploy**: Only run the step when the DeploymentMode is **Deploy** or **Redeploy**. - **RunOnRedeployOrRollback**: Only run the step when the DeploymentMode is **Redeploy** or **Rollback**. - **RunOnMajorVersionChange**: Only run the step when the VersionChange is **Major**. - **RunOnMinorVersionChange**: Only run the step when the VersionChange is **Minor**. - **RunOnMajorOrMinorVersionChange**: Only run the step when the VersionChange is **Major** or **Minor**. - **RunOnBuildVersionChange**: Only run the step when the VersionChange is **Build**. - **RunOnRevisionVersionChange**: Only run the step when the VersionChange is **Revision**.
The usage will be: ``` #{Octopus.Action[Calculate Deployment Mode].Output.RunOnRollback} ``` ## Automatic trigger of rollbacks Using the [Octopus CLI](/docs/octopus-rest-api/octopus-cli/deploy-release) or [one of our step templates](https://library.octopus.com/step-templates/0dac2fe6-91d5-4c05-bdfb-1b97adf1e12e/actiontemplate-deploy-child-octopus-deploy-project), it is possible to automatically trigger a rollback process. While it is possible to automatically trigger a rollback, this is not something we recommend unless you have a robust testing suite and you've tested your rollback process multiple times. We recommend first manually triggering the rollback. Once you are confident in your rollback process, look into updating your process to be automatically triggered. ## Rollback considerations Once a rollback process is in place, you'll need to decide when to use it. Specifically, when an issue occurs, you must decide whether to roll forward or roll back. When making that decision, here are a few questions to ask. - Reviewing the changelog carefully, what would happen if these changes were reverted? - Were there any database schema changes? - Are there any external components/applications depending on this deployment? - How long have the changes "been live" for users to use? Will they notice if a rollback occurs? ### Large changeset Rolling back a large changeset is much, much harder than rolling back a small changeset. When you roll back, you cannot pick a specific change in a specific application's binaries to roll back. Everything goes, or none of it goes. If you have made dozens and dozens of changes, attempting to untangle the web of what to roll back could take just as long as rolling forward. If it has been a month or more since the last release to **Production**, we recommend **rolling forward**. If it has been a few hours since the last release, for example, deploying to a **Test** or **QA** environment, then a **rollback** is suitable.
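The automatic trigger described above ultimately creates a new deployment of a previous release. A minimal Python sketch of that call against the Octopus REST API (the server URL, API key, and IDs are placeholders; the Octopus CLI or step template is the recommended route):

```python
import json
import urllib.request

def build_rollback_request(server, api_key, space_id, release_id, environment_id):
    """Build (but do not send) the POST request that re-deploys an
    existing release, which is how a rollback is triggered via the API."""
    body = json.dumps({"ReleaseId": release_id,
                       "EnvironmentId": environment_id}).encode()
    return urllib.request.Request(
        f"{server}/api/{space_id}/deployments",
        data=body,
        headers={"X-Octopus-ApiKey": api_key,
                 "Content-Type": "application/json"},
        method="POST",
    )

# req = build_rollback_request("https://octopus.example.com", "API-XXXX",
#                              "Spaces-1", "Releases-123", "Environments-42")
# urllib.request.urlopen(req)  # only once the rollback process is well tested
```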
### Database rollbacks

Rolling back code is much easier than rolling back a database **without data loss**. It becomes nearly impossible to roll back a database schema change once users start manipulating data. Consider the scenario in which a new table is added during a deployment. If you decide to roll back your application, you have two choices:

1. Delete the table (either via script or database restore).
2. Leave the table as-is.

The previous version of the code _should_ run fine if the table is left as-is. After all, the previous code version wasn't aware of that table and won't reference it or insert data directly. However, there is no way to know for sure the code will work if the database changes weren't tested with the previous version of the code. A stored procedure, view, or function could now expect data in that new table. Restoring a backup will also result in data loss; any data changed by users since that backup will be lost. Restoring a database backup should be for disaster recovery or an emergency rollback. In the event you have a schema change in your database, we recommend **rolling forward**.

### Dependent applications

In a perfect world, every service and project would be loosely coupled. While great in theory, the real world is often messy, and coupling exists. Services and their clients have an implied or explicit data contract and can be tightly coupled together. If either the service or the client violates that contract, a failure will occur. Imagine the scenario where a credit card service introduces a new endpoint in version `3.1.0`. Your application makes a change to leverage that new endpoint. If version `3.1.0` of the credit card service was rolled out along with your application and then rolled back to `3.0.0` a few days later, that endpoint would no longer exist. Any functionality your application depends on from that service would start failing.
In the event you make a contract change, we recommend **rolling forward** unless all the dependent applications can be rolled back as well or have fault tolerance built in to handle missing endpoints or unexpected results.

### Time since deployment

A timer starts once a release is deployed. Once that timer runs out, it becomes impossible to roll back with minimal user impact. The timer duration depends on the number of users and the day-to-day importance of the application. An application used by a dozen people once a day can be rolled back days or even a week after the last deployment. Meanwhile, an internal application used by everyone in the company for three hours a day might have only a few business hours before a rollback becomes impossible. That is due to user perception. If a release with a new feature and several bug fixes is deployed, users _will_ notice when a rollback occurs. Either they will see the feature disappear, or a bug they thought was fixed will reappear. Generally, unless a showstopping bug is found, limit rollbacks to outage windows. Once the userbase starts using the new release, we recommend **rolling forward**.

## Staging your deployments

In our experience, deployments (and rollbacks) have the highest chance of success when deployed to the target environment in a "staging" area on your production servers. The deployment is then verified, and assuming verification passes, the "staging" area becomes live. If there is a problem, the deployment is aborted, and all the pre-existing configuration remains untouched. That is the core concept behind these deployment patterns:

- [Blue/Green Deployments](https://martinfowler.com/bliki/BlueGreenDeployment.html)
- [Red/Black Deployments](https://octopus.com/blog/blue-green-red-black)
- [Canary Deployments](https://martinfowler.com/bliki/CanaryRelease.html)

In addition, a lot of popular tools have similar concepts and provide the necessary tooling.
Some examples include:

- [Azure Web App "Staging" slots](https://docs.microsoft.com/en-us/azure/app-service/deploy-staging-slots)
- [Kubernetes Blue/Green Deployments](https://octopus.com/blog/deconstructing-blue-green-deployments)
- [Canary Deployments on AWS Lambda Functions](https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/)

# Rolling back a Tomcat deployment

Source: https://octopus.com/docs/deployments/patterns/rollbacks/tomcat.md

This guide will walk through rolling back a Java application deployed to a Tomcat web server. We will be using the [PetClinic](https://bitbucket.org/octopussamples/petclinic/src/master/) Spring Boot example. The PetClinic application consists of two components:

- Database
- Website

Rolling back the database is out of scope for this guide. This [article](https://octopus.com/blog/database-rollbacks-pitfalls) describes reasons and scenarios in which rolling back a database could result in data loss or incorrect data. This guide assumes that there are no database changes or that the changes are backward compatible.

## Parallel deployments in Apache Tomcat

In Tomcat v7, Apache included the ability to do [parallel deployments](https://tomcat.apache.org/tomcat-9.0-doc/config/context.html#Parallel_deployment), which allows you to deploy multiple versions of the same application to a Tomcat server. If a version number is provided during deployment, Tomcat will combine it with the context path, renaming the `.war` to `context##version.war`. Once the new version is in a running state, Tomcat will redirect new sessions to the new version of the application. Existing sessions will continue against the old version until they expire. This functionality is ideally suited for supporting rollback scenarios when deploying to Tomcat.
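The versioned naming convention above can be illustrated with a short shell sketch; the context path and version below are example values, not taken from the guide:

```shell
# Illustration of Tomcat's parallel-deployment naming: the context path and
# the version number are combined into the deployed .war file name.
context="petclinic"
version="1.0.2"
war_name="${context}##${version}.war"
echo "${war_name}"   # petclinic##1.0.2.war
```

Deploying a second version alongside the first simply produces another `##`-suffixed file in the webapps directory, which is what lets the old and new versions run side by side.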
:::div{.warning}
The rollback strategy described in this guide will not work if the [undeployOldVersions](https://tomcat.apache.org/tomcat-9.0-doc/config/host.html) feature is enabled on Tomcat.
:::

## Existing deployment process

For this guide, we'll start with an existing deployment process for deploying the PetClinic application:

1. Create Database If Not Exists
1. Deploy Database Changes
1. Deploy PetClinic Web App
1. Verify Deployment
1. Notify Stakeholders

:::figure
![](/docs/img/deployments/patterns/rollbacks/tomcat/octopus-original-deployment-process.png)
:::

:::div{.success}
View the deployment process on our [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/01-petclinic-original/deployments/process). Please log in as a guest.
:::

## Zero-configuration rollback

The easiest way to roll back to a previous version is to:

1. Find the release you want to roll back to.
2. Click the **REDEPLOY** button next to the environment you want to roll back.

That redeployment will work because a snapshot is taken when you create a release. The snapshot includes:

- Deployment Process
- Project Variables
- Referenced Variable Sets
- Package Versions

Re-deploying the previous release will re-run the deployment process as it existed when that release was created. By default, the deploy package steps (such as deploy to IIS or deploy a Windows Service) will extract to a new folder each time a deployment is run, perform the [configuration transforms](/docs/projects/steps/configuration-features/structured-configuration-variables-feature/), and [run any scripts embedded in the package](/docs/deployments/custom-scripts/scripts-in-packages).

:::div{.hint}
Zero-configuration rollbacks should work for most of our customers. However, your deployment process might need a bit more fine-tuning. The rest of this guide is focused on disabling specific steps during a rollback process.
:::

## Simple rollback process

While a rollback can be an operational exercise, the most typical reason for one is that something is wrong with the release and you need to back out the changes. A bad release should also be [prevented from moving forward](/docs/releases/prevent-release-progression). The updated deployment process for a simple rollback would look like this:

1. Calculate Deployment Mode
1. Create Database If Not Exists (skip during rollback)
1. Deploy Database Changes (skip during rollback)
1. Deploy PetClinic Web App
1. Verify Deployment
1. Notify Stakeholders
1. Block Release Progression

:::figure
![](/docs/img/deployments/patterns/rollbacks/tomcat/octopus-simple-rollback-process.png)
:::

:::div{.success}
View the deployment process on our [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/02-petclinic-simplerollback/deployments/process). Please log in as a guest.
:::

### Calculate deployment mode

Calculate Deployment Mode is a [community step template](https://library.octopus.com/step-templates/d166457a-1421-4731-b143-dd6766fb95d5/actiontemplate-calculate-deployment-mode) created by Octopus Deploy. It compares the release number being deployed with the current release number for the environment. When the release number is greater than the current release number, it is a deployment. When it is less, it is a rollback. The step template sets a number of [output variables](/docs/projects/variables/output-variables), including ones you can use in variable run conditions.

### Skipping database steps

The two database steps, `Create Database If Not Exists` and `Deploy Database Changes`, should be skipped in a rollback scenario. Rolling back database changes could result in data loss or interrupt testing operations.
To skip these steps, we'll use one of the Variable Run Condition output variables from the Calculate Deployment Mode step:

```
#{Octopus.Action[Calculate Deployment Mode].Output.RunOnDeploy}
```

When looking at the deployment process from the Process tab, there isn't a quick way to determine under which conditions a step will be executed. Using the `Notes` field for a step is an easy way to provide users information at a glance.

:::figure
![](/docs/img/deployments/patterns/rollbacks/tomcat/octopus-step-notes.png)
:::

### Block release progression

Blocking release progression is an optional step to add to your rollback process. The [Block Release Progression](https://library.octopus.com/step-templates/78a182b3-5369-4e13-9292-b7f991295ad1/actiontemplate-block-release-progression) step template uses the API to [prevent the rolled back release from progressing](/docs/releases/prevent-release-progression). This step includes the following parameters:

- Octopus Url: `#{Octopus.Web.BaseUrl}` (default value)
- Octopus API Key: API Key with permissions to block releases
- Release Id to Block: `#{Octopus.Release.CurrentForEnvironment.Id}` (default value)
- Reason: This can be pulled from a manual intervention step or set to `Rolling back to #{Octopus.Release.Number}`

This step will only run on a rollback; set the run condition for this step to:

```
#{Octopus.Action[Calculate Deployment Mode].Output.RunOnRollback}
```

To unblock that release, go to the release page and click the **UNBLOCK** button.

## Complex rollback process

In the simple rollback scenario, the `.war` file is redeployed: it is extracted, variable replacement is executed, and the `.war` is repackaged before finally being sent to the Tomcat server's webapps location. In cases where the `.war` is very large, the extraction and repackaging could take quite some time, making the rollback process lengthy.
This is where the parallel deployments feature of Tomcat can benefit us, as all of that processing has already occurred during the initial deployment of that release. The new deployment process would look like this:

1. Calculate Deployment Mode
1. Rollback reason (only during rollback)
1. Create Database If Not Exists (skip during rollback)
1. Deploy Database Changes (only during deploy or redeploy)
1. Stop App in Tomcat (only when a previous release exists)
1. Deploy PetClinic Web App
1. Start App in Tomcat (only during rollback)
1. Verify the deployment
1. Notify Stakeholders
1. Block Release Progression (only during rollback)

:::figure
![](/docs/img/deployments/patterns/rollbacks/tomcat/octopus-complex-rollback-process.png)
:::

:::div{.success}
View the deployment process on our [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/03-petclinic-complexrollback/deployments/process). Please log in as a guest.
:::

Next, we'll go through the newly added and altered steps.

### Rollback reason

The Rollback Reason step is a [Manual Intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) step that prompts the user for the reason they are rolling back. The text entered is stored in an output variable which will be used in the Block Release Progression step further down the process.

### Stop app in Tomcat

Before we deploy a new version of our application, we first must stop the existing one. The Advanced Options section of the `Start/Stop App in Tomcat` step is where we specify which version of the application we're going to stop. For this guide, the version is identified as the previous release number, which is represented by the following variable:

```
#{Octopus.Release.CurrentForEnvironment.Number}
```

We also need to choose the option to **Stop the application**.
:::figure
![](/docs/img/deployments/patterns/rollbacks/tomcat/octopus-stop-application.png)
:::

This step will fail if there isn't a previous release to stop, so we'll need to add a run condition so it only runs when a previous release exists. That can be represented by using the following run condition:

```
#{if Octopus.Release.CurrentForEnvironment.Number}True#{/if}
```

### Deploy PetClinic Web App

To configure our deployment to work with the parallel deployment feature, we need to set the deployment version of our application. This is done in the **Advanced Options** section of the **Deploy to Tomcat Via Manager** step. In the following screenshot, you will see the Octopus variable `#{Octopus.Release.Number}` being used for the version number.

:::figure
![](/docs/img/deployments/patterns/rollbacks/tomcat/octopus-tomcat-advanced.png)
:::

The radio button at the bottom gives you the option to have this deployment be in a `Running` state or a `Stopped` state; the default is `Running`. For this guide, we only want the Deploy step to occur on a deployment or a redeployment. The Calculate Deployment Mode step provides us with an output variable called `RunOnDeployOrRedeploy` that contains the correct statement for a variable run condition for this step. Add the following as the value for the Variable Run Condition on this step:

```
#{Octopus.Action[Calculate Deployment Mode].Output.RunOnDeployOrRedeploy}
```

:::figure
![](/docs/img/deployments/patterns/rollbacks/tomcat/octopus-deploy-tomcat-run-condition.png)
:::

### Start app in Tomcat

When executing the rollback, we'll need to start the previous version. This step is only required during a rollback scenario, so you'll need to add the following to the Variable Run Condition:

```
#{Octopus.Action[Calculate Deployment Mode].Output.RunOnRollback}
```

### Block release progression

The `Rollback Reason` step captures the reason for the rollback.
We can pass the text entered in that step to the `Reason` field of this step by using the following output variable:

```
#{Octopus.Action[Rollback reason].Output.Manual.Notes}
```

:::div{.info}
The retention policy of Octopus Deploy will clean up any old versions of the applications in folders controlled by Octopus. However, the `##.war` files are not controlled by Octopus and will not be cleaned up by a retention policy. To assist in Tomcat maintenance, the Octopus team developed the [Undeploy Tomcat Application via Manager](https://library.octopus.com/step-templates/34f13b4c-64e1-42b4-ad1a-4599f25a850e/actiontemplate-undeploy-tomcat-application-via-manager) step template. This template will remove the application using the specified context path and optional version number.
:::

## Choosing a rollback strategy

We recommend that you start with the simple rollback strategy, moving to the complex strategy if you determine that the simple method doesn't suit your needs.

# Preparing your Terraform environment

Source: https://octopus.com/docs/deployments/terraform/preparing-your-terraform-environment.md

When running Terraform on a local PC, the state of the resources managed by Terraform is saved in a local file. This state is queried to learn which resources already exist in order to properly apply updates and destroy resources. When Terraform is run by Octopus, this state file is not preserved between executions. This means a remote backend must be configured for almost all practical applications of Terraform through Octopus, allowing the state information to be preserved between Terraform steps. Refer to the [Terraform documentation](https://www.terraform.io/docs/backends/index.html) for more information on configuring backends.

## Caution on Terraform runs

By default, Terraform stores state files [locally](https://developer.hashicorp.com/terraform/language/backend/local).
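For contrast, a remote backend keeps the state file outside the ephemeral Octopus work area. As a minimal sketch using Terraform's S3 backend, where the bucket, key, and region are placeholder assumptions rather than values from this guide:

```hcl
terraform {
  # Placeholder values: substitute your own bucket, key, and region.
  backend "s3" {
    bucket = "my-octopus-terraform-state"
    key    = "my-project/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Any supported remote backend (Azure Storage, GCS, HCP Terraform, and so on) achieves the same goal of preserving state between Terraform steps.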
If a remote backend is not configured, attempts to update or delete existing resources will fail because the **state file is lost between deployments**. We therefore recommend using a remote backend, such as [HCP Terraform](#hcp-terraform), when using Terraform with Octopus. There are many options for storing state files; you can learn more about storing state remotely in the [Terraform documentation](https://www.terraform.io/docs/backends/index.html).

## HCP Terraform

[HCP Terraform](https://www.hashicorp.com/en/products/terraform) and Terraform Enterprise are Terraform execution platforms and remote backends. Using Terraform Enterprise or HCP Terraform for execution and/or state management can be achieved using the [Terraform cloud block](https://developer.hashicorp.com/terraform/language/block/terraform).

### Basic Example

```hcl
terraform {
  cloud {
    organization = "my-org"
    workspaces {
      project = "Default Project"
      name    = "base_layer"
    }
  }
}
```

### Common Example

A common setup will use a combination of Octopus environment variables, Octopus project variables, and hardcoded values. The example below shows the `organization` being inherited from an environment variable in Octopus, the HCP Terraform project derived from the Octopus project name, and the workspace name derived from the project, environment, and a unique string.
```hcl
# organization is inherited from ENV variable TF_CLOUD_ORGANIZATION
terraform {
  cloud {
    workspaces {
      project = "#{Octopus.Project.Name}"
      # Workspace names must be unique across an entire HCP Terraform organization
      name = "#{Octopus.Project.Name}-base-layer-#{Octopus.Environment.Name}"
    }
  }
}
```

Cloud block settings can be set via [environment variables](https://developer.hashicorp.com/terraform/language/block/terraform#tf_cloud_organization) and omitted from the HCL:

- `TF_CLOUD_ORGANIZATION`
- `TF_CLOUD_PROJECT`
- `TF_WORKSPACE`

_Note: if you set all 3 environment variables, an empty cloud block **must** still exist in the HCL root configuration (e.g. `terraform { cloud {} }`)._

### Adding environment variables to an Octopus project

You can add environment variables to your Octopus project like this:

:::figure
![setting environment variables in octopus project](/docs/img/deployments/terraform/preparing-your-terraform-environment/environment_variables.png)
:::

## Managed cloud accounts

You can optionally prepare the environment that Terraform runs in using the details defined in accounts managed by Octopus. If an account is selected, those credentials do not need to be included in the Terraform template. Using credentials managed by Octopus is optional. These credentials can be saved directly into the Terraform template if that approach is preferable. Credentials defined in the Terraform template take precedence over any credentials defined in the step.

The following pages provide instructions on creating cloud accounts:

- [Azure accounts](/docs/infrastructure/accounts/azure)
- [AWS accounts](/docs/infrastructure/accounts/aws)
- [Google cloud accounts](/docs/infrastructure/accounts/google-cloud)

## Querying outputs from HCP Terraform

You can query Terraform Enterprise for values from a remote state file using the `tfe_outputs` data source.
A token is required and should be set as an environment variable (`TFE_TOKEN`).

```hcl
data "tfe_outputs" "previous_step_outputs" {
  organization = var.organization
  workspace    = var.workspace
}
```

# Windows Services

Source: https://octopus.com/docs/deployments/windows/windows-services.md

Octopus Deploy includes first class support for Windows Service deployments. Octopus can install, reconfigure, and start Windows Services during deployment, usually without requiring any custom scripts. When deploying, `sc.exe` is used to create a Windows Service using the configured settings. If the service already exists, it will be stopped, re-configured, and re-started. To deploy a Windows Service, add a *Deploy a Windows Service* step. For information about adding a step to the deployment process, see the [add step](/docs/projects/steps) section.

## Configuring the step {#WindowsServices-ConfiguringTheStep}

:::figure
![Windows service configuration](/docs/img/deployments/windows/images/windows-service-configuration.png)
:::

### Step 1: Select a package {#WindowsServices-Step1-SelectAPackage}

Use the Package Feed and Package ID fields to select the [package](/docs/packaging-applications) containing the executable (.exe) to be installed as a Windows Service.

### Step 2: Configure Windows Service options {#WindowsServices-Step2-ConfigureWindowsServiceOptions}

| Field | Meaning |
| ------------------- | ---------------------------------------- |
| **Service Name** | The name of the Windows Service to create, or re-configure if it already exists. |
| **Display Name** | Optional display name of the service. If empty, the Service Name will be used instead. |
| **Description** | A short description of the service that will appear in the services control manager. |
| **Executable path** | The relative path to the executable in the package that the Windows Service will point to. Examples: `MyService.exe`, `bin\MyService.exe`, `foo\bin\MyService.exe`, `C:\Windows\myservice.exe` |
| **Arguments** | Arguments that will always be passed to the service when it starts. |
| **Service account** | The account that the Windows Service should run under. Options are: Local System, Network Service, Local Service, Custom user (you can specify the username and password). See below for [Security Considerations](#WindowsServices-SecurityConsiderations) and [Using Managed Service Accounts (MSA)](#WindowsServices-UsingManagedServiceAccounts(MSA)). |
| **Start mode** | When the service will start: Automatic, Automatic (delayed), Manual, Disabled, Unchanged. |
| **State** | The state of the service after the deployment has completed. |
| **Dependencies** | Any dependencies that the service has. Separate the names using forward slashes (/). For example: `LanmanWorkstation/TCPIP` |

## How does Octopus actually deploy my Windows Service? {#WindowsServices-HowDoesOctopusActuallyDeployMyWindowsService?}

Out of the box, Octopus will do the right thing to deploy your Windows Service, and the conventions we have chosen will eliminate a lot of problems with file locks and leaving stale files behind. By default, Octopus will follow the conventions described in [Deploying packages](/docs/deployments/packages/) and apply the different features you select in the order described in [Package deployment feature ordering](/docs/deployments/packages/package-deployment-feature-ordering).

:::div{.success}
Avoid using the [Custom Installation Directory](/docs/projects/steps/configuration-features/custom-installation-directory) feature unless you are absolutely required to put your packaged files into a specific physical location on disk.
:::

As an approximation, including the Windows Service manager integration, the deployment proceeds as follows:

1. Acquire the package as optimally as possible (local package cache and [delta compression](/docs/deployments/packages/delta-compression-for-package-transfers)).
2. Stop your Windows Service if it is already running. Ensure that the user account running the Octopus Tentacle has the appropriate permissions to start/stop the Windows Service or this step may fail.
3. Create a new folder for the deployment (which avoids many common problems like file locks and leaving stale files behind). Example: `C:\Octopus\Applications\[Tenant name]\[Environment name]\[Package name]\[Package version]\` (where `C:\Octopus\Applications` is the Tentacle application directory you configured when installing Tentacle).
4. Extract the package into the newly created folder.
5. Execute each of your [custom scripts](/docs/deployments/custom-scripts/) and the [deployment features](/docs/deployments/) you've configured, [following this order by convention](/docs/deployments/packages/package-deployment-feature-ordering).
6. As part of this process, the Windows Service will be created, or reconfigured if it already exists, including updating the **binPath** to point to this folder and your executable entry point.
7. The service will be started based on the selected `State` option using the rules in the table below.
8. [Output variables](/docs/projects/variables/output-variables/) and deployment [artifacts](/docs/projects/deployment-process/artifacts) from this step are sent back to the Octopus Server.

:::div{.success}
You can see exactly how Octopus deploys your Windows Service by looking at the scripts in our [open-source Calamari](https://github.com/OctopusDeploy/Calamari) project which actually performs the deployment:

- [Octopus.Features.WindowsService\_AfterPreDeploy.ps1](https://github.com/OctopusDeploy/Calamari/blob/master/source/Calamari/Scripts/Octopus.Features.WindowsService_AfterPreDeploy.ps1)
- [Octopus.Features.WindowsService\_BeforePostDeploy.ps1](https://github.com/OctopusDeploy/Calamari/blob/master/source/Calamari/Scripts/Octopus.Features.WindowsService_BeforePostDeploy.ps1)

You can inject your own logic into this process using [custom scripts](/docs/deployments/custom-scripts/) and understanding where your scripts will execute in the context of [package deployment feature ordering](/docs/deployments/packages/package-deployment-feature-ordering).
:::

This table shows how the combination of the `Start Mode`, `State`, and the state of any existing service determines if the service will be started or left stopped after the deployment is completed.

| Start Mode | State | Existing Service Exists | Existing Service State | Resulting State |
|-|-|-|-|-|
| Disabled | n/a | n/a | n/a | Stopped |
| Automatic / Automatic (delayed) | Started | n/a | n/a | Running |
| Manual | Started | n/a | n/a | Running |
| Unchanged | Started | n/a | n/a | Running |
| Automatic / Automatic (delayed) | Stopped | n/a | n/a | Stopped |
| Manual | Stopped | n/a | n/a | Stopped |
| Unchanged | Stopped | n/a | n/a | Stopped |
| Automatic / Automatic (delayed) | Unchanged | Exists | Running | Running |
| Manual | Unchanged | Exists | Running | Running |
| Unchanged | Unchanged | Exists | Running | Running |
| Automatic / Automatic (delayed) | Unchanged | Exists | Stopped | Stopped |
| Manual | Unchanged | Exists | Stopped | Stopped |
| Unchanged | Unchanged | Exists | Stopped | Stopped |
| Automatic / Automatic (delayed) | Unchanged | Does not exist | n/a | Stopped |
| Manual | Unchanged | Does not exist | n/a | Stopped |
| Unchanged | Unchanged | Does not exist | n/a | Stopped |
| Automatic / Automatic (delayed) | Default | n/a | n/a | Running |
| Manual | Default | n/a | n/a | Stopped |
| Unchanged | Default | n/a | n/a | Stopped |

## Setting advanced configuration options {#WindowsServices-SettingAdvancedConfigurationOptions}

Windows Services support some advanced settings not exposed by this feature. You can customize your Windows Service by including a `PostDeploy.ps1` [custom script](/docs/deployments/custom-scripts).
This example configures the service **Failure Action** to **Restart**.

**PostDeploy.ps1**

```powershell
$serviceName = $OctopusParameters["Octopus.Action.WindowsService.ServiceName"]
& "sc.exe" failure $serviceName reset= 30 actions= restart/5000
```

:::div{.success}
This script will run after the Windows Service has been created (or reconfigured), and then started. If you want to customize the Windows Service before it is started, set the Start Mode to Manual, and then start the Windows Service yourself as part of your custom script.
:::

:::div{.success}
**Using sc.exe**

This Microsoft TechNet [article](https://technet.microsoft.com/en-us/library/cc754599.aspx) is a great reference on the sc.exe utility, including the failure action above.
:::

## Deploying Services built with Topshelf {#WindowsServices-DeployingServicesBuiltWithTopshelf}

Topshelf is a library that makes it easy to build and work with Windows Services by allowing your code to run (and be debugged) inside a Console Application, while giving you the option to install and run it as a Windows Service. While Topshelf has its own command line options to make service registration easy, you can still use SC.EXE. This means that deploying a Topshelf-enabled application as a Windows Service is easy using the Octopus service deploy feature. The only caveat is that the value you specify in the Service Name parameter must match the Service Name specified in your Topshelf configuration code (in Program.cs) or the service will not start.

## Security considerations {#WindowsServices-SecurityConsiderations}

You will need to consider carefully which Service Account you choose for your Windows Service. If you decide to use a Custom Account, you will need to make sure the account is granted the **Logon as a Service** logon right (**SeServiceLogonRight**). When you use the Services snap-in console to configure your Windows Service, the **SeServiceLogonRight** logon right is automatically assigned to the account.
If you use the Sc.exe tool or APIs to configure the account (as Octopus Deploy does on your behalf), the account has to be explicitly granted this right by using tools such as the [Carbon PowerShell module](http://get-carbon.org/), the Security Policy snap-in (secpol.msc), `Secedit.exe`, or `NTRights.exe`. The built-in Windows Service accounts (`Local System`, `Network Service`, `Local Service`), and members of the **Local Administrators** group, are assigned this right by default.

### Carbon PowerShell module {#WindowsServices-CarbonPowerShellModule}

[Carbon](http://get-carbon.org/) is a PowerShell module that can be installed via [Chocolatey](https://chocolatey.org/packages/carbon), the PowerShell Gallery, or manually. For the PowerShell Gallery on PowerShell 5 or higher, you can run `Install-Module Carbon`; to install manually, visit their site and clone the repository or download a zip from the Releases page.

Carbon PowerShell script example:

```ps
# The Octopus variables below are just examples
# Use your own #{RunAsUser} or similar variable name.

# The SeServiceLogonRight lets accounts run as a service but isn't mutually exclusive
# with the "DenyInteractiveLogon" policy to prevent regular users from abusing it

# Check to see if the given account has the appropriate permission
Test-Privilege -Identity #{ServiceAccountName} -Privilege SeServiceLogonRight # Returns False if account doesn't have permission
Grant-Privilege -Identity #{ServiceAccountName} -Privilege SeServiceLogonRight # May not return anything
Test-Privilege -Identity #{ServiceAccountName} -Privilege SeServiceLogonRight # Returns True once the account has permission

# The SeBatchLogonRight lets accounts trigger scheduled tasks

# Check to see if the given account has the appropriate permission
Test-Privilege -Identity #{ServiceAccountName} -Privilege SeBatchLogonRight # Returns False
Grant-Privilege -Identity #{BatchAccountName} -Privilege SeBatchLogonRight # May not return anything
Test-Privilege -Identity #{ServiceAccountName} -Privilege SeBatchLogonRight # Returns True once the account has permission
```

## Using managed service accounts (MSA) {#WindowsServices-UsingManagedServiceAccounts(MSA)}

> Managed Service Accounts (MSA) allow you to eliminate those never-expire-service-accounts. An MSA is a special domain account that can be managed by the computer that uses it. That computer will change its password periodically without the need of an administrator.

To configure the Windows Service to use a Managed Service Account:

1. Set the **Service account** to **Custom user...**.
2. Enter the domain name and username, **making sure to append a $ to the username** as shown below.
3. Bind the **Custom account password** to an **empty value** to ensure no password is set for this account - after all, we want the password managed by the server, not us.
:::figure ![Windows service startup](/docs/img/deployments/windows/images/windows-service-startup.png) ::: :::div{.hint} **Important information about using Managed Service Accounts** There must be a dollar sign ($) at the end of the account name. When you use the Services snap-in console to configure your Windows Service, the **SeServiceLogonRight** logon right is automatically assigned to the account. If you use the Sc.exe tool or APIs to configure the account (like Octopus Deploy does on your behalf), the account has to be explicitly granted this right by using tools such as the [Carbon PowerShell module](http://get-carbon.org/), the Security Policy snap-in (secpol.msc), Secedit.exe, or NTRights.exe. Learn about [Managed Service Accounts](https://technet.microsoft.com/en-us/library/dd560633(v=ws.10).aspx). ::: # First Deployment Source: https://octopus.com/docs/getting-started/first-deployment.md 👋 Welcome to Octopus Deploy! This tutorial series will help you complete your first deployment in Octopus Deploy. We’ll walk you through the steps to deploy a sample [hello world package](https://octopus.com/images/docs/hello-world.1.0.0.zip) to one or more of your servers. :::div{.hint} If you're using **Octopus 2024.2** or earlier, please visit one of these legacy guides: - **Octopus 2022.2** or earlier [read the 2022 guide](/docs/getting-started/first-deployment/legacy-guide/2022) - **Octopus 2024.2** or earlier [read the 2024 guide](/docs/getting-started/first-deployment/legacy-guide/2024) ::: ## Before you start To follow this tutorial, you will need an Octopus Deploy instance. If you haven’t already, set up an instance using one of these methods: - [Octopus Cloud](https://octopus.com/free-signup) -> we host the Octopus Deploy instance for you, it connects to your servers. 
- [Self-hosted on a Windows Server](https://octopus.com/free-signup) -> you host it on your infrastructure by [downloading our MSI](https://octopus.com/download) and installing it onto a Windows Server with a SQL Server backend. Learn more about [our installation requirements](/docs/installation/requirements). - [Self-hosted as a Docker container](https://octopus.com/blog/introducing-linux-docker-image) -> you run Octopus Deploy in a Docker container. You will still need a [free license](https://octopus.com/free-signup). ## Log in to Octopus 1. Log in to your Octopus instance and click **New Project**. :::figure ![Get started welcome screen](/docs/img/getting-started/first-deployment/images/get-started.png) ::: ## Add project Projects let you manage software applications and services, each with their own deployment process. 2. Give your project a descriptive name, for example, `Hello world deployment`. Octopus lets you store your deployment process, settings, and non-sensitive variables in either Octopus or a Git repository. 3. For this example, keep the default **Octopus** option selected. 4. Leave the rest as is and click **Create Project**. :::figure ![Add new project screen](/docs/img/getting-started/first-deployment/images/add-new-project.png) ::: ## Add environments You'll need an environment to deploy to. Environments are how you organize your infrastructure into groups representing the different stages of your deployment pipeline. For example, Development, Staging, and Production. 5. Keep the default environments and click **Create Environments**. :::figure ![Environment selection options and deployment lifecycle visuals](/docs/img/getting-started/first-deployment/images/select-environments.png) ::: ## Create deployment process The next step is creating your deployment process. This is where you define the steps that Octopus uses to deploy your software. For this deployment, we will configure one step to print _Hello World_. 1. 
In the "Welcome to your Project" dialog, click **Thanks, got it**. Octopus lands you in the process step template library. 2. In the **Featured** category, locate the Run a Script card and click **Add Step**. :::figure ![Add Run a Script step to deployment process](/docs/img/getting-started/first-deployment/images/run-script-step.png) ::: ### Step Name You can leave this as the default _Run a Script_. ### Script Source You can source script files via 3 methods: - Inline script (default) - Git repository - Package 3. Select **Inline script** as your script source. :::figure ![Script source expander where users can select where to source scripts from](/docs/img/getting-started/first-deployment/images/script-source.png) ::: ### Inline Source Code 4. Select an appropriate script language. 5. **Copy** the script below and **paste** it into the source code editor. ``` Write-Host "Hello, World!" ``` :::figure ![Inline source code expander where users can type a script](/docs/img/getting-started/first-deployment/images/inline-source-code.png) ::: ### Execution Location 6. If you’re using Octopus Cloud, select **Run once on a worker**. 7. If you’re using a self-hosted Octopus instance, select **Run once on the Octopus Server**. :::figure ![Execution location expander where users can select where this step will run](/docs/img/getting-started/first-deployment/images/execution-location.png) ::: ### Worker Pool 8. If you’re using Octopus Cloud, keep the default **Runs on a worker from a specific worker pool** option selected. :::div{.hint} If you’re using Octopus Cloud and a Bash script language, select the Hosted Ubuntu option from the dropdown. The Default Worker Pool is running Windows and doesn’t have Bash installed. ::: :::figure ![Execution location expander where users can select a worker pool](/docs/img/getting-started/first-deployment/images/worker-pool.png) ::: You can skip the other sections of this page for this tutorial. 
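The inline script above is written in PowerShell. If you chose Bash as the script language instead (for example, to run on the Hosted Ubuntu worker pool on Octopus Cloud), the equivalent inline script is:

```shell
# Bash equivalent of the PowerShell hello-world script
echo "Hello, World!"
```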
**Save** your step and you can move on to create and deploy a release. ## Release and deploy ### Create release A release is a snapshot of the deployment process and the associated assets (packages, scripts, variables) as they exist when the release is created. 1. Click the **Create Release** button. 2. Click **Save**. ### Execute deployment Deployments typically occur in a defined environment order (for example, Development ➜ Staging ➜ Production), starting with the first one. Later you can configure Lifecycles with complex promotion rules to accurately reflect how you want to release software. 1. Click the **Deploy to Development…** button to deploy to the development environment. 2. Review the preview summary and when you’re ready, click **Deploy**. The **Task Log** will show you in real-time the tasks Octopus is taking to run your Hello World script. :::figure ![Successful first deployment message](/docs/img/getting-started/first-deployment/images/successful-deployment.png) ::: You successfully completed your first deployment! 🎉 Up next, we’ll introduce you to the [power of variables](/docs/getting-started/first-deployment/define-and-use-variables). :::div{.hint} **⭐ Favoriting projects** To easily navigate back to a project, you can mark it as a favorite by clicking the star icon next to the project name. Favorited projects will appear in the Deploy menu. :::figure ![How to favorite a project](/docs/img/getting-started/first-deployment/images/add-favourite-project.png) ::: ### All guides in this tutorial series 1. First deployment (this page) 2. [Define and use variables](/docs/getting-started/first-deployment/define-and-use-variables) 3. [Approvals with manual interventions](/docs/getting-started/first-deployment/approvals-with-manual-interventions) 4. [Add deployment targets](/docs/getting-started/first-deployment/add-deployment-targets) 5. 
[Deploy a sample package](/docs/getting-started/first-deployment/deploy-a-package) ### Further Reading - [Deployment Examples](/docs/deployments) # First Deployment (2022.2 and below) Source: https://octopus.com/docs/getting-started/first-deployment/legacy-guide/2022.md :::div{.hint} **Other versions of this guide** * **Octopus 2024.3** or newer [First deployment guide](/docs/getting-started/first-deployment) * **Octopus 2022.3** or newer [First deployment guide](/docs/getting-started/first-deployment/legacy-guide/2024) ::: This tutorial will help you complete your first deployment in Octopus Deploy. It will walk through the steps to deploy a sample [hello world package](https://octopus.com/images/docs/hello-world.1.0.0.zip) to one or more of your servers. The only prerequisite is a running Octopus Deploy instance, either in Octopus Cloud or self-hosted. The tutorial assumes you have a brand new instance running and will walk through the rest of the setup, including configuring deployment targets. This tutorial will take between **25-35 minutes** to complete, with each step taking between **2-5** minutes to complete. 1. [Configure environments](/docs/getting-started/first-deployment/legacy-guide/2022/configure-environments) 2. [Create a project](/docs/getting-started/first-deployment/legacy-guide/2022/create-projects) 3. [Define the deployment process](/docs/getting-started/first-deployment/legacy-guide/2022/define-the-deployment-process) 4. [Create a release and deploy it](/docs/getting-started/first-deployment/legacy-guide/2022/create-and-deploy-a-release) 5. [Define and use variables](/docs/getting-started/first-deployment/define-and-use-variables) 6. [Approvals with manual interventions](/docs/getting-started/first-deployment/approvals-with-manual-interventions) 7. [Add deployment targets](/docs/getting-started/first-deployment/add-deployment-targets) 8. 
[Deploy a package to the deployment targets](/docs/getting-started/first-deployment/deploy-a-package) Before starting the tutorial, if you haven't set up an Octopus Deploy instance, please do so by picking from one of the following options: - [Octopus Cloud](https://octopus.com/free-signup) -> we host the Octopus Deploy instance for you, it connects to your servers. - [Self-hosted on a Windows Server](https://octopus.com/free-signup) -> you host it on your infrastructure by [downloading our MSI](https://octopus.com/download) and installing it onto a Windows Server with a SQL Server backend. Learn more about [our installation requirements](/docs/installation/requirements). - [Self-hosted as a Docker container](https://octopus.com/blog/introducing-linux-docker-image) -> you run Octopus Deploy in a Docker container. You will still need a [free license](https://octopus.com/free-signup). When you have an instance running, go to the [configure environments page](/docs/getting-started/first-deployment/legacy-guide/2022/configure-environments) to get started. **Further Reading** This tutorial will deploy a sample package to your servers. If you prefer to skip that and start configuring Octopus Deploy for your application right away, please see: - [Customizable End-to-End CI/CD pipeline tutorial](https://octopus.com/docs/guides) - [Deployment Examples](/docs/deployments) # Configure Environments Source: https://octopus.com/docs/getting-started/first-deployment/legacy-guide/2022/configure-environments.md [Getting Started - Create Environments](https://www.youtube.com/watch?v=tPb6CLHyNLA) Octopus organizes the servers and services where you deploy your software into environments. Typical environments are **Dev**, **Test**, and **Production**, and they represent the stages of your deployment pipeline. :::figure ![Typical environments in the Octopus Web Portal](/docs/img/shared-content/concepts/images/environments.png) ::: 1. 
To create an environment, in the Octopus Web Portal, navigate to **Infrastructure ➜ Environments** and click **ADD ENVIRONMENT**. 1. Give your new environment a meaningful name, for instance, *Test*, and click **SAVE**. You now have your first environment. The next step will [create a project](/docs/getting-started/first-deployment/legacy-guide/2022/create-projects). **Further Reading** For further reading on environments in Octopus Deploy please see: - [Environments](/docs/infrastructure/environments) - [Deployment Documentation](/docs/deployments) - [Patterns and Practices](/docs/deployments/patterns) # First Deployment Source: https://octopus.com/docs/getting-started/first-deployment/legacy-guide/2024.md 👋 Welcome to Octopus Deploy! This tutorial will help you complete your first deployment in Octopus Deploy. We'll walk you through the steps to deploy a sample [hello world package](https://octopus.com/images/docs/hello-world.1.0.0.zip) to one or more of your servers. All you'll need is a running Octopus Deploy instance. :::div{.hint} **Other versions of this guide** * **Octopus 2022.2** or earlier [First deployment guide](/docs/getting-started/first-deployment/legacy-guide/2022) * **Octopus 2024.3** or newer [First deployment guide](/docs/getting-started/first-deployment) ::: Before starting the tutorial, if you haven't set up an Octopus Deploy instance, please do so by picking from one of the following options: - [Octopus Cloud](https://octopus.com/free-signup) -> we host the Octopus Deploy instance for you, it connects to your servers. - [Self-hosted on a Windows Server](https://octopus.com/free-signup) -> you host it on your infrastructure by [downloading our MSI](https://octopus.com/download) and installing it onto a Windows Server with a SQL Server backend. Learn more about [our installation requirements](/docs/installation/requirements). 
- [Self-hosted as a Docker container](https://octopus.com/blog/introducing-linux-docker-image) -> you run Octopus Deploy in a Docker container. You will still need a [free license](https://octopus.com/free-signup). Log in to your Octopus instance to get started. You'll be guided through the process of deploying your first application. Click **GET STARTED** to begin your deployment journey. :::figure ![GET STARTED](/docs/img/getting-started/first-deployment/legacy-guide/images/img-getstarted.png) ::: ## Add project Projects let you manage software applications and services, each with its own deployment process. Give your project a descriptive name and click **SAVE**. :::figure ![Add project](/docs/img/getting-started/first-deployment/legacy-guide/images/img-addprojectdialog.png) ::: ## Add Environments You will need an environment to deploy to. Environments are how you organize your infrastructure, on prem or cloud, into groups that represent the different stages of your deployment pipeline, for instance, dev, test, and production. Select the environments you'd like to create and click **SAVE**. ![Add Environments](/docs/img/getting-started/first-deployment/legacy-guide/images/img-createenvironmentdialog.png) ## Project questionnaire You have the option to fill out a short survey. This helps our team learn about the technologies our customers are using, which guides the future direction of Octopus. It should only take about 30 seconds to complete. Click **SUBMIT** and you will be taken to your project. :::figure ![Project questionnaire](/docs/img/getting-started/first-deployment/legacy-guide/images/img-questionnairedialog.png) ::: ## Create deployment process The next step in the journey is to create your deployment process. This is where you define the steps that Octopus uses to deploy your software. For our simple Hello World deployment, we will configure one step to print _Hello World_. 
:::figure ![Create deployment process](/docs/img/getting-started/first-deployment/legacy-guide/images/img-createdeploymentprocess.png) ::: 1. Click **CREATE PROCESS** to see the available deployment steps. 2. Select the **Script** tile to filter the types of steps. 3. Scroll down and click **ADD** on the **Run a Script** tile. 4. Accept the default name for the script. 5. In the Execution Location section, select **Run once on a worker** if you're using Octopus Cloud, or **Run once on the Octopus Server** if you're using a self-hosted Octopus instance. 6. Scroll down to the Script section, select your script language of choice, and enter the following script in the Inline Source Code section. ``` Write-Host "Hello, World!" ``` :::div{.hint} If you are using Octopus Cloud, Bash scripts require you to select the Hosted Ubuntu worker pool. The Default Worker Pool is running Windows and doesn't have Bash installed. ::: 7. Click **SAVE** to save the step. ## Release and deploy Next, we will create a release that we can deploy to our environments. A release is a snapshot of the deployment process and the associated assets (packages, scripts, variables) as they exist when the release is created. :::figure ![Release and deploy](/docs/img/getting-started/first-deployment/legacy-guide/images/img-createrelease.png) ::: 1. Click **CREATE RELEASE**. 2. Give the release a version number such as 0.0.1. You have the option to add release notes. Click **SAVE**. 3. After reviewing the details, click **DEPLOY TO DEVELOPMENT**. 4. This screen gives you the details of the release you are about to deploy. Click **DEPLOY** to start the deployment. The deployment progress page displays a task summary. If you click **TASK LOG**, you'll see the steps Octopus is taking to execute your hello world script. Success, you have finished your first deployment! 
🎉 :::figure ![First deployment](/docs/img/getting-started/first-deployment/legacy-guide/images/img-successfulrelease.png) ::: Take a moment to enjoy your first deployment! The next step will introduce you to the power of variables. Subsequent pages in the guide: 1. [Define and use variables](/docs/getting-started/first-deployment/define-and-use-variables) 2. [Approvals with manual interventions](/docs/getting-started/first-deployment/approvals-with-manual-interventions) 3. [Add deployment targets](/docs/getting-started/first-deployment/add-deployment-targets) 4. [Deploy a sample package](/docs/getting-started/first-deployment/deploy-a-package) **Further Reading** This tutorial will deploy a sample package to your servers. If you prefer to skip that and start configuring Octopus Deploy for your application right away, please see: - [Customizable End-to-End CI/CD pipeline tutorial](https://octopus.com/docs/guides) - [Deployment Examples](/docs/deployments) # Configure Runbook Environments Source: https://octopus.com/docs/getting-started/first-runbook-run/configure-runbook-environments.md [Getting Started - Create Environments](https://www.youtube.com/watch?v=tPb6CLHyNLA) Octopus organizes the servers and services where you deploy your software into environments. Typical environments are **Dev**, **Test**, and **Production**, and they represent the stages of your deployment pipeline. :::figure ![Typical environments in the Octopus Web Portal](/docs/img/shared-content/concepts/images/environments.png) ::: 1. To create an environment, in the Octopus Web Portal, navigate to **Infrastructure ➜ Environments** and click **ADD ENVIRONMENT**. 1. Give your new environment a meaningful name, for instance, *Test*, and click **SAVE**. You now have your first environment. :::div{.hint} Try to reuse the same environments as your deployments whenever possible. You will often run runbooks on the same deployment targets as your deployment process. 
Creating runbook only environments can saturate your dashboards and lifecycles. If you need to have an environment for runbooks, we recommend limiting it to one or two environments at most with a name similar to `Maintenance`. ::: The next step will [create a project to house the runbook](/docs/getting-started/first-runbook-run/create-runbook-projects). **Further Reading** For further reading on deployment targets in Octopus Deploy please see: - [Deployment Targets](/docs/infrastructure/deployment-targets) - [Runbook Documentation](/docs/runbooks) - [Runbook Examples](/docs/runbooks/runbook-examples) # Azure Source: https://octopus.com/docs/infrastructure/accounts/azure.md You can deploy software to the Azure cloud by adding your Azure subscription to Octopus. With an active Azure subscription, you can use Octopus to deploy to [Azure Cloud Service](/docs/infrastructure/deployment-targets/azure/cloud-service-targets/) targets, [Azure Service Fabric](/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets/) targets, and [Azure Web App](/docs/infrastructure/deployment-targets/azure/web-app-targets) targets. Before you can deploy software to Azure, you need to add your Azure subscription to Octopus Deploy. ## Microsoft Entra ID account authentication method {#CreatingAnAzureAccount-AuthenticationMethod} When you add an Azure account to Octopus, there are two ways to authenticate with Azure in Octopus. These represent the different interfaces in Azure, and the interface you need will dictate which authentication method you use. - [Service Principal](#azure-service-principal) (default) is used with resource manager mode (ARM), along with the `az` command line interface. - [Management certificate](#azure-management-certificate) is used with service management mode (ASM) or legacy mode. You can read about the differences in [this document](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/deployment-models). 
Microsoft Entra ID Service Principal accounts are for use with the **Azure Resource Management (ARM) API** only. Configuring your Octopus Server to authenticate with the service principal you create in Microsoft Entra ID will let you configure fine-grained authorization for your Octopus Server.

:::div{.warning}
Management Certificates are used to authenticate with Service Management APIs, which are being deprecated by Microsoft. See our [blog post](https://octopus.com/blog/azure-management-certs) for more details. Instructions remain only for legacy purposes. Please migrate to service principals as soon as possible.
:::

## Creating a Microsoft Entra ID Service Principal account {#azure-service-principal}

Before creating an Octopus Microsoft Entra ID Service Principal account, you will need a Microsoft Entra ID App Registration. If you do not currently have a Microsoft Entra ID App Registration, follow the [App Registration](https://oc.to/create-azure-app-registration) guide, or create it with a [script](#create-app-registration-via-script).

After creating the App Registration, make a note of the following:

- **Subscription ID**
- **Tenant ID**
- **Application ID**

There are two supported types of credentials that allow your Octopus instance to authenticate with a Microsoft Entra ID Service Principal: Client Secrets and Federated Credentials.

### Create a client secret credential for a Microsoft Entra ID Service Principal

To manually create a client secret, follow the [Add a client secret](https://oc.to/create-azure-credentials) section in the Microsoft Entra ID documentation, or create it with a [script](#create-a-client-secret-via-script). Following this process you will be given the client secret; make a note of it, as you cannot retrieve it afterward.

Next, you need to configure your [resource permissions](#resource-permissions).
### Create a federated credential for a Microsoft Entra ID Service Principal

:::div{.warning}
Support for OpenID Connect authentication to Microsoft Entra ID requires Octopus Server version 2023.4.
:::

To use OpenID Connect to authenticate with Microsoft Entra ID, you will need to create a federated credential for the Microsoft Entra ID Service Principal.

#### Octopus Server configuration

To use OpenID Connect authentication you have to follow the [required minimum configuration](/docs/infrastructure/accounts/openid-connect#configuration).

#### Microsoft Entra ID Service Principal configuration

To manually create a Federated Credential, follow the [Add a federated credential](https://oc.to/create-azure-federated-credentials) section in the Microsoft Entra ID documentation, or create it with a [script](#create-federated-credential-via-script). For more information on configuring external identity providers, see [Configure an app to trust an external identity provider](https://oc.to/configure-azure-identity-providers).

The federated credential will need the **Issuer** value set to the publicly accessible Octopus Server URI configured in the previous step. This value must not have a trailing slash (/), for example `https://samples.octopus.app`. Please read [OpenID Connect Subject Identifier](/docs/infrastructure/accounts/openid-connect#subject-keys) on how to customize the **Subject** value. The **Audience** value can be left at the default, or set to a custom value if needed.

#### Azure Tool support for OpenID Connect

To support OpenID Connect authentication, you will need to ensure it is supported in the versions of the tooling:

- az CLI requires 2.30+
- Az PowerShell modules require 7.0+
- AzureRM Terraform provider requires 3.22+

## Resource permissions {#resource-permissions}

The final step is to ensure your registered app has permission to work with your Azure resources.

1.
In the Azure Portal navigate to **Resource groups** and select the resource group(s) that you want the registered app to access.
1. Next, select the **Access Control (IAM)** option and if your app isn't listed, click **Add**. Select the appropriate role (**Contributor** is a common option) and search for your new application name. Select it from the search results and then click **Save**.

Now, you can [add the Service Principal Account in Octopus](#add-service-principal-account).

:::div{.hint}
Note on roles: Your Service Principal will need to be assigned the *Contributor* role in order to deploy. It will also need the *Reader* role on the subscription itself.
:::

### Note on least privilege {#note_on_least_privilege}

In the PowerShell and Permissions example above, the service principal is assigned the **Contributor** role on the subscription. This isn't always the best idea; you might want to apply the principle of least privilege to the access the service principal is granted. If that's the case, there are a couple of things worth noting.

Firstly, you might want to constrain the service principal to a single resource group, in which case you just need to assign it the **Contributor** role on the resource group.

Next, if you want to get even more granular, you can constrain the service principal to a single resource, e.g. a Web App. _In this case, you have to assign the **Contributor** role on the Web App and explicitly assign the **Reader** role on the subscription itself_. The reason behind this has to do with the way Octopus queries for the web app resources in Azure. In order to handle scenarios where [ASEs](/docs/deployments/azure/ase/#resource_groups) are being used, Octopus first queries the resource groups and then queries for the web apps within each resource group.
When the service principal is assigned **Contributor** on a resource group it seems to implicitly get **Reader** on the subscription, but this doesn't seem to be the case when **Contributor** is assigned directly to a web app, so you have to assign **Reader** explicitly.

### Create a Microsoft Entra ID App Registration via script {#create-app-registration-via-script}

This step shows you how to script the creation of a Microsoft Entra ID App Registration.

:::div{.hint}
During the script, you will be prompted to authenticate with Microsoft Azure. The authenticated user must have administrator permissions in the directory in which the Service Principal is being created.
:::
Az CLI

```bash
# This script will create a new Microsoft Entra ID App Registration
subscription='' # Replace with the name or id of your subscription
appName=''      # Replace with your app registration name

az login
az account set --subscription $subscription
az ad app create --display-name "$appName" -o table --query "{Id:id,Name:displayName,ClientId:appId}"
az account show --query "{Name:name,SubscriptionId:id,TenantId:tenantId}" -o table
```
Az PowerShell

```powershell
# This script will create a new Microsoft Entra ID App Registration
$AzureTenantId = "2a681dca-3230-4e01-abcb-b1fd225c0982" # Replace with your Tenant Id
$AzureSubscriptionName = "YOUR SUBSCRIPTION NAME" # Replace with your subscription name
$AzureApplicationName = "YOUR APPLICATION NAME" # Replace with your application name

if (Get-Module -Name Az -ListAvailable) {
    Write-Host "Azure Az Module found."
}
else {
    Write-Host "Azure Az Modules not found. Installing the Azure Az PowerShell Modules. You might be prompted that PSGallery is untrusted. If you select Yes your screen might freeze for a second while the modules download process is started."
    Install-Module -Name Az -AllowClobber -Scope CurrentUser
}

Write-Host "Loading the Azure Az Module. This may cause the screen to freeze while loading the module."
Import-Module -Name Az

Write-Host "Logging into Azure"
Connect-AzAccount -Tenant $AzureTenantId -Subscription $AzureSubscriptionName

$azureSubscription = Get-AzSubscription -SubscriptionName $AzureSubscriptionName
$ExistingApplication = Get-AzADApplication -DisplayName "$AzureApplicationName"

if ($null -eq $ExistingApplication) {
    Write-Host "The Microsoft Entra ID Application does not exist, creating Microsoft Entra ID application"
    $azureAdApplication = New-AzADApplication -DisplayName "$AzureApplicationName"
    Write-Host "Microsoft Entra ID App Registration successfully created"
    $AzureApplication = $azureAdApplication
}
else {
    Write-Host "The Microsoft Entra ID service principal $AzureApplicationName already exists"
    $AzureApplication = $ExistingApplication
}

Write-Host "Important information to know when registering this subscription with Octopus Deploy:"
Write-Host " 1) The Microsoft Entra ID Tenant Id is: $AzureTenantId"
Write-Host " 2) The Microsoft Azure Subscription Id: $($azureSubscription.SubscriptionId)"
Write-Host " 3) The Microsoft Entra ID Application Id: $($AzureApplication.AppId)"
```
### Create a Service Principal Client Secret with PowerShell {#create-a-client-secret-via-script} This step shows you how to create a Service Principal Client Secret with the script below. :::div{.hint} During the script, you will be prompted to authenticate with Microsoft Azure. The authenticated user must have administrator permissions in the directory in which the Service Principal is being created. :::
Az CLI

```bash
# This script will create a new client secret for you to use in Octopus Deploy using the Az CLI.
subscription='' # Replace with the name or id of your subscription
appId=''        # Replace with the id of your application registration
expiryYears=1

az login
az account set --subscription $subscription
az ad app credential reset --append --id $appId --years $expiryYears
```
Az PowerShell

```powershell
# This script will create a new client secret for you to use in Octopus Deploy using the Az PowerShell modules. This will work with both PowerShell and PowerShell Core.
$AzureTenantId = "2a681dca-3230-4e01-abcb-b1fd225c0982" # Replace with your Tenant Id
$AzureSubscriptionName = "YOUR SUBSCRIPTION NAME" # Replace with your subscription name
$AzureApplicationName = "YOUR APPLICATION NAME" # Replace with your application name
$AzurePasswordEndDays = "365" # Update to change the expiration date of the password

if (Get-Module -Name Az -ListAvailable) {
    Write-Host "Azure Az Module found."
}
else {
    Write-Host "Azure Az Modules not found. Installing the Azure Az PowerShell Modules. You might be prompted that PSGallery is untrusted. If you select Yes your screen might freeze for a second while the modules download process is started."
    Install-Module -Name Az -AllowClobber -Scope CurrentUser
}

Write-Host "Loading the Azure Az Module. This may cause the screen to freeze while loading the module."
Import-Module -Name Az

Write-Host "Logging into Microsoft Entra ID"
Connect-AzAccount -Tenant $AzureTenantId -Subscription $AzureSubscriptionName

$endDate = (Get-Date).AddDays($AzurePasswordEndDays)
$azureSubscription = Get-AzSubscription -SubscriptionName $AzureSubscriptionName
$ExistingApplication = Get-AzADApplication -DisplayName "$AzureApplicationName"

if ($null -eq $ExistingApplication) {
    Write-Host "Unable to find application with name '$AzureApplicationName'"
}
else {
    Write-Host "The Microsoft Entra ID service principal $AzureApplicationName already exists, creating a new password for Octopus Deploy to use."

    $credential = New-Object Microsoft.Azure.PowerShell.Cmdlets.Resources.MSGraph.Models.ApiV10.MicrosoftGraphPasswordCredential
    $credential.EndDateTime = $endDate
    $credential.DisplayName = "$AzureApplicationName"
    $newCredential = New-AzADAppCredential -PasswordCredentials @($credential) -ApplicationId $ExistingApplication.AppId

    Write-Host "Microsoft Entra ID Service Principal password successfully created."
    Write-Host "Important information to know when registering this subscription with Octopus Deploy:"
    Write-Host " 1) The Microsoft Entra ID Tenant Id is: $AzureTenantId"
    Write-Host " 2) The Microsoft Azure Subscription Id: $($azureSubscription.SubscriptionId)"
    Write-Host " 3) The Microsoft Entra ID Application Id: $($ExistingApplication.AppId)"
    Write-Host " 4) The new password is: $($newCredential.SecretText) - this is the only time you'll see this password, please store it in a safe location."
}
```
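The four values the script prints map directly onto the `AzureServicePrincipal` account resource that the Octopus REST API accepts (the bootstrap script at the end of this page posts the same shape to the `accounts` endpoint). A minimal sketch of that payload, using placeholder IDs and names, that you can validate locally before posting it:

```shell
# All IDs, names and the secret below are placeholders - substitute the values the script printed.
cat > azure-account.json <<'EOF'
{
  "AccountType": "AzureServicePrincipal",
  "Name": "My Azure Account",
  "SubscriptionNumber": "dea39b53-1ac8-4adc-b291-a44b205921af",
  "TenantId": "2a681dca-3230-4e01-abcb-b1fd225c0982",
  "ClientId": "f83ece42-857d-44ed-9652-0765af7fa7d4",
  "Password": { "HasValue": true, "NewValue": "<the generated client secret>" },
  "TenantedDeploymentParticipation": "Untenanted",
  "EnvironmentIds": [],
  "TenantIds": [],
  "TenantTags": []
}
EOF

# Sanity-check the JSON before posting it to /api/{spaceId}/accounts with your API key
python3 -m json.tool azure-account.json
```

Leaving `EnvironmentIds` and `TenantIds` empty makes the account available everywhere; populate them to restrict its scope.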
- **Subscription ID**: The ID of the Azure subscription the account will interact with.
- **Password**: A secret value created by you. Make sure you record it, as you will need to enter it into Octopus Deploy.
- **Tenant ID**: The ID of the Microsoft Entra ID tenant. You can find this in the Azure Portal by navigating to **Microsoft Entra ID ➜ Properties** in the **Tenant ID** field.

The Service Principal will default to expiring in 1 year from the time of creation. You can specify the expiry date by adding the *-EndDate* parameter to the *New-AzureRmADApplication* command:

```powershell
-EndDate (new-object System.DateTime 2018, 12, 31)
```

Now, you can [add the Service Principal Account in Octopus](#add-service-principal-account). Consider reading our [note on least privilege first](#note_on_least_privilege).

### Create a Service Principal Client Secret with PowerShell {#create-a-client-secret-via-script}

This step shows you how to create a Service Principal Client Secret with the script below.

:::div{.hint}
During the script, you will be prompted to authenticate with Microsoft Azure. The authenticated user must have administrator permissions in the directory in which the Service Principal is being created.
:::
Az CLI

```bash
# This script will create a new federated credential for you to use in Octopus Deploy using the Az CLI.
subscription='' # Replace with the name or id of your subscription
appId=''        # Replace with the id of your application registration
credential='{
    "name": "Testing",
    "issuer": "https://oidc-client-test.testoctopus.app",
    "subject": "space:default:project:something",
    "description": "Testing",
    "audiences": [ "api://AzureADTokenExchange" ]
}'

az login
az account set --subscription "$subscription"
az ad app federated-credential create --id $appId --parameters "$credential"
```
Az PowerShell

```powershell
# This script will create a new client secret for you to use in Octopus Deploy using the Az PowerShell modules. This will work with both PowerShell and PowerShell Core.
$AzureTenantId = "2a681dca-3230-4e01-abcb-b1fd225c0982" # Replace with your Tenant Id
$AzureSubscriptionName = "YOUR SUBSCRIPTION NAME" # Replace with your subscription name
$AzureApplicationName = "YOUR APPLICATION NAME" # Replace with your application name
$AzurePasswordEndDays = "365" # Update to change the expiration date of the password

if (Get-Module -Name Az -ListAvailable) {
    Write-Host "Azure Az Module found."
}
else {
    Write-Host "Azure Az Modules not found. Installing the Azure Az PowerShell Modules. You might be prompted that PSGallery is untrusted. If you select Yes your screen might freeze for a second while the modules download process is started."
    Install-Module -Name Az -AllowClobber -Scope CurrentUser
}

Write-Host "Loading the Azure Az Module. This may cause the screen to freeze while loading the module."
Import-Module -Name Az

Write-Host "Logging into Azure"
Connect-AzAccount -Tenant $AzureTenantId -Subscription $AzureSubscriptionName

$endDate = (Get-Date).AddDays($AzurePasswordEndDays)
$azureSubscription = Get-AzSubscription -SubscriptionName $AzureSubscriptionName
$ExistingApplication = Get-AzADApplication -DisplayName "$AzureApplicationName"

if ($null -eq $ExistingApplication) {
    Write-Host "Unable to find application with name '$AzureApplicationName'"
}
else {
    Write-Host "The Microsoft Entra ID service principal $AzureApplicationName already exists, creating a new password for Octopus Deploy to use."

    $credential = New-Object Microsoft.Azure.PowerShell.Cmdlets.Resources.MSGraph.Models.ApiV10.MicrosoftGraphPasswordCredential
    $credential.EndDateTime = $endDate
    $credential.DisplayName = "$AzureApplicationName"
    $newCredential = New-AzADAppCredential -PasswordCredentials @($credential) -ApplicationId $ExistingApplication.AppId

    Write-Host "Microsoft Entra ID Service Principal password successfully created."
    Write-Host "Important information to know when registering this subscription with Octopus Deploy:"
    Write-Host " 1) The Microsoft Entra ID Tenant Id is: $AzureTenantId"
    Write-Host " 2) The Microsoft Azure Subscription Id: $($azureSubscription.SubscriptionId)"
    Write-Host " 3) The Microsoft Entra ID Application Id: $($ExistingApplication.AppId)"
    Write-Host " 4) The new password is: $($newCredential.SecretText) - this is the only time you'll see this password, please store it in a safe location."
}
```
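The JSON passed to `az ad app federated-credential create --parameters` above can also be kept in a file and validated before handing it to the Az CLI. A sketch with placeholder values; the `issuer` and `subject` must match the OIDC details shown by your own Octopus instance:

```shell
# Placeholder issuer/subject - copy the real values from your Octopus instance's OIDC configuration.
cat > credential.json <<'EOF'
{
  "name": "octopus-oidc",
  "issuer": "https://my-instance.octopus.app",
  "subject": "space:default:project:my-project",
  "description": "Octopus deployments for my-project",
  "audiences": [ "api://AzureADTokenExchange" ]
}
EOF

# Confirm the file parses before running:
#   az ad app federated-credential create --id $appId --parameters "$(cat credential.json)"
python3 -m json.tool credential.json
```

Keeping the credential in a file makes it easy to review the `subject` claim, which is what Azure matches against the token Octopus presents.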
- **Subscription ID**: The ID of the Microsoft Azure subscription the account will interact with.
- **Password**: A secret value created by you. Make sure you record it, as you will need to enter it into Octopus Deploy.
- **Tenant ID**: The ID of the Microsoft Entra ID tenant. You can find this in the Azure Portal by navigating to **Microsoft Entra ID ➜ Properties** in the **Tenant ID** field.

The Service Principal will default to expiring in 1 year from the time of creation. You can specify the expiry date by adding the *-EndDate* parameter to the *New-AzureRmADApplication* command:

```powershell
-EndDate (new-object System.DateTime 2018, 12, 31)
```

Now, you can [add the Service Principal Account in Octopus](#add-service-principal-account). Consider reading our [note on least privilege first](#note_on_least_privilege).

## Add the Service Principal account in Octopus {#add-service-principal-account}

Now that you have the following values, you can add your account to Octopus:

- Subscription ID
- Application ID
- Tenant ID
- Application Password/Key

1. Navigate to **Deploy ➜ Manage ➜ Accounts**.
1. Select **ADD ACCOUNT ➜ Azure Subscriptions**.
1. Give the account the name you want it to be known by in Octopus.
1. Give the account a description.
1. Add your Azure Subscription ID. This is found in the Azure portal under **Subscriptions**.
1. Add the **Application ID**, the **Tenant ID**, and the **Application Password/Key**.

Click **SAVE AND TEST** to confirm the account can interact with Azure. Octopus will then attempt to use the account credentials to access the Azure Resource Management (ARM) API and list the Resource Groups in that subscription. You may need to include the appropriate IP Addresses for the Azure Data Center you are targeting in any firewall allow list. See [deploying to Azure via a Firewall](/docs/deployments/azure) for more details.

:::div{.hint}
A newly created Service Principal may take several minutes before the credential test passes.
If you have double-checked your credential values, wait 15 minutes and try again.
:::

## Creating an Azure Management Certificate account {#azure-management-certificate}

Azure Management Certificate Accounts work with the **Azure Service Management API** only, which is used when Octopus deploys [Cloud Services](/docs/deployments/azure/cloud-services/) and [Azure Web Apps](/docs/deployments/azure/deploying-a-package-to-an-azure-web-app).

:::div{.warning}
The Azure Service Management APIs are being deprecated by Microsoft. See [this blog post](https://octopus.com/blog/azure-management-certs). The instructions below only exist for legacy purposes.
:::

To create an Azure Management Certificate account as part of adding an [Azure subscription](#adding-azure-subscription), select Management Certificate as the Authentication Method.

### Step 1: Management Certificate {#CreatingAnAzureManagementCertificateAccount-Step2-ManagementCertificate}

When using **Management Certificate**, Octopus authenticates with Azure using an X.509 certificate. You can either upload an existing certificate (`.pfx`), or leave the field blank and Octopus will generate a certificate. Keep in mind that since Octopus securely stores the certificate internally, there is no need to upload a password-protected `.pfx` file. If you would like to use one that is password protected, you will need to first remove the password. This can be done with the following commands.

**Remove .pfx password**

```powershell
openssl pkcs12 -in AzureCert.pfx -password pass:MySecret -nodes -out temp.pem
openssl pkcs12 -export -in temp.pem -passout pass: -out PasswordFreeAzureCert.pfx
del temp.pem
```

If Octopus generates your certificate, you need to upload the certificate to the Azure Management Portal. After clicking **Save**, the Account settings page provides instructions for downloading the certificate public-key from Octopus and uploading it into the Azure Management Portal.
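If you'd like to rehearse the password-removal commands above without touching a real certificate, you can run them against a throwaway self-signed certificate first (a local sketch; the subject name and passwords are arbitrary):

```shell
# Create a throwaway key and self-signed certificate to practice on
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=octopus-demo"

# Bundle them into a password-protected .pfx, like the one you might have for Azure
openssl pkcs12 -export -inkey key.pem -in cert.pem \
  -password pass:MySecret -out AzureCert.pfx

# Strip the password using the same two commands shown above
openssl pkcs12 -in AzureCert.pfx -password pass:MySecret -nodes -out temp.pem
openssl pkcs12 -export -in temp.pem -passout pass: -out PasswordFreeAzureCert.pfx
rm temp.pem

# The resulting .pfx now opens with an empty password
openssl pkcs12 -in PasswordFreeAzureCert.pfx -passin pass: -nokeys | grep "BEGIN CERTIFICATE"
```

The final command should print the certificate block without prompting for a password, confirming the `.pfx` is safe to upload to Octopus.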
Uploaded certificates can be viewed on the 'Management Certificates' tab of the 'Settings' page in the Azure Portal. The certificate will be named **Octopus Deploy - {Your Account Name}**.

### Step 2: Save and Test {#CreatingAnAzureManagementCertificateAccount-Step3-SaveAndTest}

Click **Save and Test**, and Octopus will attempt to use the account credentials to access the Azure Service Management (ASM) API and list the Hosted Services in that subscription. You may need to include the appropriate IP Addresses for the Azure Data Center you are targeting in any firewall allow list. See [deploying to Azure via a Firewall](/docs/deployments/azure) for more details.

You can now configure Octopus to deploy to Azure via the Azure Service Management (ASM) API.

## Azure account variables {#azure-account-variables}

You can access your Azure account from within projects through a variable of type **Azure Account**. Learn more about [Azure Account Variables](/docs/projects/variables/azure-account-variables/) and [Azure Deployments](/docs/deployments/azure).

## Automate Azure Service Principal creation and Octopus Deploy account registration {#azure-octopus-account-automate-creation}

The above scripts / steps can result in a lot of clicking back and forth. Below is a script that will do the following:

1. Create an Azure Service Principal.
1. Assign that Service Principal the role of `contributor` to the desired subscription.
1. Register that Service Principal and subscription in Octopus Deploy.

While parameters are present, they are not required. You will be prompted for each parameter while the script runs. This script is designed to run multiple times.

```powershell
# None of these parameters are required to start the script, you will be prompted at each stage to enter any values missing.
param (
    $OctopusURL,
    $OctopusApiKey,
    $OctopusSpaceName,
    $OctopusAccountName,
    $OctopusEnvironmentList,
    $OctopusTenantList,
    $AzureTenantId,
    $AzureSubscriptionName,
    $AzureServicePrincipalName,
    $AzureServicePrincipalPasswordEndDays
)

$ErrorActionPreference = "Stop"

function Write-OctopusSuccess {
    param($message)
    Write-Host $message -ForegroundColor Green
}

function Write-OctopusWarning {
    param($message)
    Write-Host $message -ForegroundColor Red
}

function Write-OctopusVerbose {
    param($message)
    Write-Host $message -ForegroundColor White
}

function Get-ParameterValue {
    param (
        $originalParameterValue,
        $parameterName
    )

    if ($null -ne $originalParameterValue -and [string]::IsNullOrWhiteSpace($originalParameterValue) -eq $false) {
        return $originalParameterValue
    }

    return Read-Host -Prompt "Please enter a value for $parameterName"
}

function Get-ParameterValueWithDefault {
    param (
        $originalParameterValue,
        $parameterName,
        $defaultValue
    )

    $returnValue = Get-ParameterValue -originalParameterValue $originalParameterValue -parameterName $parameterName

    if ([string]::IsNullOrWhiteSpace($returnValue) -eq $true) {
        return $defaultValue
    }

    return $returnValue
}

function Invoke-OctopusApi {
    param (
        $EndPoint,
        $SpaceId,
        $OctopusURL,
        $apiKey,
        $method,
        $item
    )

    $url = "$OctopusUrl/api/$spaceId/$EndPoint"

    if ([string]::IsNullOrWhiteSpace($SpaceId)) {
        $url = "$OctopusUrl/api/$EndPoint"
    }

    if ($null -eq $EndPoint -and $null -eq $SpaceId) {
        $url = "$OctopusUrl/api"
    }

    if ($null -eq $item) {
        Write-OctopusVerbose "Invoking GET $url"
        return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -ContentType 'application/json; charset=utf-8'
    }

    $body = $item | ConvertTo-Json -Depth 10
    Write-OctopusVerbose "Invoking $method $url"
    return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -Body $body -ContentType 'application/json; charset=utf-8'
}

function Get-OctopusItemByName {
    param (
        $ItemList,
        $ItemName
    )

    return ($ItemList | Where-Object {$_.Name -eq $ItemName})
}

function Import-AzurePowerShellModules {
    if (Get-Module -Name Az -ListAvailable) {
        Write-OctopusVerbose "Azure Az Module found."
    }
    else {
        Write-OctopusVerbose "Azure Az Modules not found. Installing the Azure Az PowerShell Modules. You might be prompted that PSGallery is untrusted. If you select Yes your screen might freeze for a second while the modules download process is started."
        Install-Module -Name Az -AllowClobber -Scope CurrentUser
    }

    Write-OctopusVerbose "Loading the Azure Az Module. This may cause the screen to freeze while loading the module."
    Import-Module -Name Az
}

function Get-OctopusSpaceInformation {
    param (
        $OctopusApiKey,
        $OctopusUrl,
        $OctopusSpaceName
    )

    Write-OctopusVerbose "Testing the supplied API credentials by pulling the space information"
    $spaceResults = Invoke-OctopusApi -EndPoint "spaces?skip=0&take=100000" -SpaceId $null -OctopusURL $OctopusURL -apiKey $OctopusApiKey -method "Get" -item $null
    $spaceInfo = Get-OctopusItemByName -ItemList $spaceResults.Items -ItemName $OctopusSpaceName

    if ($null -ne $spaceInfo -and $null -ne $spaceInfo.Id) {
        Write-OctopusSuccess "Successfully connected to the Octopus Deploy instance provided. The space id for $OctopusSpaceName is $($spaceInfo.Id)"
        return $spaceInfo
    }
    else {
        Write-OctopusWarning "Unable to connect to $OctopusUrl. Please check your credentials and try again."
        exit 1
    }
}

function Test-ExistingOctopusAccountWorksWithAzure {
    param (
        $OctopusApiKey,
        $OctopusUrl,
        $SpaceInfo,
        $ExistingAccount
    )

    Write-OctopusVerbose "The account already exists in Octopus Deploy. Running a test to ensure it can connect to Azure."
    $testAccountTaskBody = @{
        "Name" = "TestAccount"
        "Description" = "Test Azure account"
        "SpaceId" = $spaceInfo.Id
        "Arguments" = @{
            "AccountId" = $existingAccount.Id
        }
    }

    $checkConnectivityTask = Invoke-OctopusApi -EndPoint "tasks" -SpaceId $null -OctopusURL $OctopusURL -apiKey $OctopusApiKey -method "POST" -item $testAccountTaskBody
    $taskStatusEndPoint = "tasks/$($checkConnectivityTask.Id)"

    $taskState = $checkConnectivityTask.State
    $taskDone = $taskState -eq "Success" -or $taskState -eq "Canceled" -or $taskState -eq "Failed"

    While ($taskDone -eq $false) {
        Write-OctopusVerbose "Checking on the status of the task in 3 seconds"
        Start-Sleep -Seconds 3

        $taskStatus = Invoke-OctopusApi -EndPoint $taskStatusEndPoint -SpaceId $null -OctopusURL $OctopusURL -apiKey $OctopusApiKey -method "GET"
        $taskState = $taskStatus.State
        Write-OctopusVerbose "The task status is $taskState"

        $taskDone = $taskState -eq "Success" -or $taskState -eq "Canceled" -or $taskState -eq "Failed"
    }

    if ($taskState -eq "Success") {
        Write-OctopusSuccess "The Octopus Account can successfully connect to Azure"
        return $true
    }

    return $false
}

function New-OctopusIdList {
    param (
        $OctopusUrl,
        $OctopusApiKey,
        $spaceInfo,
        $endPoint,
        $itemName,
        $itemParameter
    )

    Write-OctopusVerbose "Checking to see if Octopus Deploy instance has $itemName"
    $allItemsList = Invoke-OctopusApi -EndPoint "$($endPoint)?skip=0&take=100000" -method "Get" -SpaceId $spaceInfo.Id -OctopusURL $OctopusUrl -apiKey $OctopusApiKey

    $IdList = @()

    if ($allItemsList.Items.Count -le 0) {
        return $IdList
    }

    Write-OctopusVerbose "$itemName records found."
    $itemFilter = Get-ParameterValue -originalParameterValue $itemParameter -parameterName "a comma-separated list of $itemName you'd like to associate the account to. If left blank the account can be used for all $itemName."

    if ([string]::IsNullOrWhiteSpace($itemFilter) -eq $true) {
        return $IdList
    }

    $itemList = $itemFilter -split ","

    foreach ($item in $itemList) {
        $foundItem = Get-OctopusItemByName -ItemList $allItemsList.Items -ItemName $item

        if ($null -eq $foundItem) {
            Write-OctopusWarning "The $itemName $item was not found in your Octopus Deploy instance."
            $continue = Read-Host -Prompt "Would you like to continue? If yes, the account will not be tied to $itemName $item. y/n"

            if ($continue.ToLower() -ne "y") {
                exit
            }
        }
        else {
            $IdList += $foundItem.Id
        }
    }

    return $IdList
}

Write-OctopusVerbose "This script will do the following:"
Write-OctopusVerbose " 1) In Azure: create an Azure Service Principal and associate it with your desired subscription as a contributor. The password generated is two GUIDs without dashes."
Write-OctopusVerbose " 2) In Octopus Deploy: create an Azure Account using the credentials created in step 1"

Write-OctopusVerbose "For this to work you will need to have the following installed. If it is not installed, then this script will install it for you from the PowerShell Gallery."
Write-OctopusVerbose " 1) Azure Az PowerShell Modules"

$answer = Read-Host -Prompt "Do you wish to continue? y/n"
if ($answer.ToLower() -ne "y") {
    Write-OctopusWarning "You have chosen not to continue. Stopping script"
    Exit
}

Import-AzurePowerShellModules

$OctopusURL = Get-ParameterValue -originalParameterValue $OctopusURL -parameterName "the URL of your Octopus Deploy Instance, example: https://samples.octopus.com"
$OctopusApiKey = Get-ParameterValue -originalParameterValue $OctopusApiKey -parameterName "the API Key of your Octopus Deploy User. See https://octopus.com/docs/octopus-rest-api/how-to-create-an-api-key for a guide on how to create one"
$OctopusSpaceName = Get-ParameterValueWithDefault -originalParameterValue $OctopusSpaceName -parameterName "the name of the space in Octopus Deploy. If left empty it will default to 'Default'" -defaultValue "Default"
$OctopusAccountName = Get-ParameterValueWithDefault -originalParameterValue $OctopusAccountName -parameterName "the name of the account you wish to create in Octopus Deploy. If left empty it will default to 'Bootstrap Azure Account'" -defaultValue "Bootstrap Azure Account"

$spaceInfo = Get-OctopusSpaceInformation -OctopusApiKey $OctopusApiKey -OctopusUrl $OctopusURL -OctopusSpaceName $OctopusSpaceName

Write-OctopusVerbose "Getting the list of accounts on that space in Octopus Deploy to see if it exists"
$existingOctopusAccounts = Invoke-OctopusApi -EndPoint "accounts?skip=0&take=1000000" -method "GET" -SpaceId $spaceInfo.Id -apiKey $OctopusApiKey -OctopusURL $OctopusURL
$existingAccount = Get-OctopusItemByName -ItemList $existingOctopusAccounts.Items -ItemName $OctopusAccountName

$OctopusAndAzureServicePrincipalAlreadyExist = $false
$OctopusEnvironmentIdList = @()
$OctopusTenantIdList = @()

if ($null -ne $existingAccount) {
    $OctopusAndAzureServicePrincipalAlreadyExist = Test-ExistingOctopusAccountWorksWithAzure -OctopusApiKey $OctopusApiKey -OctopusUrl $OctopusURL -SpaceInfo $spaceInfo -ExistingAccount $existingAccount
}
else {
    Write-OctopusWarning "The account $OctopusAccountName does not exist. After creating the Azure Account it will create a new account in Octopus Deploy"
    Write-OctopusVerbose "Octopus accounts can be locked down to specific environments, tenants and tenant tags"

    $OctopusEnvironmentIdList = New-OctopusIdList -OctopusUrl $OctopusURL -OctopusApiKey $OctopusApiKey -spaceInfo $spaceInfo -endPoint "environments" -itemName "environments" -itemParameter $OctopusEnvironmentList
    $OctopusTenantIdList = New-OctopusIdList -OctopusUrl $OctopusURL -OctopusApiKey $OctopusApiKey -spaceInfo $spaceInfo -endPoint "tenants" -itemName "tenants" -itemParameter $OctopusTenantList
}

if ($OctopusAndAzureServicePrincipalAlreadyExist -eq $true) {
    $overwriteExisting = Read-Host -Prompt "Octopus Deploy already has a working connection with Azure. Do you wish to continue? This will create a new password for the service principal account in Azure and update the account in Octopus Deploy. y/n"

    If ($overwriteExisting.ToLower() -ne "y") {
        Write-OctopusSuccess "Octopus Deploy already has a working connection and you elected to leave it as is, stopping script."
        exit
    }
}

$AzureTenantId = Get-ParameterValue -originalParameterValue $AzureTenantId -parameterName "the ID (GUID) of the Azure tenant you wish to connect to. See https://microsoft.github.io/AzureTipsAndTricks/blog/tip153.html on how to get that id"
$AzureSubscriptionName = Get-ParameterValue -originalParameterValue $AzureSubscriptionName -parameterName "the name of the subscription you wish to connect Octopus Deploy to"
$AzureServicePrincipalName = Get-ParameterValue -originalParameterValue $AzureServicePrincipalName -parameterName "the name of the service principal you wish to create in Azure"

Write-OctopusVerbose "Logging into Azure"
Connect-AzAccount -Tenant $AzureTenantId -Subscription $AzureSubscriptionName

Write-OctopusVerbose "Auto-generating new password"
$AzureServicePrincipalPasswordEndDays = Get-ParameterValue -originalParameterValue $AzureServicePrincipalPasswordEndDays -parameterName "the number of days you want the service principal password to be active"

$password = "$(New-Guid)$(New-Guid)" -replace "-", ""
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
$endDate = (Get-Date).AddDays($AzureServicePrincipalPasswordEndDays)

$azureSubscription = Get-AzSubscription -SubscriptionName $AzureSubscriptionName
$azureSubscription | Format-Table

$ExistingApplication = Get-AzADApplication -DisplayName "$AzureServicePrincipalName"
$ExistingApplication | Format-Table

if ($null -eq $ExistingApplication) {
    Write-OctopusVerbose "The Microsoft Entra ID Application does not exist, creating Microsoft Entra ID application"
    $azureAdApplication = New-AzADApplication -DisplayName "$AzureServicePrincipalName" -HomePage "http://octopus.com" -IdentifierUris "http://octopus.com/$($AzureServicePrincipalName)" -Password $securePassword -EndDate $endDate
    $azureAdApplication | Format-Table

    Write-OctopusVerbose "Creating Microsoft Entra ID service principal"
    $servicePrincipal = New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId
    $servicePrincipal | Format-Table

    Write-OctopusSuccess "Azure Service Principal successfully created"
    $AzureApplicationId = $azureAdApplication.ApplicationId
}
else {
    Write-OctopusVerbose "The Microsoft Entra ID service principal $AzureServicePrincipalName already exists, creating a new password for Octopus Deploy to use."
    New-AzADAppCredential -DisplayName "$AzureServicePrincipalName" -Password $securePassword -EndDate $endDate
    Write-OctopusSuccess "Microsoft Entra ID Service Principal password successfully created."
    $AzureApplicationId = $ExistingApplication.ApplicationId
}

if ($null -eq $existingAccount) {
    Write-OctopusVerbose "Now creating the account in Octopus Deploy."
    $tenantParticipation = "Untenanted"
    if ($OctopusTenantIdList.Count -gt 0) {
        $tenantParticipation = "TenantedOrUntenanted"
    }

    $jsonPayload = @{
        AccountType = "AzureServicePrincipal"
        AzureEnvironment = ""
        SubscriptionNumber = $azureSubscription.Id
        Password = @{
            HasValue = $true
            NewValue = $password
        }
        TenantId = $AzureTenantId
        ClientId = $AzureApplicationId
        ActiveDirectoryEndpointBaseUri = ""
        ResourceManagementEndpointBaseUri = ""
        Name = $OctopusAccountName
        Description = "Account created by the bootstrap script"
        TenantedDeploymentParticipation = $tenantParticipation
        TenantTags = @()
        TenantIds = @($OctopusTenantIdList)
        EnvironmentIds = @($OctopusEnvironmentIdList)
    }

    Write-OctopusVerbose "Adding Microsoft Entra ID Service Principal that was just created to Octopus Deploy"
    Invoke-OctopusApi -EndPoint "accounts" -item $jsonPayload -method "POST" -SpaceId $spaceInfo.Id -apiKey $OctopusApiKey -OctopusURL $OctopusURL
    Write-OctopusSuccess "Successfully added the Microsoft Entra ID Service Principal account to Octopus Deploy"
}
else {
    $existingAccount.Password.HasValue = $true
    $existingAccount.Password.NewValue = $password

    Write-OctopusVerbose "Updating the existing account in Octopus Deploy to use the new service principal credentials"
    Invoke-OctopusApi -EndPoint "accounts/$($existingAccount.Id)" -item $existingAccount -method "PUT" -SpaceId $spaceInfo.Id -apiKey $OctopusApiKey -OctopusURL $OctopusUrl
    Write-OctopusSuccess "Successfully updated Azure Service Principal account in Octopus Deploy"
}

Write-OctopusSuccess "Important information to know for future usage:"
Write-OctopusVerbose " 1) The Microsoft Entra ID Tenant Id is: $AzureTenantId"
Write-OctopusVerbose " 2) The Microsoft Azure Subscription Id: $($azureSubscription.SubscriptionId)"
Write-OctopusVerbose " 3) The Microsoft Entra ID Application Id: $AzureApplicationId"
Write-OctopusVerbose " 4) The new password is: $password - this is the only time you'll see this password, please store it in a safe location."
```

# Deployment targets

Source: https://octopus.com/docs/infrastructure/deployment-targets.md

Locations where software may be deployed and run include:

- Containers running in clusters
- Cloud-managed services, platforms, or serverless functions
- Virtual machines or infrastructure as a service
- Self-managed or managed servers
- Point-of-sale devices in retail stores
- Medical devices in hospitals

All these places are different kinds of deployment targets. A deployment target is a location that will host your software. We've used the term _deployment target_ as this could refer to many different destinations, such as:

- Kubernetes clusters
- Cloud apps or services
- Cloud storage
- Windows or Linux servers
- On-premises machines
- Serverless functions
- SSH connections

Deployment targets define where your software is deployed, and they also serve as the basis for Day-2 tasks and additional operational tasks that can be managed by Octopus.

Deployments to Kubernetes clusters are performed by a lightweight agent that runs on the cluster. If you're deploying to a Windows or Linux server or virtual machine, your deployment target will run a lightweight agent called a Tentacle. For cloud services, such as Amazon ECS or Azure Web Apps, the deployment is made through a worker instead.

You select the target type when you add it to Octopus. Based on the type, you'll be prompted to set up the appropriate connection using a simple form.
## Learn more

- [How Octopus counts deployment targets](/docs/infrastructure/deployment-targets/how-octopus-counts-deployment-targets)
- [Adding deployment targets](/docs/getting-started/first-deployment/add-deployment-targets)

# Azure Service Fabric cluster targets

Source: https://octopus.com/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets.md

Azure Service Fabric Cluster deployment targets let you reference existing Service Fabric Cluster apps that are available in your Azure subscription. You can then reference these by [target tag](/docs/infrastructure/deployment-targets/target-tags) during deployments.

## Requirements

1. The **Service Fabric SDK** must be installed on the Octopus Server. For details, see [Service Fabric SDK](https://oc.to/ServiceFabricSdkDownload). If this SDK is missing, the step will fail with an error: _"Could not find the Azure Service Fabric SDK on this server."_
2. The **PowerShell script execution** may also need to be enabled. For details see [Enable PowerShell script execution](https://oc.to/ServiceFabricEnableScriptExection).

After the above SDK has been installed, you will need to restart your Octopus Server for the changes to take effect.

You need to create a Service Fabric cluster (either in Azure, on-premises, or in other clouds). Octopus needs an existing Service Fabric cluster to connect to in order to reference it as a deployment target. To learn about building Azure Service Fabric apps, see the [Service Fabric documentation](https://azure.microsoft.com/en-au/services/service-fabric/).

## Creating Service Fabric cluster targets

Once you have a Service Fabric Cluster application set up within your Azure subscription, you are ready to map that to an Octopus deployment target. To create an Azure Service Fabric Cluster target within Octopus:

- Navigate to **Infrastructure ➜ Deployment Targets ➜ Add Deployment Target**.
- Select **Azure Service Fabric Cluster** from the list of available targets and click _Next_.
- Fill out the necessary fields, being sure to provide a unique target tag that clearly identifies your Azure Service Fabric Cluster target.

:::figure
![](/docs/img/infrastructure/deployment-targets/azure/service-fabric-cluster-targets/create-azure-service-fabric-cluster-target.png)
:::

- After clicking _Save_, your deployment target will be added and a health check task will run to ensure Octopus can connect to the target.
- If all goes well, you should see your newly created target in your **Deployment Targets** list with a status of _Healthy_.

## Troubleshooting

If your Azure Service Fabric Cluster target does not successfully complete a health check, you may need to check that your Octopus Server can communicate with Azure. If your Octopus Server is behind a proxy or firewall, you will need to consult your Systems Administrator to ensure it can communicate with Azure. Alternatively, it could be the security settings of your Service Fabric Cluster denying access. Our deployments documentation discusses [the various security modes of Service Fabric](/docs/deployments/azure/service-fabric/#security-modes) in greater detail.

## Deploying to Service Fabric targets

To learn about deploying to Service Fabric targets, see our [documentation about this topic](/docs/deployments/azure/service-fabric).
# Create AWS account command Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/aws-accounts.md ## AWS account Command: **_New-OctopusAwsAccount_** _**New-OctopusAwsAccount** allows you to create an AWS account in Octopus from within a running deployment_ | Parameters | Value | |-------------------------------|------------------------------------------------------------------------------------------------------------| | `-name` | Name for the AWS account | | `-secretKey` | The AWS secret key to use when authenticating against Amazon Web Services. | | `-accessKey` | The AWS access key to use when authenticating against Amazon Web Services. | | `-updateIfExisting` | Will update an existing account with the same name, create if it doesn't exist | Example: ```powershell New-OctopusAwsAccount -name "My AWS Account" ` -secretKey "7U4MhdfjgcAk9niwPgXD81pTYY+fIvVsN3m" ` -accessKey "AKIAVY29QTUTKPJC3R5K" ` -updateIfExisting ``` # Create Azure Service Principal account command in Octopus Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/azure-accounts.md ## Azure Service Principal account Command: **_New-OctopusAzureServicePrincipalAccount_** _**New-OctopusAzureServicePrincipalAccount** allows you to create an Azure Service Principal account in Octopus from within a running deployment_ | Parameters | Value | |-------------------------------|------------------------------------------------------------------------------------------------------------| | `-name` | Name for the Azure Service Principal account | | `-azureSubscription` | GUID Id of the Azure Subscription | | `-azureApplicationId` | GUID Id of the Microsoft Entra ID Application | | `-azureTenantId` | GUID Id of the Microsoft Entra ID Tenant | | `-azurePassword` | Microsoft Entra ID Password | | `-azureEnvironment` | Azure Environment Identifier, see [Azure Environment Options](#azure-environment-options) below | | `-azureBaseUri` | Azure 
Base Login URI, see [Azure Environment Options](#azure-environment-options) below | | `-azureResourceManagementBaseUri` | Azure Resource Management URI, see [Azure Environment Options](#azure-environment-options) below | | `-updateIfExisting` | Will update an existing account with the same name, create if it doesn't exist | Example: ```powershell # Targeting the Azure Global Cloud New-OctopusAzureServicePrincipalAccount -name "My Azure Account" ` -azureSubscription "dea39b53-1ac8-4adc-b291-a44b205921af" ` -azureApplicationId "f83ece42-857d-44ed-9652-0765af7fa7d4" ` -azureTenantId "e91671b4-a676-4cb6-8ff8-69fcb8e048d6" ` -azurePassword "correct horse battery staple" ` -updateIfExisting # Targeting an isolated Cloud, e.g. AzureGermanCloud New-OctopusAzureServicePrincipalAccount -name "My Azure Account" ` -azureSubscription "dea39b53-1ac8-4adc-b291-a44b205921af" ` -azureApplicationId "f83ece42-857d-44ed-9652-0765af7fa7d4" ` -azureTenantId "e91671b4-a676-4cb6-8ff8-69fcb8e048d6" ` -azurePassword "correct horse battery staple" ` -azureEnvironment "AzureGermanCloud" ` -azureBaseUri "https://login.microsoftonline.de/" ` -azureResourceManagementBaseUri "https://management.microsoftazure.de/" ``` ### Azure environment options The valid options for `-azureEnvironment` are available via the following command (note that `Get-AzureRmEnvironment` is part of the legacy AzureRM module; if you use the newer Az module, the equivalent is `Get-AzEnvironment`): ```powershell Get-AzureRmEnvironment | Select-Object -Property Name,ActiveDirectoryAuthority,ResourceManagerUrl ``` Valid Azure cloud names are: - AzureChina - AzureCloud - AzureGermanCloud - AzureUSGovernment # Create token account command Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/token-accounts.md ## Token account Command: **_New-OctopusTokenAccount_** _**New-OctopusTokenAccount** allows you to create a Token account in Octopus from within a running deployment_ | Parameters | Value | |-------------------------------|------------------------------------------------------------------------------------------------------------| | 
`-name` | Name for the Token account | | `-token` | The token to use when authenticating against the remote host. | | `-updateIfExisting` | Will update an existing account with the same name, create if it doesn't exist | Example: ```powershell New-OctopusTokenAccount -name "My Token Account" ` -token "dea39b531ac84adcb291a44b205921af" ` -updateIfExisting ``` # Create username/password command Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/username-password-accounts.md ## Username/password account Command: **_New-OctopusUserPassAccount_** _**New-OctopusUserPassAccount** allows you to create a username/password account in Octopus from within a running deployment_ | Parameters | Value | |-------------------------------|------------------------------------------------------------------------------------------------------------| | `-name` | Name for the Username/Password account | | `-username` | The username to use when authenticating against the remote host. | | `-password` | The password to use when authenticating against the remote host. | | `-updateIfExisting` | Will update an existing account with the same name, create if it doesn't exist | Example: ```powershell New-OctopusUserPassAccount -name "My Username Password Account" ` -username "myuser" ` -password "correct horse battery staple" ` -updateIfExisting ``` # How Octopus counts deployment targets Source: https://octopus.com/docs/infrastructure/deployment-targets/how-octopus-counts-deployment-targets.md Octopus Deploy counts deployment targets based on your license type. New customers and renewals will be on a PTM license; previously, we offered a per-target license: - PTM license (projects/tenants/machines) – Has a project limit with optional tenant and machine limits in the license key. Sold from Q4 2023 onward. - Per target license – Only has a target limit, with no project or tenant limits in the license key. Sold between 2017 and 2024.
From May 1, 2024, per target licenses are no longer available. **Please note:** If you’re unsure of your license type, please refer to your invoice or contact [sales@octopus.com](mailto:sales@octopus.com). | Deployment target | Machine Count Against License (PTM) | Target Count Against License (Per-Target) | Important Note | | --- | --- | --- | --- | | [Windows Server running a Tentacle](/docs/infrastructure/deployment-targets/tentacle/windows) | 1 machine per Tentacle instance | 1 target per Tentacle instance | Listening Tentacles registered multiple times on the same instance will count as one (1) target. | | [Linux Server running a Tentacle](/docs/infrastructure/deployment-targets/tentacle/linux) | 1 machine per Tentacle instance | 1 target per Tentacle instance | Listening Tentacles registered multiple times on the same instance will count as one (1) target. | | [SSH Connection](/docs/infrastructure/deployment-targets/linux/ssh-target) | 1 machine per SSH Connection | 1 target per SSH Connection | | | [Kubernetes agent](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent) | 0 machines per Kubernetes Agent | 1 target per Agent instance | | | [Kubernetes Cluster](/docs/kubernetes/targets/kubernetes-api) | 0 machines per Kubernetes Cluster | 1 target per Kubernetes Namespace | A namespace is required when registering the Kubernetes cluster with Octopus. By default, the namespace used in the registration is used in health checks and deployments. The namespace can be overwritten in the deployment process. | | [AWS ECS Cluster](/docs/infrastructure/deployment-targets/amazon-ecs-cluster-target) | 0 machines per ECS Cluster | 1 target per ECS Cluster | | | [Azure Web App / Function / Cloud Service](/docs/infrastructure/deployment-targets/azure/web-app-targets) | 0 machines per Web App / Function | 1 target per Web App / Function | This represents how Octopus _currently_ counts Azure Web Apps / Functions. However, one (1) Azure Web App / Function is not equal to one (1) Linux server.
| | [Azure Service Fabric Cluster](/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets) | 0 machines per cluster | 1 target per cluster | | | [Offline Package Drops](/docs/infrastructure/deployment-targets/offline-package-drop) | 0 machines per offline package drop | 1 target per offline package drop | | | [Cloud Region](/docs/infrastructure/deployment-targets/cloud-regions) | 0 machines per cloud region | 0 targets per cloud region | Cloud regions are legacy targets that pre-dated workers as a mechanism to run scripts on cloud providers. They are used today to execute scripts multiple times with variables scoped for each iteration. | **Please note:** Octopus only counts Windows servers, Linux servers, ECS clusters, Kubernetes clusters, etc., registered with an Octopus Deploy instance. If you have 5,000 Linux servers and 4,000 are registered with Octopus, then Octopus only counts those 4,000 against your license. ## Adding deployment targets You add deployment targets in different ways, depending on the type of target and how the target will communicate with the Octopus Server. 
For instructions, see: - [Listening and Polling Windows Tentacles](/docs/infrastructure/deployment-targets/tentacle/windows) - [Linux SSH connection](/docs/infrastructure/deployment-targets/linux/ssh-target) - [Linux Tentacle](/docs/infrastructure/deployment-targets/tentacle/linux) - [Azure Web App](/docs/infrastructure/deployment-targets/azure/web-app-targets) - [Azure Cloud Service](/docs/infrastructure/deployment-targets/azure/cloud-service-targets) - [Azure Service Fabric cluster](/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets) - [AWS](/docs/infrastructure/accounts/aws) - [Kubernetes agent](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent) - [Kubernetes API](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api) - [Offline package drop](/docs/infrastructure/deployment-targets/offline-package-drop) - [Cloud regions](/docs/infrastructure/deployment-targets/cloud-regions) ## Learn more - Target tags - Dynamic infrastructure - Machine policies - Proxy support # SSH target Source: https://octopus.com/docs/infrastructure/deployment-targets/linux/ssh-target.md The Octopus Server can communicate with Linux targets via SSH. When using SSH for deployments to a Linux server, the Tentacle agent is not required and doesn't need to be installed. ## Configuring SSH targets Before you configure an SSH deployment target, review the SSH target [requirements](/docs/infrastructure/deployment-targets/linux/ssh-requirements) and ensure your SSH deployment targets have the required packages installed. ## Create an SSH account The SSH connection you configure will use an account with either an [SSH Key Pair](/docs/infrastructure/accounts/ssh-key-pair/) or a [Username and Password](/docs/infrastructure/accounts/username-and-password) that has access to the remote host. See [accounts](/docs/infrastructure/accounts/ssh-key-pair) for instructions to configure the account. ## Add an SSH connection 1. 
In the **Octopus Web Portal**, navigate to the **Infrastructure** tab, select **Deployment Targets** and click **ADD DEPLOYMENT TARGET**. 2. Choose either **LINUX** or **MAC** and click **ADD** on the SSH Connection card. 3. Enter the DNS or IP address of the deployment target, e.g., `example.com` or `10.0.1.23`. 4. Enter the port (port 22 by default) and click **NEXT**. Make sure the target server is accessible on the port you specify. The Octopus Server will attempt to perform the required protocol handshakes and obtain the remote endpoint's public key fingerprint automatically rather than have you enter it manually. This fingerprint is stored and verified by the server on all subsequent connections. If this discovery process is not successful, you will need to click **ENTER DETAILS MANUALLY**. 5. Give the target a name. 6. Select which environment the deployment target will be assigned to. 7. Choose or create at least one target tag for the deployment target and click **Save**. Learn about [target tags](/docs/infrastructure/deployment-targets/target-tags). 8. Select the account that will be used for the Octopus Server and the SSH target to communicate. 9. If entering the details manually, enter the **Host**, **Port** and the host's fingerprint. :::div{.hint} From Octopus Server **2024.2.6856**, both SHA256 and MD5 fingerprints are supported. We recommend using SHA256 fingerprints. ::: You can retrieve the fingerprint of the default key configured in your sshd\_config file from the target server with the following command: ```bash ssh-keygen -E sha256 -lf /etc/ssh/ssh_host_ed25519_key.pub | awk '{ print $2 }' ``` For Octopus Server prior to **2024.2.6856** use the following: ```bash ssh-keygen -E md5 -lf /etc/ssh/ssh_host_ed25519_key.pub | awk '{ print $2 }' | cut -d':' -f2- ``` 10. Select the **Platform** (OS and architecture) of the target server. 11. Click **Save**. ## Health check Once the target is configured, Octopus will perform an initial health check. 
Health checks are done periodically or on demand. They ensure the endpoint is reachable, is configured correctly, has the required dependencies available (e.g. `tar`; for more details see [requirements](/docs/infrastructure/deployment-targets/linux/ssh-requirements)), and is ready to perform deployment tasks. If Calamari is not present or is out-of-date, a warning will be displayed; however, Calamari will be updated when it is next required by a task. If the SSH target is healthy, the version that is displayed is the version of the Octopus Server instance. If the fingerprint changes after initial configuration, the next health check will update the fingerprint. If the fingerprint returned during the handshake is different to the value stored in the database, the new fingerprint will show up in the logs. If you aren't expecting a change and you see this error, it could mean you have been compromised! Learn more about health checks and [machine policies](/docs/infrastructure/deployment-targets/machine-policies). ## Running scripts on SSH endpoints You can use [raw scripting](/docs/deployments/custom-scripts/raw-scripting/) to run scripts on SSH endpoints without any additional Octopus dependencies. You can set [machine policies](/docs/infrastructure/deployment-targets/machine-policies) to configure health checks that only test for SSH connectivity for the target to be considered healthy. ## Learn more - [Linux blog posts](https://octopus.com/blog/tag/linux/1) # Troubleshooting Listening Tentacles Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/troubleshooting/troubleshooting-listening.md ## Communication settings To verify the communication settings, *on the Tentacle machine*, open the Tentacle Manager application from the Start screen or Start menu. 1. Ensure that the Tentacle is in *listening* mode. Below the thumbprint, you should see the text *This Tentacle listens for connections on port 10933*. 2. Check the port that the Tentacle listens on. 
3. Check that the **Octopus Server** thumbprint shown in light gray in the Tentacle manager matches the one shown in the **Configuration ➜ Thumbprints** screen in the Octopus Web Portal. Note that there are two thumbprints displayed - that of the Tentacle itself (shown first in bold) and the thumbprints of trusted servers (shown inline in the gray text). If any of the communications settings are incorrect, choose *Delete this Tentacle instance...*. After doing so, you'll be presented with the Tentacle installation wizard, where the correct settings can be chosen. If the settings are correct, continue to the next step. ## Check the connections To help with diagnostics, we've included a welcome page you can connect to from your web browser. When you conduct these checks: - If you're presented with a prompt to "confirm a certificate" or "select a certificate" choose "Cancel" - don't provide one. - If you're presented with a warning about the invalidity of the site's certificate, "continue to the site" or "add an exception" (Octopus Server uses a self-signed certificate by default). *On the Tentacle machine*, open a web browser and navigate to `https://localhost:10933` (or your chosen Tentacle communications port if it isn't the default). Make sure an **HTTPS** URL is used. The page shown should look like the one below. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/images/3278074.png) ::: :::div{.hint} **If you can't browse to the page...** If this is where your journey ends, there's a problem on the Tentacle machine. It is very likely that the Tentacle is unable to open the communications port, either because of permissions, or because another process is listening on that port. Using the Windows `netstat -o -n -a -b` command can help to get to the bottom of this quickly. If you're still in trouble, check the Tentacle [log files](/docs/support/log-files) and contact Octopus Deploy support. 
::: Next, repeat the process of connecting to the Tentacle with a web browser, but do this *from the Octopus Server machine*. When forming the URL to check: - First try using the Tentacle's DNS hostname, e.g. `https://my-tentacle:10933`. - If this fails, try using the Tentacle's IP address instead, e.g. `https://1.2.3.4:10933` - success using the IP address but not the DNS hostname will indicate a DNS issue. **If you can't connect...** Failing to connect at this step means that you have a network issue preventing traffic between the Octopus Server and Tentacles. Check that the Tentacle port is open in any firewalls, and that other services on the network are working. There's not usually much that Octopus Deploy Support can suggest for these issues as networks are complex and highly varied. Having the network administrator from your organization help diagnose the issue is the best first step. If that draws a blank, please get in touch. Remember to check both the built-in Windows Firewall and any other firewalls (for example, in Amazon EC2, check your security group settings). :::div{.problem} **Watch out for proxy servers or SSL offloading...** Octopus and Tentacle use TCP to communicate, with special handling to enable web browsers to connect for diagnostic purposes. Full HTTP is not supported, so network services like **SSL offloading** are not supported, and **proxies** are not supported in earlier versions of Octopus Deploy. Make sure there's a direct connection between the Octopus Server and Tentacle, without an HTTP proxy or a network appliance performing SSL offloading in between. Also see [advanced support for HTTP proxies](/docs/infrastructure/deployment-targets/proxy-support). ::: ## Tentacle ping We have built a small utility for testing the communications protocol between two servers called [Tentacle Ping](https://github.com/OctopusDeploy/TentaclePing). 
This tool helps isolate the source of communication problems without needing a full Octopus configuration. It is built as a simple client and server component that emulates the communications protocol used by Octopus Server and Tentacle. In **Octopus 3.0** you will need **TentaclePing** and **TentaclePong**; you cannot test directly against Octopus Server or Tentacle: - Run **TentaclePing** on your Octopus Server machine (which is the client in this relationship). - Run **TentaclePong** on your Tentacle machine (which is the server in this relationship). Use the output to help diagnose what is going wrong. ## Check the IP address Your Octopus Server or Tentacle Server may have multiple IP addresses that they listen on. For example, in Amazon EC2, machines in a VPC might have both an internal IP address and an external address using NAT. Octopus Server and Tentacle Server may not listen on all addresses; you can check which addresses are configured on the server by running `ipconfig /all` from the command line and looking for the IPv4 addresses. ## Schannel and TLS configuration mismatches Octopus uses `Schannel` for secure communications and will attempt to use the best protocol available to both servers. If you are seeing error messages like the ones below, try [Troubleshooting Schannel and TLS](/docs/security/octopus-tentacle-communication/troubleshooting-schannel-and-tls): Client-side: `System.Security.Authentication.AuthenticationException: A call to SSPI failed, see inner exception. 
---> System.ComponentModel.Win32Exception: One or more of the parameters passed to the function was invalid` Server-side: `System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.` ## Other error messages **Halibut.Transport.Protocol.ConnectionInitializationFailedException: Unable to process remote identity; unknown identity 'HTTP/1.0'** If a Tentacle health check fails with an error containing this message, then there is network infrastructure inserting a web page into the communication. The most common components to do this are firewalls and proxy servers, so it's recommended to check your network setup to verify connectivity between the two servers using the information above and then update your infrastructure appropriately. **Halibut.HalibutClientException: An error occurred when sending a request to 'https://my-tentacle:10933', before the request could begin: Attempted to read past the end of the stream.** If your Octopus server certificate was [generated with SHA1](/docs/security/cve/shattered-and-octopus-deploy) then you might get this error when connecting to modern Linux distributions, as the default security configuration now rejects communication using SHA1. To regenerate your Octopus server certificate, follow the documentation [How to regenerate certificates with Octopus Server and Tentacle](/docs/security/octopus-tentacle-communication/regenerate-certificates-with-octopus-server-and-tentacle). ## Check for zombie child processes locking TCP ports If Tentacle fails to start with an error message like this: **A required communications port is already in use.** The most common scenario is when you already have an instance of Tentacle (or something else) listening on the same TCP port. However, we have seen cases where there is no running Tentacle in the list of processes. 
In this very specific case, it could be due to a zombie PowerShell.exe or Calamari.exe process that was launched by Tentacle and is still holding the TCP port. This can happen when attempting to cancel a task that has hung inside Calamari/PowerShell. Rebooting the machine or killing the zombie process will fix this issue, and you should be able to start Tentacle successfully. ## Check the server service account permissions If the Tentacle is running as the *Local System* account, you can skip this section. If the Tentacle is running as a specific user, make sure that the user has "full control" permission to the *Octopus Home* folder on the Tentacle machine. This is usually `C:\Octopus` - apply permissions recursively. ## Check the load time In some DMZ-style environments without Internet access, failing to disable Windows code signing certificate revocation list checking will cause Windows to pause during loading of the Octopus applications and installers. This can have a significant negative performance impact, which may prevent Octopus and Tentacles from connecting. To test this on a listening Tentacle, run: ```powershell Tentacle.exe help ``` If the command help is not displayed immediately (< 1s), you may need to consider disabling the CRL check while the Tentacle is configured. To do this, open **Control Panel ➜ Internet Options ➜ Advanced**, and uncheck the *Check for publisher's certificate revocation* option as shown below. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/images/3278077.png) ::: ## Uninstall Tentacles If you get to the end of this guide without success, it can be worthwhile to completely remove the Tentacle configuration, data, and working folders, and then reconfigure it from scratch. This can be done without any impact on the applications you have deployed. Learn about [manually uninstalling Tentacle](/docs/administration/managing-infrastructure/tentacle-configuration-and-file-storage/manually-uninstall-tentacle). 
Working from a clean slate can sometimes expose the underlying problem. # Windows Tentacle Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows.md When you deploy software to Windows servers, you need to install Tentacle, a lightweight agent service, on your Windows servers so they can communicate with the Octopus Server. When installed, Tentacles: - Run as a Windows service called **OctopusDeploy Tentacle**. - Wait for tasks from Octopus (deploy a package, run a script, etc). - Report the progress and results back to the Octopus Server. Before you install Tentacle, review the software and hardware requirements for: - [The latest version of Tentacle](/docs/infrastructure/deployment-targets/tentacle/windows/requirements). - [Versions prior to Tentacle 3.1](/docs/infrastructure/deployment-targets/tentacle/windows/requirements/legacy-requirements). ## .NET Framework dependency Tentacle for Windows is published as either a .NET Framework dependent executable or a self-contained executable. The framework-dependent Tentacle will require a compatible version of .NET Framework to be installed. The self-contained Tentacle bundles the .NET runtime with the application. To learn more about .NET publishing options, see the [Microsoft docs](https://learn.microsoft.com/en-us/dotnet/core/deploying/). Once you have installed one type of Tentacle, future updates of Tentacle done via Octopus Server will respect the current publishing mode. ## Communication mode Tentacles can be configured to communicate in Listening mode or Polling mode. Listening mode is the recommended communication style. Learn about the differences between the two modes and when you might choose to use Polling mode instead of Listening mode on the [Tentacle communication](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication) page. 
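Related to the .NET Framework dependency section above: if you are unsure which .NET Framework 4.x release a prospective Tentacle server has installed, you can query the registry key Microsoft documents for .NET Framework 4.x detection. This is a sketch, not an official Octopus tool; the `Release` DWORD maps to a specific 4.x version according to Microsoft's version table.

```powershell
# Standard detection key for .NET Framework 4.x installs.
# Map the Release value to a version using Microsoft's
# ".NET Framework versions and dependencies" table.
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" |
    Select-Object -Property Version, Release
```

If the key is absent, no .NET Framework 4.x release is installed, and the framework-dependent Tentacle will not run until one is added (or you can use the self-contained Tentacle instead).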
## Download the Tentacle installer Octopus Tentacle is available to download for both Windows and Linux (GZip, APT, and RPM) from the [downloads page](https://octopus.com/downloads). ## Configure a Listening Tentacle (recommended) Before you can configure your Windows servers as Tentacles, you need to install Tentacle Manager on the machines that you plan to use as Tentacles. Tentacle Manager is the Windows application that configures your Tentacle. Once installed, you can access it from your start menu/start screen. Tentacle Manager can configure Tentacles to use a [proxy](/docs/infrastructure/deployment-targets/proxy-support), delete the Tentacle, and show diagnostic information about the Tentacle. 1. Start the Tentacle installer, accept the license agreement, and follow the prompts. 2. When the Octopus Deploy Tentacle Setup Wizard has completed, click **Finish** to exit the wizard. 3. When the Tentacle Manager launches, click **GET STARTED**. 1. On the communication style screen, select **Listening Tentacle** and click **Next**. 1. In the **Octopus Web Portal**, navigate to the **Infrastructure** tab, select **Deployment Targets** and click **ADD DEPLOYMENT TARGET ➜ WINDOWS**, and select **Listening Tentacle**. 1. Copy the **Thumbprint** (the long alphanumerical string). 1. Back on the Tentacle server, accept the default listening port **10933**, paste the **Thumbprint** into the **Octopus Thumbprint** field, and click **Next**. 1. Click **INSTALL**, and after the installation has finished, click **Finish**. 1. Back in the **Octopus Web Portal**, enter the hostname or IP address of the machine the Tentacle is installed on, e.g., `example.com` or `10.0.1.23`, and click **NEXT**. 1. Add a display name for the deployment target (the server where you just installed the Listening Tentacle). 1. Select which [environments](/docs/infrastructure/environments) the deployment target will be assigned to. 1. 
Choose or create at least one [target tag](/docs/infrastructure/deployment-targets/#target-roles) for the deployment target and click **Save**. Your deployment target is configured; next, you need to perform a [health check and update Calamari](/docs/infrastructure/deployment-targets/machine-policies/#health-check). If the Tentacle isn't connecting, try the steps on the [troubleshooting page](/docs/infrastructure/deployment-targets/tentacle/troubleshooting-tentacles). ### Update your Tentacle firewall To allow your Octopus Server to connect to the Tentacle, you'll need to allow access to TCP port **10933** on the Tentacle (or the port you selected during the installation wizard). #### Intermediary firewalls Don't forget to allow access in any intermediary firewalls between the Octopus Server and your Tentacle (not just the Windows Firewall). For example, if your Tentacle server is hosted in Amazon EC2, you'll also need to modify the AWS security group firewall to tell EC2 to allow the traffic. Similarly, if your Tentacle server is hosted in Microsoft Azure, you'll also need to add an Endpoint to tell Azure to allow the traffic. ## Configure a Polling Tentacle Listening Tentacles are recommended, but there might be situations where you need to configure a Polling Tentacle. You can learn about the difference between Listening Tentacles and Polling Tentacles on the [Tentacle communication](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication) page. Before you can configure your Windows servers as Tentacles, you need to install Tentacle Manager on the machines that you plan to use as Tentacles. Tentacle Manager is the Windows application that configures your Tentacle. Once installed, you can access it from your start menu/start screen. Tentacle Manager can configure Tentacles to use a [proxy](/docs/infrastructure/deployment-targets/proxy-support), delete the Tentacle, and show diagnostic information about the Tentacle. 1. 
Start the Tentacle installer, accept the license agreement, and follow the prompts. 2. When the Octopus Deploy Tentacle Setup Wizard has completed, click **Finish** to exit the wizard. 3. When the Tentacle Manager launches, click **GET STARTED**. 1. On the communication style screen, select **Polling Tentacle** and click **Next**. 1. If you are using a proxy, see [Proxy Support](/docs/infrastructure/deployment-targets/proxy-support); otherwise, click **Next**. 1. Add the Octopus credentials the Tentacle will use to connect to the Octopus Server: a. The Octopus URL: the hostname or IP address. b. Select the authentication mode and enter the details: i. The username and password you use to log into Octopus, or: ii. Your Octopus API key; see [How to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key). :::div{.hint} The Octopus credentials specified here are only used once to configure the Tentacle. All future communication is performed over a [secure TLS connection using certificates](/docs/security/octopus-tentacle-communication/#Octopus-Tentaclecommunication-Scenario:PollingTentacles). ::: 1. Click **Verify credentials**, and then click **Next**. 1. Give the machine a meaningful name and select which [environments](/docs/infrastructure/environments) the deployment target will be assigned to. 1. Choose or create at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the deployment target. 1. Leave **Tenants** and **Tenant tags** blank unless you are already using Octopus to deploy applications to multiple end users. If you are using Octopus for multiple tenants, enter the **Tenants** and **Tenant Tags**. Learn more about [Multi-tenant Deployments](/docs/tenants). 1. Click **Install**, and when the script has finished, click **Finish**. Your deployment target is configured; next, you need to perform a [health check and update Calamari](/docs/infrastructure/deployment-targets/machine-policies/#health-check). 
If the Tentacle isn't connecting, try the steps on the [troubleshooting page](/docs/infrastructure/deployment-targets/tentacle/troubleshooting-tentacles). ### Update your Octopus Server firewall To allow Tentacle to connect to your Octopus Server, you'll need to allow access to port **10943** on the Octopus Server (or the port you selected during the installation wizard - port 10943 is just the default). You will also need to allow Tentacle to access the HTTP Octopus Web Portal (typically port **80** or **443** - these bindings are selected when you [install the Octopus Server](/docs/installation)). If your network rules only allow port **80** and **443** to the Octopus Server, you can either: - Change the server bindings to either HTTP or HTTPS and use the remaining port for polling Tentacle connections. - The listening port Octopus Server uses can be [changed from the command line](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) using the `--commsListenPort` option. Even if you do use port **80** for Polling Tentacles, the communication is still secure. - Use a reverse proxy to redirect incoming connections to the Tentacle listening port on Octopus Server by differentiating the connection based on Hostname (TLS SNI) or IP Address - The polling endpoint Tentacle uses can be [changed from the command line](/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443/#self-hosted) using the `--server-comms-address` option. - You can learn about this configuration on the [Polling Tentacles over standard HTTPS port](/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443) page. Note that the port (or address) used to poll Octopus for jobs is different from the port (or address) used by your team to access the Octopus Deploy web interface; this is on purpose, and it means you can use different firewall conditions to allow Tentacles to access the Octopus Server by IP address. 
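Before adjusting firewall rules, it can help to confirm whether the polling port is already reachable from the Tentacle machine. Below is a minimal sketch using the built-in `Test-NetConnection` cmdlet; `octopus.example.com` is a placeholder for your Octopus Server's address, and 10943 is the default polling port discussed above.

```powershell
# Placeholder host - replace with your Octopus Server's address.
# TcpTestSucceeded: True means the polling port is reachable; False
# usually points at an intermediary firewall blocking the connection.
Test-NetConnection -ComputerName "octopus.example.com" -Port 10943 |
    Select-Object -Property ComputerName, RemotePort, TcpTestSucceeded
```

If this check fails but port 443 succeeds, that is a good indication you need one of the port 443 options described above.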
Using polling mode, you won't typically need to make any firewall changes on the Tentacle machine.

### Intermediary firewalls

Don't forget to allow access not just in Windows Firewall, but also in any intermediary firewalls between the Tentacle and your Octopus Server.

For example, if your Octopus Server is hosted in Amazon EC2, you'll also need to modify the AWS security group firewall to tell EC2 to allow the traffic. Similarly, if your Octopus Server is hosted in Microsoft Azure, you'll also need to add an Endpoint to tell Azure to allow the traffic.

## Windows Server 2008 Limited Support

From Octopus Server release `2026.1` there will be [limited support](https://octopus.com/docs/deprecations#dropping-capability-for-windows-server-2008-workers-and-targets-in-20251) for executing workloads on targets or workers running Windows Server 2008. As the version of [Calamari](https://octopus.com/docs/octopus-rest-api/calamari) shipped with later versions of Octopus Server will no longer be compatible with Windows Server 2008, the capability has been made available to pin the version of Calamari used on the target itself. To enable this:

1. Download the [most recent version of Calamari](https://download.octopusdeploy.com/calamari/Calamari.Legacy.zip) (verify with the [sha256](https://download.octopusdeploy.com/calamari/Calamari.Legacy.sha256) or [md5](https://download.octopusdeploy.com/calamari/Calamari.Legacy.md5) checksum). This version of Calamari is unlikely to be updated unless significant vulnerabilities are discovered, so it may be missing capabilities released after `2026.1`.
2. Extract the zip contents to a location on the Tentacle.
3. Set an environment variable `CalamariDirectoryPath` with a value of the extracted location. This variable should be provided in a context that will be available to the Tentacle process.
4. Restart the Tentacle process to ensure that it has access to this new variable.
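On a Windows target, the steps above might look like the following PowerShell sketch. The download paths are illustrative, and the hash check is manual; only `CalamariDirectoryPath` and the Tentacle service name come from the product:

```
# 1. Download and verify the legacy Calamari package
Invoke-WebRequest https://download.octopusdeploy.com/calamari/Calamari.Legacy.zip -OutFile C:\Tools\Calamari.Legacy.zip
(Get-FileHash C:\Tools\Calamari.Legacy.zip -Algorithm SHA256).Hash
# Compare the printed hash against Calamari.Legacy.sha256 before continuing

# 2. Extract the package
Expand-Archive C:\Tools\Calamari.Legacy.zip -DestinationPath C:\Tools\Calamari.Legacy

# 3. Make the variable visible to the Tentacle Windows service, then 4. restart it
[Environment]::SetEnvironmentVariable("CalamariDirectoryPath", "C:\Tools\Calamari.Legacy", "Machine")
Restart-Service "OctopusDeploy Tentacle"
```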
The next time a deployment is run, Octopus Server will ignore the embedded Calamari version and instead use the one available on the Tentacle. Warnings will continue to be logged to provide a clear signal that this limited support is considered a temporary stop-gap, as we are likely to drop support entirely for this workaround from `2026.3`. # Tentacle installation requirements Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/requirements.md If you're using a version prior to **Tentacle 3.1**, refer to the [installation requirements for older versions of Tentacle](/docs/infrastructure/deployment-targets/tentacle/windows/requirements/legacy-requirements). The installation requirements for the latest version of Tentacle are: ## Windows - Windows Server 2012 - Windows Server 2012 R2 - Windows Server 2016 (Both "Server Core" and "Server with a GUI" installations are supported for Tentacle). - Windows Server 2019 - Windows Server 2022 - Windows Server 2025 - Windows 10 Enterprise LTSC 2021 (Version 21H2) :::div{.warning} Octopus does not actively test against Windows 2008 nor Windows 2008 R2. Certain operating system specific issues may not be fixed as [Microsoft no longer supports Windows 2008](https://docs.microsoft.com/en-us/lifecycle/products/windows-server-2008) nor [Windows 2008R2](https://docs.microsoft.com/en-us/lifecycle/products/windows-server-2008-r2). ::: ## .NET Framework The table below outlines the **minimum** required version of .NET Framework for Tentacle **3.1** and higher. 
| Tentacle | .NET Framework version | | -------------- | ---------------------- | | 3.1 -> 4.0.7 | 4.5.1+ ([download](https://dotnet.microsoft.com/download/dotnet-framework/thank-you/net451-web-installer)) | | 5.0 -> 6.2.277 | 4.5.2+ ([download](https://dotnet.microsoft.com/download/dotnet-framework/thank-you/net452-web-installer)) | | 6.3 -> latest | 4.8+ ([download](https://dotnet.microsoft.com/download/dotnet-framework/thank-you/net48-web-installer)) | ## Windows PowerShell - Windows PowerShell 2.0. This is automatically installed on 2008 R2. - Windows PowerShell 3.0 or 4.0 is recommended, both of which are compatible with PowerShell 2.0, but execute against .NET 4.0+. - Windows PowerShell 5.1 is required to run Azure steps ## Hardware requirements - Hardware minimum: 512MB RAM, 1GHz CPU, 2GB free disk space. Tentacle uses a small amount of memory when idle, usually around 10MB (it may appear higher in task manager because memory is shared with other .NET processes that are running). When deploying, depending on what happens during the deployment, this may expand to 60-100MB, and will then go back down after the deployment is complete. Tentacle will happily run on single-core machines, and only uses about 100MB of disk space, though, of course, you'll need more than that to deploy your applications. ## Download the Tentacle installer Octopus Tentacle is available to download for both Windows and Linux (GZip, APT, and RPM) from the [downloads page](https://octopus.com/downloads). ## Python Octopus can run Python scripts on Windows targets provided the following criteria are met: - Python version 3.4+ is installed - `Python` is on the path for the user that Tentacle is running as - pip is installed or the pycryptodome python package is installed # Kubernetes Worker Source: https://octopus.com/docs/infrastructure/workers/kubernetes-worker.md The Kubernetes Worker allows worker operations to be executed within a Kubernetes cluster in a scalable manner. 
This ensures that compute resources used during the execution of a Deployment process (or runbook) are released when the Deployment completes.

The Octopus Web Portal provides a wizard which guides you through the construction of a Helm installation command which installs the Kubernetes Worker in your cluster.

Once installed, the Kubernetes Worker functions as a standard Octopus worker:

- It must be included in one or more worker pools
- Supports deployments to any deployment target
- Will be kept up to date via machine health checks & updates
- Can execute operations in custom containers (as defined on the deployment step)

## Default Behavior

The web portal's [installation process](/docs/infrastructure/workers#installing-a-kubernetes-worker) installs a worker which will work for the majority of workloads.

When the Kubernetes Worker executes a deployment step, it executes the operation within a [worker-tools](https://hub.docker.com/r/octopusdeploy/worker-tools) container, meaning sufficient tooling is available for most deployment activities. If a step requires specific tooling, you are able to set the desired container on the step - the Kubernetes Worker honours this setting as per other worker types.

## Customizations

The behavior of the Kubernetes Worker can be modified through [Helm chart](https://github.com/OctopusDeploy/helm-charts/tree/main/charts/kubernetes-agent) `Values`. These values can be set during installation (by editing the Octopus Server supplied command line), or at any time via a Helm upgrade.
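For example, a Helm upgrade that overrides the script pod resource requests might look like the following sketch. The release name, namespace, and chart location are assumptions - reuse the ones from your original installation command:

```
helm upgrade octopus-worker \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent \
  --namespace octopus-worker \
  --reuse-values \
  --set scriptPods.resources.requests.cpu=150m \
  --set scriptPods.resources.requests.memory=256Mi
```

`--reuse-values` keeps the values supplied at installation, so only the requests change.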
Of note:

| Value | Purpose |
| ----------------------------- | ------------------------------------------------------------------------- |
| scriptPods.worker.image | Specifies the Docker container image to be used when running an operation |
| scriptPods.resources.requests | Specifies the average cpu/memory usage required to execute an operation |

If you are experiencing difficulties with your Kubernetes cluster's autoscaling, modifying `scriptPods.resources.requests.*` may provide a solution. If the requests are too low (i.e. lower than your actual CPU usage), the cluster will not provision new nodes when required. If they are too high (i.e. higher than actual usage), the cluster will scale too early, and may leave your script pods pending for longer than necessary.

## Permissions

The Kubernetes Worker is limited to modifying its local namespace, preventing it from polluting the cluster at large. Within that namespace, the Kubernetes Worker is permitted unfettered access, ensuring it is able to update itself and create new pods for each requested operation.

The Kubernetes Worker allows execution permissions to be overwritten in the same way as the [Kubernetes Agent](/docs/kubernetes/targets/kubernetes-agent/permissions).

## Limitations

Being securely hosted inside a Kubernetes cluster comes with some limitations - the primary of which is the lack of `Docker`. This means certain operations which are typically valid may not be possible. Specifically:

- Creating an [inline execution container](/docs/projects/steps/execution-containers-for-workers#inline-execution-containers)
- Fetching Docker images (when used as secondary packages)
- Arbitrary scripts which use Docker

# Local File Storage

Source: https://octopus.com/docs/installation/file-storage/local-storage.md

When you opt to store the binary files on local network storage, there are a few items to keep in mind:

- When Octopus is hosted on Windows it can be a mapped network drive e.g. `X:\` or a UNC path to a file share e.g.
`\\server\share`.
- When Octopus is hosted as a container it must be mounted as a volume.
- The service account that Octopus runs as needs **full control** over the directory.
- Drives are mapped per-user, so you should map the drive using the same service account that Octopus is running under.

Most commercial network storage solutions will work perfectly well with Octopus Deploy. They will use industry-standard mechanisms for data integrity, for example, RAID or ZFS, and have the capability to perform regular backups.

## High Availability

With Octopus Deploy's [High Availability](/docs/administration/high-availability) functionality, you connect multiple nodes to the same database and file storage. Octopus Server makes specific assumptions about the performance and consistency of the file system when accessing log files, performing log retention, storing deployment packages and other deployment artifacts, exported events, and temporary storage when communicating with Tentacles. What that means is:

- Octopus Deploy is sensitive to network latency. It expects the file system to be hosted in the same data center as the virtual machines or container hosts running the Octopus Deploy Service.
- It is extremely rare for two or more nodes to write to the same file at the same time.
- It is common for two or more nodes to read the same file at the same time.

In our experience, things work best when all the nodes and the file system are located in the same data center. Modern network storage devices and operating systems handle almost all the scenarios a highly available instance of Octopus Deploy will encounter.

## Disaster Recovery

For disaster recovery scenarios, [we recommend leveraging a hot/cold configuration](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). The file system should asynchronously copy files to a secondary data center.
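The asynchronous copy can be as simple as a scheduled `rsync`. A sketch; the source path, destination host, and schedule are illustrative:

```
# crontab entry: mirror the Octopus home directory to the secondary
# data center every 15 minutes (source and destination are examples)
*/15 * * * * rsync -az --delete /Octopus/ dr-fileserver:/Octopus/
```

`--delete` keeps the replica an exact mirror, so retention applied on the primary is reflected in the secondary.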
When a disaster occurs, create the nodes in the secondary data center and configure them to use that secondary file storage location. There are many robust syncing solutions, but even simple tools like rsync or robocopy will work. As long as they periodically run, that is all that matters. One common approach we've seen is leveraging a distributed file system (DFS) such as [Microsoft DFS](https://en.wikipedia.org/wiki/Distributed_File_System_(Microsoft)). DFS will work with Octopus Deploy, but it must be configured in a specific way. Unless you configure it properly you'll encounter a [split-brain](https://en.wikipedia.org/wiki/Split-brain_(computing)) scenario. ## DFS :::div{.warning} **DFS in the standard configuration (i.e., accessed through a DFS Namespace Root) is _not_ suitable for use as a shared file store with Octopus Deploy.** Operating Octopus Deploy with the non-recommended DFS configuration will likely result in intermittent and potentially significant issues. ::: Below are recommendations and more details on: - [High Availability](#high-availability) - [Disaster Recovery](#disaster-recovery) - [DFS](#dfs) - [Configuring DFS with a Single Octopus Server](#configuring-dfs-with-a-single-octopus-server) - [Configuring DFS with a Multi-Node Octopus Server cluster (Octopus HA)](#configuring-dfs-with-a-multi-node-octopus-server-cluster-octopus-ha) - [DFS for Redundancy (Disaster Recovery)](#dfs-for-redundancy-disaster-recovery) ### Configuring DFS with a Single Octopus Server For a single-node Octopus Server using DFS for file storage, the node must be **configured to use a specific DFS Replica and not the DFS Namespace Root**. Despite no contention between nodes in the single-node configuration, there is still the DFS location transparency, which will cause unpredictable behavior when the node is directed to a different replica. 
In the diagram, the single node is configured to use the replica `\\SVR_ONE\public` as the DFS file share and not the namespace root (`\\Contoso\public`). :::figure ![A single Octopus Deploy node with DFS shared storage](/docs/img/getting-started/best-practices/images/single-node-od-with-dfs.png) ::: ### Configuring DFS with a Multi-Node Octopus Server cluster (Octopus HA) For a multi-node Octopus cluster using DFS for file storage, it is imperative that **_all_ nodes in the cluster are configured to use the same DFS Replica and not the DFS Namespace Root**. Both using the namespace root or using different replicas for different Octopus nodes will cause unpredictable behavior. In the diagram below each node in the cluster is configured to use the same replica (`\\SVR_ONE\public`) as the DFS file share and not the namespace root (`\\Contoso\public`). :::figure ![A multi-node (HA) Octopus Cluster with DFS shared storage](/docs/img/getting-started/best-practices/images/multi-node-od-with-dfs.png) ::: ### DFS for Redundancy (Disaster Recovery) DFS can still be used for redundancy and disaster recovery, as usual. If the replica that Octopus is configured to use becomes unavailable, simply changing the configuration to another replica in the DFS Namespace with the same target folders is sufficient to restore service. Octopus does not need to be restarted in this scenario. Customers can either do this manually or can automate it. In the simplified diagram below, when an outage at DFS Replica `\\SVR_ONE\Public` occurs, by re-configuring each Octopus node to use a different replica (ensuring all nodes are re-configured to the same replica), customers can still take advantage of the redundancy within DFS. 
:::figure
![Using DFS for redundancy with Octopus Deploy](/docs/img/getting-started/best-practices/images/dfs-for-redundancy.png)
:::

# Configuring Netscaler

Source: https://octopus.com/docs/installation/load-balancers/configuring-netscaler.md

The following script shows how to configure a Netscaler load balancer for use with an Octopus High Availability instance.

```bash
#Servers
add server octopus-node1_SVR 192.168.0.1
add server octopus-node2_SVR 192.168.0.2

#Service Group
add serviceGroup octopusdeploy_GRP HTTP
bind serviceGroup octopusdeploy_GRP octopus-node1_SVR 80
bind serviceGroup octopusdeploy_GRP octopus-node2_SVR 80
bind serviceGroup octopusdeploy_GRP -monitorName ping

#LB
add lb vserver octopusdeploy_LB HTTP 0.0.0.0 0
bind lb vserver octopusdeploy_LB octopusdeploy_GRP

#HTTP CS
add cs vserver octopusdeploy_CS_HTTP HTTP 10.0.0.1 80 -cltTimeout 180 -listenPolicy None
bind cs vserver octopusdeploy_CS_HTTP -lbvserver ssl-only-redirect_LB

#HTTPS CS
add cs vserver octopusdeploy_CS_HTTPS SSL 10.0.0.1 443 -cltTimeout 180 -listenPolicy None
bind cs vserver octopusdeploy_CS_HTTPS -lbvserver octopusdeploy_LB

#Cipher and Cert Bindings
bind ssl vserver octopusdeploy_CS_HTTPS -cipherName DEFAULT_HA_CIPHERS
bind ssl vserver octopusdeploy_CS_HTTPS -certkeyName your-domain.com
```

# Use NGINX as a reverse proxy for Octopus Deploy

Source: https://octopus.com/docs/installation/load-balancers/use-nginx-as-reverse-proxy.md

There are scenarios in which you may be required to run Octopus Deploy behind a reverse proxy, such as compliance with specific organization standards or a need to add custom HTTP headers. This document outlines how to use NGINX as that reverse proxy.

This example assumes:

- NGINX will terminate your SSL connections.
- [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) are not required.
Our starting configuration:

- Octopus Deploy installed and running. For guidance on this topic, see [Installing Octopus](/docs/installation).
- A valid SSL certificate that NGINX recognizes, with a .key file.

At the end of this walk-through, you should be able to:

- Communicate with Octopus Deploy over a secure connection.
- Use NGINX as a load balancer.

Unlike a web server such as Microsoft's Internet Information Services (IIS), NGINX doesn't have a user interface. All configuration in NGINX is done via a configuration file such as the `nginx.conf` file. An SSL certificate doesn't have to be "installed" in a certificate store. The certificate files are placed in a folder, and the configuration file references them. See [NGINX's documentation](https://docs.nginx.com/nginx/admin-guide/) for more details.

## NGINX hosted on a server

Follow these steps if you're running NGINX directly on a server, such as Windows or Linux.

The first step is to copy the SSL certificate to a folder NGINX can access, for example, `/etc/nginx`. This example will use two files, `STAR_octopusdemos.app.pem` and `STAR_octopusdemos.app.key`. The .pem file contains the entire certificate chain.

:::div{.warning}
The certificate file (.crt, .pem, etc.) should contain the entire certificate chain. Failure to do so could cause the browser to reject the certificate.
:::

The next step is to modify the configuration file. The file to edit depends on your NGINX server's use case. Modify the `/etc/nginx/nginx.conf` file if this reverse proxy is the only item hosted by NGINX. Otherwise, modify the appropriate `/etc/nginx/sites-enabled/[Site_name.com.conf]` or `/etc/nginx/conf.d/[Site_Name.com.conf]` file.
Below is an example reverse proxy configuration:

```
upstream octopusdeploy {
    server servername:8080;
}

server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate /etc/nginx/STAR_octopusdemos_app.pem;
    ssl_certificate_key /etc/nginx/STAR_octopusdemos_app.key;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://octopusdeploy;
    }
}
```

### gRPC Communications

Octopus generates a self-signed certificate for gRPC communications. When a gRPC client needs to connect to Octopus via a load balancer, there are two common methods to achieve this.

#### TLS/SSL Bridging

`grpc_ssl_verify off` allows us to use a self-signed certificate outside of the CA chain of trust. The `ssl_certificate` refers to your CA certificate used for the HTTPS configuration.

```
upstream octopusdeploy_grpc {
    server servername:8443;
}

server {
    listen 8443 ssl http2;

    ssl_certificate /etc/nginx/ssl/octopusdeploy.pem;
    ssl_certificate_key /etc/nginx/ssl/octopusdeploy.key;

    location / {
        proxy_set_header Host $host;
        grpc_pass grpcs://octopusdeploy_grpc;
        grpc_ssl_verify off;
    }
}
```

#### TLS/SSL Passthrough

```
stream {
    upstream octopusdeploy_grpc {
        server OctopusServer1:8443;
        server OctopusServer2:8443;
    }

    server {
        listen 8443;
        proxy_pass octopusdeploy_grpc;
    }
}
```

## NGINX hosted in a Docker Container

NGINX 1.19 added support for environment variables. Instead of modifying the `nginx.conf` file, you'll create a `default.conf.template` file. The environment variable is `${OCTOPUS_SERVER}`. That value will be replaced when the Docker container starts up.

```
upstream octopusdeploy {
    server ${OCTOPUS_SERVER};
}

server {
    listen 443 ssl;
    listen 90;
    server_name localhost;

    ssl_certificate /etc/nginx/STAR_octopusdemos_app.pem;
    ssl_certificate_key /etc/nginx/STAR_octopusdemos_app.key;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://octopusdeploy;
    }
}
```

The Dockerfile will copy that template file to `/etc/nginx/templates/default.conf.template` and copy in the certificate and key files.
```
FROM nginx:latest

ENV OCTOPUS_SERVER servername:8080

COPY ./default.conf.template /etc/nginx/templates/default.conf.template
COPY ./STAR_octopusdemos_app.key /etc/nginx/STAR_octopusdemos_app.key
COPY ./STAR_octopusdemos_app.pem /etc/nginx/STAR_octopusdemos_app.pem
```

Build the Docker image like any other Docker image. The `-t` parameter tags the image to make it easier to reference. Replace `octopusbob` with the name of your repository.

```
docker build -t octopusbob/nginx:1.0.0 -t octopusbob/nginx:latest .
```

### Running the NGINX Container

Then you can run the Docker image in a container by running the command:

```
docker run --name octopus-reverse-proxy -p 443:443 -e OCTOPUS_SERVER=servername:8080 octopusbob/nginx:latest
```

### Referencing the NGINX Container in Docker Compose

If you prefer, you can run the image via a docker-compose file.

```
version: '3'
services:
  db:
    image: ${SQL_IMAGE}
    environment:
      SA_PASSWORD: ${SA_PASSWORD}
      ACCEPT_EULA: ${ACCEPT_EULA}
      # Prevent SQL Server from consuming the default of 80% physical memory.
      MSSQL_MEMORY_LIMIT_MB: 4096
    ports:
      - 1401:1433
    healthcheck:
      test: [ "CMD", "/opt/mssql-tools/bin/sqlcmd", "-U", "sa", "-P", "${SA_PASSWORD}", "-Q", "select 1"]
      interval: 10s
      retries: 10
    volumes:
      - ./mssql:/var/opt/mssql/data
  octopus:
    image: octopusdeploy/octopusdeploy:${OCTOPUS_SERVER_TAG}
    privileged: true
    environment:
      ACCEPT_EULA: ${ACCEPT_OCTOPUS_EULA}
      OCTOPUS_SERVER_NODE_NAME: ${OCTOPUS_SERVER_NODE}
      DB_CONNECTION_STRING: ${DB_CONNECTION_STRING}
      MASTER_KEY: ${MASTER_KEY}
    ports:
      - 8080:8080
      - 10943:10943
    depends_on:
      - db
    volumes:
      - ./taskLogs:/taskLogs
      - ./artifacts:/artifacts
      - ./repository:/repository
      - ./eventExports:/eventExports
  nginx:
    image: ${NGINX_IMAGE}
    environment:
      OCTOPUS_SERVER: ${OCTOPUS_SERVER}
    ports:
      - 443:443
    depends_on:
      - db
      - octopus
```

The .env file will look something like this:

```
# It is highly recommended this value is changed as it's the password used for the database user.
SA_PASSWORD=REPLACE ME!

# Tag for the Octopus Server image. See https://hub.docker.com/repository/docker/octopusdeploy/octopusdeploy for the tags.
OCTOPUS_SERVER_TAG=2020.4.0

# Sql Server image. Set this variable to the version you wish to use. Default is to use the latest.
SQL_IMAGE=mcr.microsoft.com/mssql/server

# NGINX Server Image
NGINX_IMAGE=octopusbob/nginx:latest

# Octopus Server Port
OCTOPUS_SERVER=servername:8080

# The default created user username for login to the Octopus Server
ADMIN_USERNAME=USER.NAME!

# It is highly recommended this value is changed as it's the default user password for login to the Octopus Server
ADMIN_PASSWORD=REPLACE ME!

# Email associated with the default created user. If empty will default to octopus@example.local
ADMIN_EMAIL=test@test.com

# Accept the Microsoft Sql Server Eula found here: https://go.microsoft.com/fwlink/?linkid=857698
ACCEPT_EULA=Y

# Use of this Image means you must accept the Octopus Deploy Eula found here: https://octopus.com/company/legal
ACCEPT_OCTOPUS_EULA=Y

# Unique Server Node Name - If left empty will default to the machine Name
OCTOPUS_SERVER_NODE=HANode01

# Database Connection String. If using database in sql server container, it is highly recommended to change the password.
DB_CONNECTION_STRING=Server=db,1433;Database=OctopusDeploy;User=sa;Password=REPLACE ME!

# Your License key for Octopus Deploy. If left empty, it will try and create a free license key for you
OCTOPUS_SERVER_BASE64_LICENSE=CONVERT YOUR LICENSE TO BASE 64

# Octopus Deploy uses a Master Key for encryption of your database. If you're using an external database that's already been setup for Octopus Deploy, you can supply the Master Key to use it. If left blank, a new Master Key will be generated with the database creation.
MASTER_KEY=

# The API Key to set for the administrator. If this is set and no password is provided then a service account user will be created. If this is set and a password is also set then a standard user will be created.
ADMIN_API_KEY=

# Sets the task cap for this node. If not specified the default is 5.
TASK_CAP=20
```

## NGINX as a Load Balancer

NGINX can be used as a load balancer for Octopus Deploy configured for [High Availability](/docs/administration/high-availability). To do so, add all the HA nodes to this section.

```
upstream octopusdeploy {
    server servername:8080;
    server servername02:8080;
}
```

The full file will look like:

```
http {
    upstream octopusdeploy {
        server servername:8080;
        server servername02:8080;
    }

    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate /etc/nginx/STAR_octopusdemos_app.pem;
        ssl_certificate_key /etc/nginx/STAR_octopusdemos_app.key;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://octopusdeploy;
        }
    }
}
```

By default, NGINX uses round-robin. The Octopus Deploy UI is stateless, so round-robin should work without issues. Another option is least connections, where NGINX routes the request to the server with the fewest active connections. See the [NGINX documentation](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/#choosing-a-load-balancing-method) for more details on load balancing.

# Octopus Server Container with Docker Compose

Source: https://octopus.com/docs/installation/octopus-server-linux-container/docker-compose-linux.md

For evaluation purposes you may want to run a stand-alone SQL Server instance alongside the Octopus Server. For this scenario, you can leverage [Docker Compose](https://docs.docker.com/compose/overview/) to spin up and manage a multi-container Docker application as a single unit.
The following example is a simple `docker-compose.yml` file combining a SQL Server instance with a dependent Octopus Server:

```yaml
version: '3'
services:
  db:
    image: ${SQL_IMAGE}
    environment:
      SA_PASSWORD: ${SA_PASSWORD}
      ACCEPT_EULA: ${ACCEPT_EULA}
    ports:
      - 1401:1433
    healthcheck:
      test: [ "CMD", "/opt/mssql-tools/bin/sqlcmd", "-U", "sa", "-P", "${SA_PASSWORD}", "-Q", "select 1"]
      interval: 10s
      retries: 10
    volumes:
      - sqlvolume:/var/opt/mssql
  octopus-server:
    image: octopusdeploy/octopusdeploy:${OCTOPUS_SERVER_TAG}
    privileged: ${PRIVILEGED}
    user: ${CONTAINER_USER}
    environment:
      ACCEPT_EULA: ${ACCEPT_OCTOPUS_EULA}
      OCTOPUS_SERVER_NODE_NAME: ${OCTOPUS_SERVER_NODE_NAME}
      DB_CONNECTION_STRING: ${DB_CONNECTION_STRING}
      ADMIN_USERNAME: ${ADMIN_USERNAME}
      ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      ADMIN_EMAIL: ${ADMIN_EMAIL}
      OCTOPUS_SERVER_BASE64_LICENSE: ${OCTOPUS_SERVER_BASE64_LICENSE}
      MASTER_KEY: ${MASTER_KEY}
      ADMIN_API_KEY: ${ADMIN_API_KEY}
      DISABLE_DIND: ${DISABLE_DIND}
      TASK_CAP: ${TASK_CAP}
    ports:
      - 8080:8080
      - 11111:10943
      - 8443:8443
    depends_on:
      - db
    volumes:
      - repository:/repository
      - artifacts:/artifacts
      - taskLogs:/taskLogs
      - cache:/cache
      - import:/import
      - eventExports:/eventExports
volumes:
  repository:
  artifacts:
  taskLogs:
  cache:
  import:
  eventExports:
  sqlvolume:
```

We will provide some of the environment variables to run this container with an additional `.env` file:

```
# Define the password for the SQL database. This also must be set in the DB_CONNECTION_STRING value.
SA_PASSWORD=

# Tag for the Octopus Deploy Server image. Use "latest" to pull the latest image or specify a specific tag
OCTOPUS_SERVER_TAG=latest

# Sql Server image. Set this variable to the version you wish to use. Default is to use the latest.
SQL_IMAGE=mcr.microsoft.com/mssql/server

# The default created user username for login to the Octopus Server
ADMIN_USERNAME=

# It is highly recommended this value is changed as it's the default user password for login to the Octopus Server
ADMIN_PASSWORD=

# Email associated with the default created user. If empty will default to octopus@example.local
ADMIN_EMAIL=

# Accept the Microsoft Sql Server Eula found here: https://go.microsoft.com/fwlink/?linkid=857698
ACCEPT_EULA=Y

# Use of this Image means you must accept the Octopus Deploy Eula found here: https://octopus.com/company/legal
ACCEPT_OCTOPUS_EULA=Y

# Unique Server Node Name - If left empty will default to the machine Name
OCTOPUS_SERVER_NODE_NAME=

# Database Connection String. If using database in sql server container, it is highly recommended to change the password.
DB_CONNECTION_STRING=Server=db,1433;Database=OctopusDeploy;User=sa;Password=THE_SA_PASSWORD_DEFINED_ABOVE

# Your License key for Octopus Deploy. If left empty, it will try and create a free license key for you
OCTOPUS_SERVER_BASE64_LICENSE=

# Octopus Deploy uses a master key for encryption of your database. If you're using an external database that's already been setup for Octopus Deploy,
# you can supply the master key to use it.
# If left blank, a new master key will be generated with the database creation.
# Create a new master key with the command: openssl rand 16 | base64
MASTER_KEY=

# The API Key to set for the administrator. If this is set and no password is provided then a service account user will be created.
# If this is set and a password is also set then a standard user will be created.
# ADMIN_API_KEY=

# Docker-In-Docker is used to support worker container images. It can be disabled by setting DISABLE_DIND to Y.
# The container only requires the privileged setting if DISABLE_DIND is set to N.
DISABLE_DIND=Y
PRIVILEGED=false

# Octopus can be run either as the user root or as octopus.
CONTAINER_USER=octopus

# Sets the task cap for this node. If not specified the default is 5.
TASK_CAP=20
```

You will have to supply your own values for `SA_PASSWORD`, `ADMIN_USERNAME`, and `ADMIN_PASSWORD`. It is also highly recommended that you create a new master key with the command `openssl rand 16 | base64` and supply it through the `MASTER_KEY` property before you boot Octopus for the first time. If a master key is not supplied, Octopus will generate one, and the generated value must be saved in the `MASTER_KEY` property when the Octopus container is restarted.

Start both containers by running:

```
docker-compose --project-name octopus up -d
```

When both containers are healthy, you can browse directly to `http://localhost:8080` from your host machine.

## Trusting custom/internal Certificate Authority (CA)

Octopus Server can interface with several external sources (feeds, git repos, etc.), and those sources are often configured to use SSL/TLS for secure communication. It is common for organizations to have their own Certificate Authority (CA) servers for their internal networks. A CA server can issue SSL certificates for internal resources, such as build servers or internally hosted applications, without purchasing from a third-party vendor. Technologies such as Group Policy Objects (GPO) can configure machines (servers and clients) to trust the CA automatically, so users don't have to configure trust for them manually. However, this is not inherited in containers.

When attempting to configure a connection to an external resource with an untrusted CA, you'll most likely encounter an error similar to this:

```
Could not connect to the package feed. The SSL connection could not be established, see inner exception. The remote certificate is invalid because of errors in the certificate chain: UntrustedRoot
```

The recommended approach is to add the certificate to the Docker host, such as `/etc/ssl/certs`, and mount a volume to it inside the container.
To do this, add a `volumes` section to the `octopus-server` container just after the `image` component:

```yaml
octopus-server:
  container_name: octopus-server
  image: octopusdeploy/octopusdeploy
  volumes:
    - /etc/ssl/certs:/etc/ssl/certs
```

## Upgrade with Docker Compose

If you have used the default image tag of `latest`, you can run `docker-compose pull` to download the most recent version of the image. Alternatively you can specify a fixed image tag via the `OCTOPUS_SERVER_TAG` property, and update the value as new images are released. The new Octopus container will mount the files persisted in the Docker volumes, and update the database as needed.

For further information about the additional configuration of the SQL Server container consult the appropriate [Docker Hub repository information](https://hub.docker.com/_/microsoft-mssql-server) pages. It is generally advised, however, not to run SQL Server inside a container for production purposes.

## Learn more

- [Docker blog posts](https://octopus.com/blog/tag/docker/1)
- [Linux blog posts](https://octopus.com/blog/tag/linux/1)

# Migrate to Octopus Server Linux Container from Windows Server

Source: https://octopus.com/docs/installation/octopus-server-linux-container/migration/migrate-to-server-container-linux-from-windows-server.md

This guide will help you migrate an instance of Octopus hosted on a Windows Server to the Octopus Server Linux Container.

## Running the Octopus Server Linux Container

We are confident in the Octopus Linux Docker image's reliability and performance. [Octopus Cloud](/docs/octopus-cloud) runs the Octopus Server Linux Container in AKS clusters in Azure. But to use the Octopus Server Linux Container in Octopus Cloud, we had to make some design decisions and level up our knowledge about Docker concepts.
We recommend migrating from Windows to the Octopus Server Linux Container if you are okay with **all** these conditions:

- You are familiar with Docker concepts, specifically around debugging containers, volume mounting, and networking.
- You are comfortable with one of the underlying hosting technologies for Docker containers: Kubernetes, ACS, ECS, AKS, EKS, or Docker itself.
- You understand that Octopus Deploy is a stateful application, not a stateless one, and requires additional monitoring.

## Differences between Windows Server and Linux Containers

The differences between running Octopus Server on Windows Server and Linux Containers are as follows:

- **Folder Paths:** Windows uses a folder structure with `\` separators, for example, `C:\Octopus\TaskLogs`. Linux Containers follow a Linux folder structure with `/` separators.
- **Pre-installed software:** Linux Containers typically include PowerShell Core and Bash but not .NET. You cannot pre-install any other software on the Octopus Linux Container.
- **Software support:** The Linux Container doesn't support running F# scripts directly on the server.
- **Authentication:** The Octopus Server Linux Container doesn't support Active Directory authentication. If you want to use Active Directory, you must connect to it via the [LDAP authentication provider](/docs/security/authentication/ldap).

:::div{.hint}
The LDAP authentication provider was introduced in Octopus Deploy **2021.2**.
:::

## Prep Work

We recommend making the following changes and testing them on your existing Octopus Deploy instance before the move. This prep work will keep the number of changes made during the actual migration low.

### Migrate from Active Directory to LDAP

Migrating from Active Directory to LDAP is not as simple as turning off Active Directory authentication and enabling LDAP authentication. As far as Octopus is concerned, they are two separate auth providers.
Having Active Directory and LDAP enabled is treated the same as having Google Auth and LDAP enabled. Both users and teams are associated with 0 to N external identities. The external identities are stored in an array on the user or team object. For example, a user object with both Active Directory and LDAP could appear as: ```json { "Id": "Users-1", "Username": "professor.octopus", "DisplayName": "Professor Octopus", "IsActive": true, "IsService": false, "EmailAddress": "professor.octopus@octopus.com", "CanPasswordBeEdited": true, "IsRequestor": true, "Identities": [ { "IdentityProviderName": "Active Directory", "Claims": { "email": { "Value": "", "IsIdentifyingClaim": true }, "upn": { "Value": "professor.octopus@mycustomdomain.local", "IsIdentifyingClaim": true }, "sam": { "Value": "\\professor.octopus", "IsIdentifyingClaim": true }, "dn": { "Value": "Professor Octopus", "IsIdentifyingClaim": false } } }, { "IdentityProviderName": "LDAP", "Claims": { "email": { "Value": null, "IsIdentifyingClaim": true }, "upn": { "Value": "professor.octopus@mycustomdomain.local", "IsIdentifyingClaim": true }, "uan": { "Value": "professor.octopus", "IsIdentifyingClaim": true }, "dn": { "Value": "Professor Octopus", "IsIdentifyingClaim": false } } } ] } ``` To migrate from Active Directory to LDAP, you will need to: 1. Enable and configure the [LDAP auth provider](/docs/security/authentication/ldap). 2. Add the LDAP auth provider to each user and group. We created two scripts to help speed that up: - [Swap Active Directory groups with matching LDAP groups](/docs/octopus-rest-api/examples/users-and-teams/swap-ad-domain-group-with-ldap-group) for Octopus teams. - [Swap Active Directory login records with matching LDAP ones](/docs/octopus-rest-api/examples/users-and-teams/swap-users-ad-domain-to-ldap) for Octopus users. 3. Log out with your current user and log back in, ideally with a new test user. 4. Verify permissions are as expected. 5. Test a few more users out. 6. 
Disable the Active Directory auth provider.

### Configure a Windows Worker

If you currently have many PowerShell and C# script steps configured to run on the Octopus Server, you will need to configure a Windows Worker to handle that responsibility.

Under the covers, the Octopus Server includes a [built-in worker](/docs/security/built-in-worker). When you configure a step to run on the Octopus Server, it runs on the built-in worker. Switching from Windows to the Linux Container means changing the underlying OS those steps previously ran on. If your scripts are not PowerShell Core compatible, they will fail. The vast majority of scripts we encounter work with both PowerShell 5.1 and PowerShell Core. However, if you have a lot of older scripts, there is a chance they could fail.

Instead of running directly on the Octopus Server's built-in worker, you will need to offload that work onto Windows [workers](/docs/infrastructure/workers). When you create your first worker, you will notice a pre-existing worker pool, `Default Worker Pool`. When the `Default Worker Pool` does not have any workers, all tasks configured to run on the Octopus Server run on the built-in worker.

The fastest way to change all the steps configured to run on the Octopus Server to run on a worker is to add a worker to the `Default Worker Pool`. However, doing so is also the riskiest option, as it can cause a lot of deployments to fail. Our recommendation is to keep that risk to a minimum.

1. Create a new worker pool, `Windows Worker Pool`.
1. Create the new Windows Servers and configure them as workers. Register them to the `Windows Worker Pool`.
1. Pick a handful of projects and update the deployment process to use the new `Windows Worker Pool`.
1. Create some test releases and deployments to ensure the new Windows Workers are working correctly.
1. Assuming the testing is successful, you can add those workers to the `Default Worker Pool` or update the remaining steps.
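Updating the remaining steps can be scripted against the Octopus REST API. The following is an illustrative sketch only (it is not one of the scripts linked earlier): it walks a simplified deployment-process document and assigns a worker pool ID to any action that has none, which is how steps otherwise fall back to the `Default Worker Pool`. The function name and the example pool IDs are hypothetical; a real script would GET the deployment-process resource, apply a change like this, and PUT it back.

```python
def assign_worker_pool(process: dict, pool_id: str) -> int:
    """Assign pool_id to every action with no worker pool set.

    `process` mimics a simplified Octopus deployment-process resource:
    {"Steps": [{"Actions": [{"Name": ..., "WorkerPoolId": ...}]}]}.
    Returns the number of actions changed. Sketch only; no API calls.
    """
    changed = 0
    for step in process.get("Steps", []):
        for action in step.get("Actions", []):
            # None or "" means the step falls back to the default pool.
            if not action.get("WorkerPoolId"):
                action["WorkerPoolId"] = pool_id
                changed += 1
    return changed

# Example with an in-memory document (pool IDs are illustrative):
process = {
    "Steps": [
        {"Actions": [{"Name": "Run a Script", "WorkerPoolId": ""}]},
        {"Actions": [{"Name": "Deploy", "WorkerPoolId": "WorkerPools-2"}]},
    ]
}
count = assign_worker_pool(process, "WorkerPools-windows")
```

Only actions without a pool are touched, so steps you already moved to the `Windows Worker Pool` are left alone.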
### Copy Files Octopus Deploy stores all the BLOB data (deployment logs, runbook logs, packages, artifacts, event exports etc.) on a file share. If you are moving from a single server, be it hosting Octopus in a Windows Container or directly on a Windows VM, you will need to copy files to your new storage provider. Once your shared storage provider has been created, you'll want to copy files over from these folders: - TaskLogs - Artifacts - Packages - EventExports If you are moving from a Windows VM, the default path for those folders is: `C:\Octopus`. For example, the task logs folder would be `C:\Octopus\TaskLogs`. If you are unsure of the path, you can find it in the Octopus Deploy UI by navigating to **Configuration ➜ Settings ➜ Server Folders**. :::div{.warning} Failure to copy files over to the new storage location for the Linux Container to access will result in the following: - Existing deployment and runbook run screens will be empty. - Project, Step Template, and Tenant images will not appear. - Attempting to download any existing artifacts will fail. - If you are using the built-in repository, any existing deployments that use packages hosted there will fail as they won't be able to access them. ::: ### Polling Tentacles Polling Tentacles are designed to handle connection interruptions. For example, when the Octopus Server is restarted. When the Octopus Server comes back online, any running Polling Tentacles will re-connect. If you are currently using Polling Tentacles, you will need to ensure: 1. The same server URL will be used after the move. 1. You enable the communication port used (default: `10943`) on the Octopus Server Linux Container. If you wish to use a new URL, you will need to run this script on each machine hosting the polling tentacles. Replace the server and API key with values specific to your instance. 
Windows:

```
C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://your-octopus-url --apikey=API-YOUR-KEY --server-comms-port=10943
```

Linux:

```
/opt/octopus/tentacle/Tentacle poll-server --server=https://your-octopus-url --apikey=API-YOUR-KEY --server-comms-port=10943
```

## Folder paths

The Dockerfile runs the Octopus Server installer each time the Octopus Server Windows Container or Octopus Server Linux Container starts up. The installer runs a series of commands to configure Octopus Deploy. The installer will run the [path](/docs/octopus-rest-api/octopus.server.exe-command-line/path) command to update the paths to leverage the different folder structure. For example:

```
./Octopus.Server path --instance OctopusServer --nugetRepository "/repository" --artifacts "/artifacts" --taskLogs "/taskLogs" --eventExports "/eventExports" --cacheDirectory="/cache" --skipDatabaseCompatibilityCheck --skipDatabaseSchemaUpgradeCheck
```

Just like the Octopus Server Windows Container, you will want to provide the following volume mounts.

| Name | Description |
| ------------- | ------- |
|**/repository**|Package path for the built-in package repository|
|**/artifacts**|Path where artifacts are stored|
|**/taskLogs**|Path where task logs are stored|
|**/cache**|Path where cached files are stored|
|**/eventExports**|Path where event audit logs are exported|

If you are running Octopus Server directly on Docker, read the Docker [docs](https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only) about mounting volumes. You will need to update your Docker compose or Docker run command to point your existing folders to the new volume mounts. If you are running Octopus Server on Kubernetes, you will want to configure [persistent volume mounts](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
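In Docker Compose, the volume mounts in the table above can be wired up in the service definition. A minimal sketch, assuming host folders under `/opt/octopus` (the host-side paths are illustrative; substitute the locations you copied your files to):

```yaml
octopus-server:
  container_name: octopus-server
  image: octopusdeploy/octopusdeploy
  volumes:
    - /opt/octopus/repository:/repository
    - /opt/octopus/artifacts:/artifacts
    - /opt/octopus/taskLogs:/taskLogs
    - /opt/octopus/cache:/cache
    - /opt/octopus/eventExports:/eventExports
```

The left-hand side of each mapping is the folder on the Docker host; the right-hand side is the container path the installer configures via the `path` command.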
:::div{.hint}
Due to how paths are stored, you cannot run an Octopus Server Windows Container and Octopus Server Linux Container simultaneously. It has to be all Windows or all Linux.
:::

## Database connection string and master key

Just as it is with Octopus Server running on Windows (VM or Container), you will need to provide the database connection string and master key to the Octopus Server Linux Container. The underlying database technology Octopus Deploy relies upon, SQL Server, has not changed. The connection string format is the same, so you shouldn't need to change anything.

## Server thumbprint

The certificate backing the server thumbprint is stored in the database. Any tentacles that trust your existing server thumbprint will continue to work as-is when you move to the Octopus Server Linux Container.

## Outage window

Migrating to the Octopus Server Linux Container will require an outage window. The steps to perform during the outage window are:

1. Back up the master key.
1. Enable [Maintenance Mode](/docs/administration/managing-infrastructure/maintenance-mode) to prevent anyone from deploying or making changes during the transition.
1. Shut down the existing Octopus Deploy instance.
1. Perform a final file copy to pick up any new files.
1. Start up the Octopus Server Linux Container.
1. Perform some test deployments, and verify you can view pre-existing deployment logs and runbook runs. Verify all images appear.
1. Update any Octopus Server DNS entries.
1. Disable Maintenance Mode.

## Further Reading

This guide is meant to address the differences you may encounter when switching from running Octopus Server on Windows to the Octopus Server Linux Container. For a deeper dive into how to run the Octopus Server Linux Container, please refer to [this documentation](/docs/installation/octopus-server-linux-container).
# Self-Managed SQL Server

Source: https://octopus.com/docs/installation/sql-database/self-managed-sql-server.md

Each Octopus Server node stores project, environment and deployment-related data in a Microsoft SQL Server Database. While it is possible to have Octopus Deploy connect to SQL Server Express running on the same host, it is not something we recommend. If you plan to host the SQL Database on a self-managed SQL Server, we recommend using a SQL Server that is managed by DBAs.

:::div{.hint}
This document applies to any self-managed SQL Server, regardless of where it is hosted, be it on physical machines in a self-managed data center or on virtual machines in a cloud provider.
:::

## Creating the database

The Octopus [installation](/docs/installation/) wizard can create the database for you during the installation (our recommended method); however, you can also point Octopus to an existing database. Octopus works with both local and remote database servers, but it is worth considering the [performance implications](/docs/administration/managing-infrastructure/performance) before making a decision. If you are using a hosted database service, you will need to [create your own database](#create-your-own) and provide Octopus with the connection details.

## Create your own database \{#create-your-own}

If you don't want Octopus to automatically create the database for you as part of the installation process, please note the following:

1. You must not share the database with any other application.
1. The default schema must be **dbo**.
1. The database must use a **case-insensitive collation** (a collation with a name containing "\_CI\_").
1. If you are using **Integrated Authentication** to connect to your database:
   - The user account installing Octopus must be a member of the **db\_owner** role for that database.
- The account the Octopus Deploy Windows Server process runs under (by default, the `Local System` account) must be a member of the **db\_owner** role for that database. 2. If you are using **SQL Authentication** to connect to your database, the SQL user account defined in your connection string must be a member of the **db\_owner** role for that database. ## Changing the database collation Learn more about [changing the database collation](/docs/administration/data/octopus-database/changing-the-collation-of-the-octopus-database) after the initial Octopus installation. ## Database administration and maintenance For more information about maintaining your Octopus database, please read our [database administrators guide](/docs/administration/data/octopus-database). ## High Availability The database is a critical component of Octopus Deploy. If the database is lost or destroyed all your configuration will be lost with it. We highly recommend leveraging a combination of backups and SQL Server's high availability functionality. How the database is made highly available is really up to you; to Octopus, it's just a connection string. We are not experts on SQL Server high availability, so if you have an on-site DBA team, we recommend using them. There are many [options for high availability with SQL Server](https://msdn.microsoft.com/en-us/library/ms190202.aspx), and [Brent Ozar also has a fantastic set of resources on SQL Server Failover Clustering](http://www.brentozar.com/sql/sql-server-failover-cluster/) if you are looking for an introduction and practical guide to setting it up. 
Octopus High Availability works with:

- [SQL Server Failover Clusters](https://docs.microsoft.com/en-us/sql/sql-server/failover-clusters/high-availability-solutions-sql-server)
- [SQL Server Always-On Availability Groups](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server)

Make sure the Octopus Server is connecting to the listener, which will route database requests to the active SQL Server node and allow for automatic failover. Learn about [connecting to listeners and handling failover](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/listeners-client-connectivity-application-failover).

A typical connection string for using a SQL Server AlwaysOn availability group looks like this:

```
Server=tcp:AGListener,1433;Database=Octopus;Integrated Security=SSPI;MultiSubnetFailover=True
```

Since each of the Octopus Server nodes will need access to the database, we recommend creating a special user account in Active Directory with **db\_owner** permission on the Octopus database and using that account as the service account when configuring Octopus.

:::div{.warning}
Octopus High Availability does not support Database Mirroring. [More information](/docs/administration/data/octopus-database/#highavailability)
:::

## Disaster Recovery

With Octopus Deploy's [High Availability](/docs/administration/high-availability) functionality, you connect multiple nodes to the same database and file storage. Octopus Server makes specific assumptions about the latency performance of the database. If you combine that functionality with SQL Server Always-On Availability groups, it seems like a perfect use case for hot/hot. However, due to latency concerns for both the database and file system, [we recommend leveraging a hot/cold configuration](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr) for the nodes.
The database can be configured in either a hot/cold or hot/warm configuration.

### Hot/cold

For hot/cold, you are limited to a single option, [Database Backups](https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/backup-overview-sql-server). After the backup is complete, copy it to the secondary data center.

:::div{.warning}
When a disaster occurs, any data modified since the last backup will be lost. If you are backing up every 15 minutes, that means you can lose up to 15 minutes of work.
:::

When a disaster occurs, you restore the Octopus Deploy database from the most recent backup. Depending on the size of the database, this can be accomplished in as little as a few minutes. However, you'll encounter challenges when you fail back to the primary data center, as you'll need to take a backup of the database in the secondary data center and overwrite what is in the primary data center. Failure to do so will result in a [split-brain scenario](https://en.wikipedia.org/wiki/Split-brain_(computing)).

### Hot/warm

You have a couple of options for a hot/warm configuration with SQL Server.

- [Transaction Log Shipping](https://learn.microsoft.com/en-us/sql/database-engine/log-shipping/about-log-shipping-sql-server)
- Always On High Availability Node [configured for asynchronous-commit](https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/availability-modes-always-on-availability-groups?view=sql-server-ver16#AsyncCommitAvMode).

:::div{.warning}
When a disaster occurs, any data not synchronized will be lost. Depending on the connection speed, this could be up to a couple of minutes.
:::

Fundamentally, both options are the same. They asynchronously transfer database transactions to a secondary data center. When a disaster occurs, you perform the necessary steps as detailed by Microsoft to make the secondary database the primary. There are pros and cons to either approach, and there might be additional licensing costs or limits.
Our recommendation is to consult your DBA on which option they prefer.

# First Kubernetes deployment (2024.2 and below)

Source: https://octopus.com/docs/kubernetes/tutorials/legacy-guide.md

👋 Welcome to Octopus Deploy! This tutorial will help you complete your first deployment to Kubernetes with Octopus Deploy. We’ll walk you through the steps to deploy YAML files to your Kubernetes cluster.

:::div{.hint}
If you’re using **Octopus 2024.3** or newer, please refer to the updated [Kubernetes First deployment](https://octopus.com/docs/kubernetes/tutorials) guide.
:::

## Before you start

To follow this tutorial, you need:

* [Octopus Cloud instance](https://octopus.com/free-signup)
* Kubernetes cluster
* [Docker Hub account](https://hub.docker.com/)
* [GitHub account](https://github.com/)

#### GitHub repository

To start quickly, you can fork our sample GitHub repository, which includes pre-created YAML files. Follow the steps below to fork the repository:

1. Navigate to the **[OctoPetShop](https://github.com/OctopusSamples/OctoPetShop.git)** repository.

:::figure
![Sample OctoPetShop GitHub repository](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/octopetshop-repo.png)
:::

2. In the top-right corner of the page, click **FORK**.
3. Provide an **Owner and repository name**, for example `OctoPetShop`.
4. Keep the **Copy the master branch only** checkbox selected.
5. Click **CREATE FORK**.
6. Wait for the process to complete (this should only take a few seconds).

Now you're ready. Let’s begin deploying your first application to Kubernetes.

## Log in to Octopus

1. Log in to your Octopus instance and click **GET STARTED**.

:::figure
![Get started welcome screen](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/get-started.png)
:::

## Add project

Projects let you manage software applications and services, each with its own deployment process.

2. Give your project a descriptive name and click **SAVE**.
:::figure ![Octopus Deploy 'Add New Project' form with fields for project details.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/new-project.png) ::: ## Add environments You'll need an environment to deploy to. Environments are how you organize your infrastructure into groups representing the different stages of your deployment pipeline. For example, Dev, Test, and Production. 3. Select the environments you’d like to create and click **SAVE**. :::figure ![Environment selection options and deployment lifecycle visuals](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/select-environments.png) ::: ## Project questionnaire (optional) You have the option to fill out a short survey. This helps our team learn about the technologies our customers are using, which guides the future direction of Octopus. It should only take about 30 seconds to complete. 4. Click **SUBMIT**, and you'll be taken to your project. :::figure ![Octopus Deploy interface displaying a questionnaire](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/survey.png) ::: ## Create deployment process The next step is creating your deployment process. This is where you define the steps that Octopus uses to deploy your software. 1. Click **CREATE PROCESS** to see the available deployment steps. :::figure ![Deployment process page with a button to create the process.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/create-process.png) ::: ### Configure Deploy Kubernetes YAML step 2. Select the **Kubernetes** filter and then add the **Deploy Kubernetes YAML** step. :::figure ![Kubernetes steps in the Octopus Deploy process editor.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/kubernetes-step.png) ::: #### Step name You can leave this as the *default Deploy Kubernetes YAML*. #### Execution location This step will run once on a worker on behalf of each deployment target. 
Workers are machines that can execute tasks that don’t need to be run on the Octopus Server or individual deployment targets. You’ll learn more about deployment targets later in this tutorial. #### Worker Pool Worker Pools are groups of Workers. When a task is assigned to a Worker, the task will be executed by one of the Workers in the pools you’ve configured. 3. Select **Runs on a worker from a specific pool**. 4. Select **Hosted Ubuntu** from the dropdown menu. :::figure ![Worker Pool expander with 'Hosted Ubuntu' selected.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/worker-pool.png) ::: #### Target tags \{#on-behalf-of} [Target tags](/docs/infrastructure/deployment-targets/target-tags) (formerly target roles) select specific deployment targets in an environment. This step will run on all deployment targets with the tags you specify in this field. 5. Add a new target tag by typing it into the field. For this example, we'll use `k8s`. :::figure ![Target tag selection expander with 'k8s' tag currently added.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/on-behalf-of.png) ::: After configuring your deployment process, you’ll assign deployment targets to this target tag. #### Container image Next, you configure this step to run inside an execution container. 6. Select **Runs inside a container, on a worker**. :::figure ![Container image expander with 'Runs inside a container, on a worker selected'.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/container-image.png) ::: ### Add container image registry feed For a step running on a Worker, you can select a Docker image to execute the step inside of. Since you don’t have a Docker Container Registry available yet, you need to add one by following the steps below: 1. Click the **External Feeds** link (this will open a new window). 1. Click the **ADD FEED** button and select **Docker Container Registry** from the **Feed Type** dropdown. 
:::figure ![Library section in Octopus with options to add external feeds.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/external-feeds.png) ::: 1. Provide a name for your feed, for example `Docker Hub`. 1. Enter the feed URL to the public Docker Hub registry, for example `https://index.docker.io`. 1. You can leave the registry path blank for this example. :::figure ![Form to create a Docker container registry external feed.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/create-docker-feed.png) ::: 1. Provide your credentials for Docker Hub. 1. Click **SAVE AND TEST**, and then type `nginx` into the package name field to test your external feed. :::figure ![A search interface in Octopus to test the Docker Hub repository.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/test-docker-feed.png) ::: Close the window and return to configuring the **Deploy Kubernetes YAML** step. #### Container image 7. Click **REFRESH** and select **Docker Hub** as your Container Registry. 1. Copy the latest **Ubuntu-based image** from the help text and paste it into the container image field. :::figure ![Container image expander using the latest Ubuntu-based image.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/container-image-docker.png) ::: #### YAML source This step lets you get your YAML from 3 different sources: * Git repository (default) * Package * Inline script Sourcing from a Git repository can streamline your deployment process by reducing the steps required to get your YAML into Octopus. 9. Select **Git Repository** as your YAML source. :::figure ![YAML source expander with Git repository selected](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/git-repository.png) ::: #### Git repository details 10. Select **Library** and add a new Git credential by clicking the **+** icon. 1. Click the **ADD GIT CREDENTIAL** button. 1. 
Enter a name for your Git credential. 1. Provide your GitHub username. :::figure ![A section in the library interface that lets users create and manage Git credentials.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/git-credential.png) ::: ### Generate GitHub personal access token Github.com now requires token-based authentication (this excludes GitHub Enterprise Server). Create a personal access token following the steps below or learn more in the [GitHub documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens). 1. Navigate to [github.com](https://github.com) and log in to your account. 1. Click your profile picture in the top right corner. 1. Click **SETTINGS**. 1. Scroll down to the bottom of the page and click **DEVELOPER SETTINGS**. 1. Under **Personal access tokens**, click **FINE-GRAINED TOKENS**. 1. Click **GENERATE NEW TOKEN**. 1. Under **Token name**, enter a name for the token. 1. Under **Expiration**, provide an expiration for the token. 1. Select a Resource Owner. 1. Under **Repository Access**, choose **Only select repositories** and select the **OctoPetShop** repository from the dropdown. 1. Click **REPOSITORY PERMISSIONS**, scroll down to **Contents** and select **Read-only**. 1. Scroll down to the **Overview**, and you should have 2 permissions for one of your repositories (contents and metadata). 1. Click **Generate token** and copy the token. :::figure ![A GitHub settings page where users can manage permissions for fine-grained tokens.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/generate-token.png) ::: #### Git repository details 14. Paste the token into Octopus's personal access token field. 1. **Save** your Git credential and return to the **Deploy Kubernetes YAML** step. 1. Click the refresh icon next to the **Select Git credential** dropdown. 1. Select the Git credential you created earlier. 
:::figure ![Authentication expander with a Git repository selected from the library.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/completed-git-credential.png) ::: #### Repository URL 18. Enter the full URL to the Git repository where you store the YAML files you want to deploy, for example `https://github.com/your-user/OctoPetShop.git`. :::figure ![Repository URL expander where the user's YAML files are stored.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/repository-url.png) ::: #### Branch settings 19. Provide the default branch you want to use, for example **master** if you’re using the sample repo. #### Paths 20. Enter the relative path(s) to the YAML files you want to deploy to your cluster. If you’re using the sample repo, the path will be `k8s/*.yaml`. :::figure ![The Paths expander that lets users specify the paths to their YAML files using glob patterns.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/paths.png) ::: #### Kubernetes object status check This feature gives you live status updates during deployment for all the Kubernetes objects you're deploying. 21. Keep the default **Check that Kubernetes objects are running successfully** option selected with the default timeout of **180** seconds. :::figure ![Kubernetes object status check expander with the default option and timeout selected.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/k8s-object-status-check.png) ::: #### Structured configuration variables This is an advanced feature that you can skip for this tutorial. Learn more about [structured configuration variables in our docs](https://octopus.com/docs/projects/steps/configuration-features/structured-configuration-variables-feature). #### Referenced packages This is an advanced feature that you can skip for this tutorial. 
Learn more about [referenced packages in our docs](https://octopus.com/docs/deployments/custom-scripts/run-a-script-step#referencing-packages).

#### Namespace

22. Specify the namespace in the cluster where you want to deploy your YAML files, for example `demo-namespace`. If the namespace doesn’t exist yet, Octopus will create it during the deployment.

#### Conditions

You can set [conditions](https://octopus.com/docs/projects/steps/conditions) for greater control over how each step in your deployment process gets executed. You can skip all the fields under this section for your first deployment.

**Save** your step and then move on to the next section to add your Kubernetes deployment target.

## Add a deployment target

With Octopus Deploy, you can deploy software to:

* Kubernetes clusters
* Microsoft Azure
* AWS
* Cloud regions
* Windows servers
* Linux servers
* Offline package drops

Wherever you’re deploying your software, these machines and services are known as your deployment targets.

1. Navigate to **Infrastructure** ➜ **Deployment Targets**, and click **ADD DEPLOYMENT TARGET**.

:::figure
![Deployment targets page with no targets added.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/deployment-targets.png)
:::

2. Select **KUBERNETES CLUSTER** and click **ADD** on the Kubernetes Cluster card.

:::figure
![A list of deployment target types with the Kubernetes cluster selected.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/add-k8s-target.png)
:::

#### Display name

3. Enter `k8s-demo` in the **Display Name** field.

#### Environments

4. Select **Development**, **Staging**, and **Production** from the dropdown list.

#### Target tags \{#target-roles}

5. Type in the same [target tag](/docs/infrastructure/deployment-targets/target-tags) you provided while configuring the **Deploy Kubernetes YAML** step, for example `k8s`.
The target tag won’t be available to select from the dropdown list yet, because it gets created during this step. :::figure ![User interface for setting up a Kubernetes Cluster deployment target.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/create-k8s-cluster.png) ::: #### Authentication Octopus provides multiple methods for authenticating your Kubernetes cluster depending on your setup, including:

| **Service** | **Octopus Authentication Method** | **Notes** |
|-------------|-----------------------------------|-----------|
| AKS | [Azure Service Principal](https://octopus.com/docs/infrastructure/accounts/azure) | The Azure Service Principal is only used with AKS clusters. To log into ACS or ACS-Engine clusters, you must use standard Kubernetes credentials like certificates or service account tokens.<br>Learn more in the [Azure docs](https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-identity). |
| GKE | [Google Cloud Account](https://octopus.com/docs/infrastructure/accounts/google-cloud) | When using a GKE cluster, Google Cloud accounts let you authenticate using a Google Cloud IAM service account.<br>Learn more in the [GKE docs](https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication). |
| EKS | [AWS Account](https://octopus.com/docs/infrastructure/accounts/aws) | When using an EKS cluster, AWS accounts let you use IAM accounts and roles.<br>Learn more in the [AWS docs](https://docs.aws.amazon.com/eks/latest/userguide/cluster-auth.html). |
| Other | [Tokens](https://octopus.com/docs/infrastructure/accounts/tokens)<br>[Username and password](https://octopus.com/docs/infrastructure/accounts/username-and-password)<br>[Client certificate](https://octopus.com/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api#add-a-kubernetes-target) | Learn more in the [Kubernetes cluster docs](https://octopus.com/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api#add-a-kubernetes-target). |

The exact steps depend on your specific situation, but here are brief instructions on how to configure your cluster authentication in Octopus: 1. Select the appropriate authentication method from the list. :::figure ![Authentication methods for a Kubernetes Cluster deployment with various account options.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/target-authentication-methods.png) ::: 2. Add a new account with the authentication details needed to access your cluster (more detailed instructions are linked in the table above). :::figure ![Create Account page with form in Octopus Deploy.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/create-account.png) ::: 3. Complete the target authentication configuration fields like cluster name, resource group, etc. :::figure ![Kubernetes authentication details, including Azure Service Principal and cluster information.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/target-authentication.png) ::: Need more details on how to configure various authentication methods? Read the [Kubernetes cluster docs](https://octopus.com/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api#add-a-kubernetes-target). #### Kubernetes namespace 6. Specify the namespace for this deployment target, for example `default`. #### Worker Pool 7. Select **Hosted Ubuntu** as the default Worker Pool. #### Health check container image 8. Select **Runs inside a container, on a Worker**. 1. Select **Docker Hub** as the container registry. 1. Copy the **Ubuntu-based image** and paste it into the container image field. 1. **SAVE** your deployment target.
:::figure ![Health check container image expander with the latest Ubuntu-based image.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/health-check-container-image.png) ::: #### Health check Octopus runs health checks on deployment targets and Workers to ensure they're available and running the latest version of Calamari. This process may take a few minutes since it’s acquiring the Worker and it needs to download the Worker Tools image. 1. After saving, navigate to **Connectivity** in the left sidebar menu. 1. Click the **CHECK HEALTH** button. :::figure ![Deployment target connectivity status page with unknown state.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/health-check-connectivity.png) ::: You can create and deploy a release now that you have a healthy deployment target. :::figure ![Logs indicating a healthy deployment target.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/healthy-target.png) ::: ## Release and deploy ### Create release A release is a snapshot of the deployment process and the associated assets (Git resources, variables, etc.) as they exist when the release is created. 1. Navigate to **Projects** in the top navigation and select your **First K8s deployment** project. 1. Click the **CREATE RELEASE** button. :::figure ![Deployment overview page with no deployments.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/deployment-overview.png) ::: You’ll see a summary of the Git resources you provided in the **Deploy Kubernetes YAML** step. :::figure ![Release summary showing Git resources](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/release-summary.png) ::: 3. Click **SAVE**. ### Execute deployment When you created this project, you selected the default lifecycle (Development ➜ Staging ➜ Production). 
Lifecycles determine which environments the project can be deployed to, and the promotion rules between those environments. 1. Click **DEPLOY TO DEVELOPMENT** to deploy to the development environment associated with your cluster. 1. Review the preview summary and when you’re ready, click **DEPLOY**. Your first deployment may take slightly longer because your Docker image won’t be cached yet. 3. Navigate to the **KUBERNETES OBJECT STATUS** tab to see the live status of your Kubernetes objects as the deployment progresses. :::figure ![Kubernetes Object Status dashboard showing a successful deployment.](/docs/img/getting-started/first-kubernetes-deployment/legacy-guide/images/deployment-success.png) ::: You’ve successfully completed your first deployment to Kubernetes! 🎉 As you continue to explore Octopus Deploy, consider diving deeper into powerful features like [variables](https://octopus.com/docs/projects/variables), joining our [Slack community](http://octopususergroup.slack.com), or checking out our other tutorials to expand your knowledge. ## More Kubernetes resources * [Deploy with the Kustomize step](https://octopus.com/docs/deployments/kubernetes/kustomize) * [Deploy a Helm chart](https://octopus.com/docs/deployments/kubernetes/helm-update) * [Using variables for Kubernetes without breaking YAML](https://octopus.com/blog/structured-variables-raw-kubernetes-yaml) # Deployment failure analyzer Source: https://octopus.com/docs/octopus-ai/assistant/deployment-failure-analyzer.md Every failed deployment is a blocker for DevOps teams. You can use the Octopus AI Assistant to analyze failed deployments, reducing the time you spend troubleshooting by providing immediate, context-aware analysis and remediation steps based on your specific deployment scenario. When a deployment fails, the analyzer gathers context about the deployment including logs, process configuration, and script content, and provides actionable suggestions to get your team unblocked faster. 
## How the deployment failure analyzer works The Deployment Failure Analyzer captures detailed information about deployments, including: - Deployment logs and error messages - Deployment process configuration - Script content from deployment steps - Build information and artifacts - Environment and target details This context is analyzed by the Octopus AI Assistant to identify the root cause of the failure and provide specific suggestions for resolution. ## Using the deployment failure analyzer When a deployment fails, you can launch the Octopus AI Assistant from the deployment page. The analyzer will present a suggested prompt for analyzing the failed deployment: ```text Help me understand why the deployment failed. If the deployment didn't fail, say so. Provide suggestions for resolving the issue. ``` The Octopus AI Assistant will analyze the deployment context and provide: 1. **Reason for failure** - The specific step and error that caused the deployment to fail 2. **What happened** - A detailed breakdown of the deployment process and where it went wrong 3. **Suggestions for resolving the issue** - Actionable remediation steps with specific commands and configuration changes 4. **Next steps** - Recommended actions to investigate further and prevent future failures ## Example analysis Below is a basic example of how the Deployment Failure Analyzer works in practice. The analyzer identified that an Azure Resource Group could not be found during deployment and provided troubleshooting guidance, including verifying the resource group exists, checking Azure account permissions, looking for typos in the configuration, and enabling step retries for intermittent issues. 
![Deployment failure analysis example](/docs/img/octopus-ai-assistant/deployment-failure-analyzer-example.png) ## Adding business logic using custom prompts For organizations with specific internal processes and troubleshooting procedures, you can enhance the Deployment Failure Analyzer with custom business logic using [custom prompts](/docs/octopus-ai/assistant/custom-prompts). Custom prompts are defined as variables in Library Variable Sets within Octopus Deploy, allowing you to embed organization-specific guidance and next steps directly into the failure analysis responses. Custom prompts work by combining a user-facing prompt (`PageName[#].Prompt`) with an optional system prompt (`PageName[#].SystemPrompt`) that contains your business logic. The `.Prompt` variable defines what users see and interact with, while the `.SystemPrompt` variable provides behind-the-scenes instructions that guide the AI's analysis without being visible to users. ### Example: Missing Azure resource group Here's an example of how to configure custom business logic for deployment failures where an Azure resource group cannot be found. While this is a basic example, it shows how you can embed custom business logic for known issues, instruct the LLM on how to respond, and whether to also include general troubleshooting steps from the LLM or not. Custom logic is defined by configuring variables in a variable set named `OctoAI Prompts` in Octopus Deploy.

| Variable name | Variable value |
|-----------|-------------|
| Project.Deployment[0].Prompt | Why did the deployment fail? If the deployment didn't fail, say so. Provide suggestions for resolving the issue. |
| Project.Deployment[0].SystemPrompt | If the logs indicate that an Azure resource group could not be located, find the team responsible for the project in the project descriptions and return the instruction to create a support ticket using the Slack workflow in the team Slack channel. Don't provide general troubleshooting steps in the response. |

In this example, when the analyzer detects an issue related to a missing Azure Resource Group in the deployment logs, it will: 1. Look up the responsible team from the project description 2. Provide specific instructions to create a support ticket via the team's Slack workflow 3. Direct users to the appropriate team channel rather than providing generic troubleshooting steps ![Deployment failure analysis example with custom prompt](/docs/img/octopus-ai-assistant/deployment-failure-analyzer-custom-prompt-example.png) This approach ensures users get immediate, actionable guidance that follows your organization's established support processes, reducing resolution time and ensuring consistency across teams. For detailed instructions on setting up custom prompts, including variable naming conventions and supported pages, see the [custom prompts documentation](/docs/octopus-ai/assistant/custom-prompts). # octopus account azure-oidc list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-azure-oidc-list.md List Azure OpenID Connect accounts in Octopus Deploy

```text
Usage: octopus account azure-oidc list [flags]

Aliases:
  list, ls

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus account azure-oidc list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Accounts Source: https://octopus.com/docs/octopus-rest-api/examples/accounts.md [Accounts](https://oc.to/OnboardingAccountsLearnMore) help you to centralize account details used during your deployments, including things like username/password, tokens, Azure and AWS credentials and SSH key pairs. Out-of-the-box, Octopus provides different types of accounts to help manage your infrastructure: - [Azure account](/docs/infrastructure/accounts/azure). - [AWS account](/docs/infrastructure/accounts/aws). - [Google Cloud account](/docs/infrastructure/accounts/google-cloud). - [SSH Key Pair](/docs/infrastructure/accounts/ssh-key-pair). - [Username/Password](/docs/infrastructure/accounts/username-and-password). - [Tokens](/docs/infrastructure/accounts/tokens). You can use the REST API to create and manage accounts in Octopus. Typical tasks can include: - [Create an AWS account](/docs/octopus-rest-api/examples/accounts/create-aws-account) - [Create an Azure service principal](/docs/octopus-rest-api/examples/accounts/create-azure-service-principal) - [Create a Google Cloud account](/docs/octopus-rest-api/examples/accounts/create-gcp-account) # Getting started with the Octopus REST API Source: https://octopus.com/docs/octopus-rest-api/getting-started.md ## API clients Octopus provides API clients for popular programming languages and runtime environments. 
You can access the source code for these clients on GitHub: - [Go API Client for Octopus Deploy](https://github.com/OctopusDeploy/go-octopusdeploy) - [.NET C# API Client for Octopus Deploy](https://github.com/OctopusDeploy/OctopusClients) - [TypeScript API Client for Octopus Deploy](https://github.com/OctopusDeploy/api-client.ts) Code snippets using these clients for operations in the Octopus REST API are available in our [API examples](/docs/octopus-rest-api/examples) documentation. ## REST API authentication \{#authentication} The Octopus Deploy API is available at: ``` https://your-octopus-url/api ``` Replace `your-octopus-url` with the URL that you host your Octopus instance on. The API supports two methods of authentication. ### Creating an API Key You can get your API key from your profile page on the Octopus Web Portal. After you have a key, you can provide it to the API in the following ways: 1. Through the `X-Octopus-ApiKey` HTTP header with all requests. This is the preferred approach. 1. As an `apikey` query string parameter with all requests. This should only be used for simple requests. :::div{.hint} Learn more about [how to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key). ::: ### OpenID Connect OpenID Connect is a set of identity specifications that build on OAuth 2.0 to let software systems connect in a way that promotes security best practices. When using OIDC, Octopus validates an identity token from a trusted external system using [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Octopus then issues a short-lived access token that you can use to interact with the Octopus API. Some of the benefits of using OIDC in Octopus include: - You don't need to provision API keys and store them in external systems. This reduces the risk of unauthorized access to the Octopus API from exposed keys. - Administrators don't need to rotate API keys manually. This reduces the risk of disruption when updating to newer keys in external systems.
- Access tokens issued by Octopus are short-lived. This reduces the risk of unauthorized access to the Octopus API. - Access tokens are only issued for requests from trusted external systems. This allows for controlled access to service accounts and promotes the principle of least access. We support any issuer that can generate signed OIDC tokens that can be validated anonymously. However, we provide built-in support for GitHub Actions with the [OctopusDeploy/login](https://github.com/OctopusDeploy/login) action. For more information, see [Using OpenId Connect with the Octopus API](https://octopus.com/docs/octopus-rest-api/openid-connect). ## REST API Swagger documentation \{#api-swagger-docs} Octopus includes the default Swagger UI for displaying the API documentation in a nice, human-readable way. To browse that UI, just open your browser and go to `https://your-octopus-url/swaggerui/`. The original Non-Swagger API page is still available and you can access it via `https://your-octopus-url/api/`. :::figure ![Server API](/docs/img/octopus-rest-api/images/server-api.png) ::: You can view the API through the Octopus Demo server at [demo.octopus.app/swaggerui/index.html](https://demo.octopus.app/swaggerui/index.html). ## REST API links \{#api-links} All resources returned by the REST API contain links to other resources. The idea is that instead of memorizing or hard-coding URLs when using the API, you should start with the root API resource and use links to navigate.
For example, a `GET` request to `/api` returns a resource that looks like:

```json
{
  "Application": "Octopus Deploy",
  "Version": "2022.1.2386",
  "ApiVersion": "3.0.0",
  "InstallationId": "9f155416-5d9e-4e19-ba58-b710d4edf336",
  "Links": {
    "Self": "/api",
    "Accounts": "/api/Spaces-1/accounts{/id}{?skip,take,ids,partialName,accountType}",
    "Environments": "/api/Spaces-1/environments{/id}{?name,skip,ids,take,partialName}",
    "Machines": "/api/Spaces-1/machines{/id}{?skip,take,name,ids,partialName,roles,isDisabled,healthStatuses,commStyles,tenantIds,tenantTags,environmentIds,thumbprint,deploymentId,shellNames,deploymentTargetTypes}",
    "Projects": "/api/Spaces-1/projects{/id}{?name,skip,ids,clone,take,partialName,clonedFromProjectId}",
    "RunbookProcesses": "/api/Spaces-1/runbookProcesses{/id}{?skip,take,ids}",
    "RunbookRuns": "/api/Spaces-1/runbookRuns{/id}{?skip,take,ids,projects,environments,tenants,runbooks,taskState,partialName}",
    "Runbooks": "/api/Spaces-1/runbooks{/id}{?skip,take,ids,partialName,clone,projectIds}",
    "RunbookSnapshots": "/api/Spaces-1/runbookSnapshots{/id}{?skip,take,ids,publish}",
    "Feeds": "/api/feeds{/id}{?skip,take,ids,partialName,feedType,name}",
    "Tasks": "/api/tasks{/id}{?skip,active,environment,tenant,runbook,project,name,node,running,states,hasPendingInterruptions,hasWarningsOrErrors,take,ids,partialName,spaces,includeSystem,description,fromCompletedDate,toCompletedDate,fromQueueDate,toQueueDate,fromStartDate,toStartDate}",
    "Variables": "/api/Spaces-1/variables{/id}{?ids}",
    "Web": "/app"
  }
}
```

:::div{.hint} Note: the `Links` collection example above has been significantly reduced in size for demonstration purposes. ::: You can follow the links in the result to navigate around the API. For example, by following the `Projects` link, you'll find a list of the projects on your Octopus server.
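Navigating by links can be sketched in a few lines of Python. This is an illustrative helper, not part of any official client: the `expand_link` function and the reduced `links` dictionary are assumptions for the example, and it handles only the `{/var}` path form and `{?var,...}` query form that appear in the links above. A real client should use an official API client library or a full RFC 6570 implementation.

```python
import re

def expand_link(template, **params):
    """Minimally expand an RFC 6570-style link from the Links collection.

    Handles only the {/var} and {?var,...} forms used by the Octopus API;
    unsupplied variables are simply dropped, as the spec requires.
    """
    def replace(match):
        expr = match.group(1)
        if expr.startswith("/"):  # path segment, e.g. {/id}
            name = expr[1:]
            return f"/{params[name]}" if name in params else ""
        if expr.startswith("?"):  # query string, e.g. {?skip,take}
            pairs = [f"{n}={params[n]}" for n in expr[1:].split(",") if n in params]
            return "?" + "&".join(pairs) if pairs else ""
        return ""
    return re.sub(r"\{([^}]*)\}", replace, template)

# A reduced Links collection, as returned by GET /api in the example above
links = {
    "Self": "/api",
    "Projects": "/api/Spaces-1/projects{/id}{?name,skip,ids,clone,take,partialName,clonedFromProjectId}",
}

# Follow the Projects link instead of hard-coding the URL
print(expand_link(links["Projects"]))                   # /api/Spaces-1/projects
print(expand_link(links["Projects"], skip=0, take=10))  # /api/Spaces-1/projects?skip=0&take=10
```

The expanded paths can then be appended to your server URL and requested with the `X-Octopus-ApiKey` header described earlier.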
Since the format and structure of links may change, it's essential that clients avoid hardcoding URLs to resources, and instead rely on starting at `/api` and navigating from there. ### URI templates Some links (mainly to collections) use URI templates as defined in [RFC 6570](http://tools.ietf.org/html/rfc6570). If in doubt, a client should assume that any link is a URI template. ### Collections Collections of resources also include links. For example, following the `Environments` link above will give you a list of environments.

```json
{
  "ItemType": "Environment",
  "TotalResults": 20,
  "ItemsPerPage": 10,
  "NumberOfPages": 2,
  "LastPageNumber": 1,
  "Items": [
    // ... a list of environments ...
  ],
  "Links": {
    "Self": "/api/Spaces-1/environments?skip=0&take=10",
    "Template": "/api/Spaces-1/environments{?skip,ids,take,partialName}",
    "Page.All": "/api/Spaces-1/environments?skip=0&take=2147483647",
    "Page.Next": "/api/Spaces-1/environments?skip=10&take=10",
    "Page.Current": "/api/Spaces-1/environments?skip=0&take=10"
  }
}
```

The links at the bottom of the resource allow you to traverse the pages of results. Again, instead of hard-coding query string parameters, you can look for a `Page.Next` link and follow that instead. ## REST API and Spaces \{#api-and-spaces} If you are using spaces, you need to include the `SpaceID` in your API calls. If you do not include the `SpaceID`, your API calls will automatically use the default space. ## REST API code samples \{#api-samples} Code snippet samples for various operations in the Octopus REST API are available both in our [API examples](/docs/octopus-rest-api/examples) and on the [OctopusDeploy-API GitHub repository](https://github.com/OctopusDeploy/OctopusDeploy-Api). # How to Create an API Key Source: https://octopus.com/docs/octopus-rest-api/how-to-create-an-api-key.md API keys allow you to access the Octopus Deploy [REST API](/docs/octopus-rest-api) and perform tasks such as creating and deploying releases.
API keys can be saved in scripts or external tools, without having to use your username and password. Each user and service account can have multiple API keys. See the [Service Accounts docs](/docs/security/users-and-teams/service-accounts) for information about creating service accounts. ## Creating an API Key [Getting Started - API Keys](https://www.youtube.com/watch?v=f3-vRjpB0cE) You can create API keys by performing the following steps: 1. Log into the Octopus Web Portal, click your profile image and select **Profile**. 1. Click **My API Keys**. 1. Click **New API key**, state the purpose of the API key and click **Generate new**. 1. Copy the new API key to your clipboard. :::div{.warning} **Write Your Key Down** After you generate an API key, it cannot be retrieved from the Octopus Web Portal again; we store only a one-way hash of the API key. If you want to use the API key again, you need to store it in a secure place such as a password manager. Read about [why we hash API keys](https://octopus.com/blog/hashing-api-keys). ::: ## Setting an expiry date :::div{.hint} The ability to set an expiry date on new API keys was added in Octopus Deploy **2020.6**. ::: By default, new API keys are valid for 180 days from the point they are created. When creating an API key in the Octopus Web Portal, you can choose from a preset list of offsets from the current date, or select a custom date. Keys will expire at the end of the selected day. When using the Octopus REST API to create a key, you can set the expiry date to your preferred date and time, including time zone offset. There are three restrictions on the expiry date: - It cannot be in the past. - It cannot be after the expiry date of the key being used to create it (when using the REST API).
- **Octopus Deploy 2025.4 and newer:** It cannot exceed the server's configured maximum expiry period (defaults to 366 days, configurable) ## Configure API keys for expiry notifications [Octopus Subscriptions](/docs/administration/managing-infrastructure/subscriptions) can be used to configure notifications when API keys are close to expiry or have expired. There is an "API key expiry events" event-group and three events: - API key expiry 20-day warning. - API key expiry 10-day warning. - API key expired. :::div{.info} The background task which raises the api-key-expiry events runs: - 10 minutes after the Octopus Server service starts - Every 4 hours ::: ## Configuring API Key default and maximum expiry durations :::div{.hint} The ability to control the default and maximum API key expiry was added in Octopus Deploy **2025.4**. The ability to create keys that never expire was removed in this version. Versions 2025.3 and below will use a default expiry of 180 days and have no maximum. ::: Octopus administrators can change the maximum API key expiry from 366 days to a value of their choice, up to 1096 days. Octopus administrators can change the default API key expiry from 180 days to a value of their choice. The default period must be less than or equal to the maximum. To change these values in the Octopus Web Portal: 1. Navigate to **Configuration ➜ Settings** and click **Authentication**. 1. Expand the sections for **API Key default expiry (days)** and **API Key maximum expiry (days)** and alter the values. 1. Click Save. ## Disabling API key creation for user accounts :::div{.hint} The ability to disable API key creation for user accounts was added in Octopus Deploy **2023.2**. ::: Octopus administrators can disable the creation of API keys for regular user accounts. Existing API keys will continue to function, and new API keys can still be created for [Service Accounts](/docs/security/users-and-teams/service-accounts). 
To change the value in the Octopus Web Portal: 1. Navigate to **Configuration ➜ Settings** and click **Authentication**. 1. Expand the section for **User API Keys** and alter the value. 1. Click Save. # Getting started Source: https://octopus.com/docs/octopus-rest-api/octopus.client/getting-started.md There are two ways to use the Octopus Client library: 1. The `Octopus.Server.Client` package is a standard NuGet package useful for normal applications. 1. The `Octopus.Client` package is a NuGet package containing an ILMerged single `Octopus.Client.dll` comprising `Octopus.Server.Client.dll` (above) and all of its dependencies. This is useful for scripting where importing a single .NET assembly is preferable. :::div{.hint} **Usage guidance** - Unless you have a specific need to use the ILMerged `Octopus.Client`, we recommend using the `Octopus.Server.Client` package. In both cases, the calling conventions are identical - the former is just an ILMerged version of the latter. - If you're intending to use the contract DTO classes from the library with your own serialization mechanism, you'll definitely want to use `Octopus.Server.Client`. The ILMerged client also merges in `Newtonsoft.Json` so your own serializer won't recognize any of the serialization attributes. ::: ## Using Octopus.Client from installation folder {#using-octopus-client-from-install-folder} Octopus Server and Tentacle both ship with a version of `Octopus.Client.dll` in the installation directory. Avoid using this in your scripts as this is considered an implementation detail of those products. As such it is subject to change at any time, and not guaranteed to work with your version of Octopus Server. ## Getting started with the Octopus Client in a .NET application ### Package installation To use from C#, first install the package via the NuGet Package Manager:
Package Management Console ```powershell Install-Package Octopus.Server.Client ```
.NET CLI ```bash dotnet add package Octopus.Server.Client ```
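Alternatively, if you manage dependencies directly in the project file, the equivalent `PackageReference` looks like the following sketch. The version shown is a placeholder assumption; pin it to the release you've tested against:

```xml
<!-- In your .csproj; replace the version with the release you've verified -->
<ItemGroup>
  <PackageReference Include="Octopus.Server.Client" Version="14.*" />
</ItemGroup>
```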
### Creating and using the client (Synchronous API) \{#Octopus.Client-SynchronousAPI} The easiest way to use the client is via the `OctopusRepository` helper:

```csharp
var server = "https://your-octopus-url";
var apiKey = "API-YOUR-KEY"; // Get this from your 'profile' page in the Octopus Web Portal
var endpoint = new OctopusServerEndpoint(server, apiKey);
var repository = new OctopusRepository(endpoint);
```

API key authentication is recommended, but you can use username/password for authentication with the `SignIn()` method instead:

```csharp
repository.Users.SignIn(new LoginCommand { Username = "me", Password = "secret" });
```

### Creating and using the client (Asynchronous API) \{#Octopus.Client-AsynchronousAPI(Octopus.Client4.0+)} The easiest way to use the client is via the `OctopusAsyncClient`:

```csharp
var server = "https://your-octopus-url";
var apiKey = "API-YOUR-KEY"; // Get this from your 'profile' page in the Octopus Web Portal
var endpoint = new OctopusServerEndpoint(server, apiKey);
using (var client = await OctopusAsyncClient.Create(endpoint))
{
}
```

If you don't want to provide an API key for authentication, you can leave it out and authenticate with the `SignIn()` method instead:

```csharp
await client.Repository.Users.SignIn(new LoginCommand { Username = "me", Password = "secret" });
```

## Getting started with the Octopus Client package in a PowerShell script ### Package installation To get started with the Octopus Client library from PowerShell, use the `Install-Package` command from the Microsoft [PackageManagement](https://docs.microsoft.com/en-us/powershell/module/packagemanagement) module:
Windows PowerShell

```powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Install-Package Octopus.Client -source https://www.nuget.org/api/v2 -SkipDependencies
$path = Join-Path (Get-Item ((Get-Package Octopus.Client).source)).Directory.FullName "lib/net462/Octopus.Client.dll"
Add-Type -Path $path
```
PowerShell Core

```powershell
Install-Package Octopus.Client -source https://www.nuget.org/api/v2 -SkipDependencies
$path = Join-Path (Get-Item ((Get-Package Octopus.Client).source)).Directory.FullName "lib/netstandard2.0/Octopus.Client.dll"
Add-Type -Path $path
```
:::div{.hint} Note: The `PowerShell Core` example above needs the path to be slightly different from the one for `Windows PowerShell`. ::: If you're referencing an older version of the .NET Standard version of Octopus.Client, you may find you also need to add a reference to `NewtonSoft.Json.dll` and `Octodiff`:

```powershell
# Using `Install-Package`
$path = Join-Path (Get-Item ((Get-Package NewtonSoft.Json).source)).Directory.FullName "lib/netstandard2.0/NewtonSoft.Json.dll"
Add-Type -Path $path
$path = Join-Path (Get-Item ((Get-Package Octodiff).source)).Directory.FullName "lib/netstandard2.0/Octodiff.dll"
Add-Type -Path $path
```

### Creating an instance of the client

```powershell
Add-Type -Path 'C:\PathTo\Octopus.Client.dll'
$server = "https://your-octopus-url"
$apiKey = "API-YOUR-KEY"; # Get this from your 'profile' page in the Octopus Web Portal
$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($server, $apiKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
```

### Using the synchronous API

```powershell
$loginCreds = New-Object Octopus.Client.Model.LoginCommand
$loginCreds.Username = "me"
$loginCreds.Password = "secret"
$repository.Users.SignIn($loginCreds)
```

# Admin Source: https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/admin.md Use the admin command to reset admin user passwords, re-enable them, and ensure they are in the admin group. **Admin options**

```
Usage: octopus.server admin [<options>]

Where [<options>] is any of:

  --instance=VALUE       Name of the instance to use
  --config=VALUE         Configuration file to use
  --wait=VALUE           Milliseconds to wait
  --username, --user=VALUE
                         The username of the administrator to create/modify
  --email=VALUE          The email of the administrator to create/modify
  --password=VALUE       The password to set for the administrator
  --apiKey=VALUE         The API Key to set for the administrator. If this is
                         set and no password is provided then a service
                         account user will be created. If this is set and a
                         password is also set then a standard user will be
                         created.
  --externalGroup=VALUE  The partial name of an Active Directory group to add
                         to the administrators team
  --externalGroupId, --externalRoleId=VALUE
                         The id of an external (e.g. AzureAD, Okta) group/role
                         to add to the administrators team
  --externalGroupDescription, --externalRoleDescription=VALUE
                         The description of an external (e.g. AzureAD, Okta)
                         group/role to add to the administrators team
  --skipDatabaseCompatibilityCheck
                         Skips the database compatibility check
  --skipDatabaseSchemaUpgradeCheck
                         Skips the database schema upgrade checks. Use with
                         caution

Or one of the common options:

  --help                 Show detailed help for this command
```

## Basic example This example will add or update the administrator account with the username of `OctoAdmin`: ``` octopus.server admin --username="OctoAdmin" --password="My$uper$cr3tP@ssword!" --email="admin@octopus.com" ``` # AppVeyor integration Source: https://octopus.com/docs/packaging-applications/build-servers/appveyor.md [AppVeyor](https://ci.appveyor.com) is a cloud-based continuous integration system that integrates natively with your source control and allows CI configuration files to live alongside your projects. You can use AppVeyor to automatically package your applications from your source control repository, push the packaged application to the [built-in Octopus repository](/docs/packaging-applications/package-repositories/built-in-repository), and create and deploy releases. ## Configuring an AppVeyor project for Octopus To use AppVeyor with a source code repository, you'll need to create and configure a project. See the [AppVeyor docs](https://www.appveyor.com/docs/) for instructions. ## Configure the build Once you've added a project with a repository, you need to configure the build. In the settings for your AppVeyor project, navigate to the **build** page and check the check-box for the **Package Web Applications for Octopus deployment** option.
AppVeyor will run `octo pack` after MSBuild has finished its `publish` command. Because AppVeyor is running the `publish` command, some of the files that [OctoPack](/docs/packaging-applications/create-packages/octopack) would normally include might not be included by default; this includes the `web.*.config` files. To ensure these files are included in the package, make sure they are configured to `Copy to Output Directory` in Visual Studio.

In the **Before build script** section add `nuget restore`, as AppVeyor will not perform this operation by default.

:::figure
![AppVeyor MSBuild Build](/docs/img/packaging-applications/build-servers/appveyor/images/appveyor_build_msbuild.png)
:::

### AppVeyor environment variables

The following environment variables are available and can be configured on the **Environment** page of your project's settings.

| Variable name | Description |
| ------------- | ----------- |
| OCTOPUS_PACKAGE_VERSION | Overrides the version in the package name. (default AppVeyor build version) |
| OCTOPUS_PACKAGE_NUGET | Overrides the package type. (default nupkg) |
| OCTOPUS_PACKAGE_ADVANCED | [Additional arguments](/docs/packaging-applications/create-packages/octopus-cli) to pass to `octo pack` |

### Non-MSBuild projects

AppVeyor has included the Octopus CLI (`octo`) in the base Windows build VM, and it is available via the command line. If you're running a project that is _not_ using MSBuild, you can manually invoke the `octo pack` command during the build phase by navigating to **Build ➜ Script** and adding your command to the build script section. For instance:

```bash
npm run build
octo pack --outFolder ./bin --id=MyApp
```

Next, flag the generated archive as an artifact of the build so it is made available to subsequent steps. On the **artifact** page of your project's settings add the path to the artifact, for instance:

```bash
./bin/*
```

You can use a wildcard to pick up the dynamically generated package.
:::figure
![AppVeyor npm Build](/docs/img/packaging-applications/build-servers/appveyor/images/appveyor_artifact.png)
:::

### Push to Octopus

Next, go to the **Deployment** page in your project's settings, click **Add deployment**, and from the **Deployment providers** select **Octopus Deploy**.

Enter the URL where the Octopus Server can be reached, and add an [API key](/docs/octopus-rest-api/how-to-create-an-api-key).

:::figure
![AppVeyor Deploy](/docs/img/packaging-applications/build-servers/appveyor/images/appveyor_deploy.png)
:::

When you define an "Octopus package" in AppVeyor through the **Package Web Applications for Octopus Deployment** flag or the **Artifacts** page, AppVeyor will automatically select that package to push to your Octopus Server. Set the **Artifact(s)** field on the **Deployment** page if you have manually created an archive.

If your Octopus Deploy project doesn't make use of [release creation triggers](/docs/projects/project-triggers/built-in-package-repository-triggers) or automatic lifecycle progression, you can optionally trigger these actions from within the AppVeyor configuration by providing the appropriate values in the available inputs. Unless overridden, the AppVeyor project name will be used in place of the Octopus project name when creating a release.

## Build configuration in code

AppVeyor provides another mechanism for supplying the configuration described above: an [appveyor.yml](https://www.appveyor.com/docs/appveyor-yml/) file contained in the repository source code.

For the above configuration, the YAML file is as simple as:

```yaml
version: 1.0.{build}
before_build:
- cmd: nuget restore
build:
  publish_wap_octopus: true
  verbosity: minimal
deploy:
- provider: Octopus
  push_packages: true
  create_release: true
  deploy_release: false
  server: https://myoctopus.acme.corp
  api_key:
    secure: YOUR-API-KEY
  project: AcmeWeb
  deploy_wait: false
```

Storing the configuration with the source code is a great way to version the build process. However, it is worth noting that when AppVeyor detects an **appveyor.yml** file in the source code, any configuration in the portal will be ignored. Although you can continue to update the configuration via the portal, this will have no effect unless you remove the YAML file or configure the project to explicitly ignore it.

## Learn more

- [AppVeyor's docs](https://www.appveyor.com/docs/)

# Create packages

Source: https://octopus.com/docs/packaging-applications/create-packages.md

There are a variety of tools you can use to package your applications, and as long as you can create [supported packages](/docs/packaging-applications/#supported-formats) you can deploy your applications with Octopus Deploy.

We've created the following tools to help you package your applications:

- The [Octopus CLI](/docs/packaging-applications/create-packages/octopus-cli) to create Zip Archives and NuGet packages for **.NET Core** apps and full **.NET framework** applications.
- [OctoPack](/docs/packaging-applications/create-packages/octopack) to create NuGet packages for **ASP.NET** apps (.NET Framework) and **Windows Services** (.NET Framework).
- The [TeamCity plugin](/docs/packaging-applications/build-servers/teamcity).
- The [Azure DevOps plugin](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension).
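These tools all produce the same essential thing: an archive whose file name encodes the package ID and version. As an illustrative sketch only (the function and directory names here are hypothetical, not part of any Octopus tooling), the zip naming convention can be reproduced with Python's standard `zipfile` module:

```python
import zipfile
from pathlib import Path

def create_package(package_id: str, version: str, source_dir: str, out_dir: str = ".") -> Path:
    """Zip the contents of source_dir into <PackageId>.<Version>.zip,
    the naming convention Octopus uses to identify a package and its version."""
    out = Path(out_dir) / f"{package_id}.{version}.zip"
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as archive:
        for path in sorted(Path(source_dir).rglob("*")):
            if path.is_file():
                # Store entries relative to the package root
                archive.write(path, path.relative_to(source_dir))
    return out
```

For example, `create_package("MyApp.Website", "1.1.0", "publish/")` would produce `MyApp.Website.1.1.0.zip`, which could then be pushed to the built-in repository like any other supported package.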
In addition to these tools, you can use other tools to create your packages, for instance, you might use the following: - The built-in tools for [TeamCity](https://blog.jetbrains.com/teamcity/2010/02/artifact-packaging-with-teamcity/). - [NuGet.exe](https://docs.microsoft.com/en-us/nuget/tools/nuget-exe-cli-reference) to create NuGet packages. - [NuGet Package Explorer](https://github.com/NuGetPackageExplorer/NuGetPackageExplorer). - [Grunt, gulp, or octojs](/docs/deployments/node-js/node-on-linux/#create-and-push-node.js-project) for JavaScript apps. # Include BuildEvent files Source: https://octopus.com/docs/packaging-applications/create-packages/octopack/octopack-to-include-buildevent-files.md This page gives an example of extending OctoPack for when you have a PostBuild event in Visual Studio and want to include files that are not specifically part of your build, such as files that have been moved using Xcopy. This example demonstrates the use of a PostBuild event in Visual Studio and the OctoPack option `OctoPackEnforceAddingFiles`. I created a Post-Build Event using the Visual Studio Build Events feature. It uses Xcopy to move files from a path to my solution: :::figure ![Post-build event](/docs/img/packaging-applications/create-packages/octopack/images/post-build-event.png) ::: However, when I use OctoPack to package my solution on build my moved files are not included in the build: :::figure ![Sample package without files](/docs/img/packaging-applications/create-packages/octopack/images/sample-package-without-files.png) ::: This is resolved by creating a NuSpec file, and creating a files tag to tell OctoPack to take my moved files, and put them inside a folder called `bin\test` in the package: :::figure ![](/docs/img/packaging-applications/create-packages/octopack/images/nuspec-file.png) ::: It is important to note here that for OctoPack to find and use a NuSpec file, it must be named the same as your project as seen above. 
For instance, in our example, the project is called `OctoFX.TradingWebsite`, so our NuSpec file must be called `OctoFX.TradingWebsite.nuspec`.

To ensure I don't just get the files defined within the NuSpec file, I add `/p:OctoPackEnforceAddingFiles=true` to tell OctoPack to also add the files it would normally add while building, as well as those targeted by my files tag in the NuSpec file.

```powershell
F:\Workspace\OctoFX\source>msbuild OctoFX.sln /t:Build /p:RunOctoPack=true /p:OctoPackPackageVersion=1.0.0.7 /p:OctoPackEnforceAddingFiles=true
```

Now my test folder and files, as well as my build files, are included in the package.

## Next

- [Packaging applications](/docs/packaging-applications)
- [Use the Octopus CLI to create packages](/docs/packaging-applications/create-packages/octopus-cli)
- [Troubleshooting OctoPack](/docs/packaging-applications/create-packages/octopack/troubleshooting-octopack)
- [Package deployments](/docs/deployments/packages)

# Versioning schemes

Source: https://octopus.com/docs/packaging-applications/create-packages/versioning.md

The [Package ID](/docs/packaging-applications/#package-id), version number, and [package format](/docs/packaging-applications/#supported-formats) uniquely identify your packages, so it's important to choose the right versioning scheme, but it can be a tricky balance between pragmatism and strictness.

This page should help you understand how Octopus Deploy handles versions in [packages](/docs/packaging-applications/#supported-formats), [releases](/docs/releases/), and [channels](/docs/releases/channels), which will help you design a versioning scheme that suits your needs.

## Choosing a versioning scheme {#choose-version-scheme}

The technology you're working with will, in some cases, determine the type of versioning scheme you choose.
We recommend using [Semantic Versioning](#semver) for your applications, unless you are deploying artifacts to a [Maven repository](/docs/packaging-applications/package-repositories/maven-feeds), in which case, you need to use [Maven Versions](#maven). Consider the following factors when deciding on the versioning scheme you'll use for your applications and packages: 1. Can you trace a version back to the commit/check-in the application/package was built from? *For example: We stamp the SHA hash of the git commit into the metadata component of the Semantic Version for Octopus Deploy which makes it easier to find and fix bugs. We also tag the commit with the version of Octopus Deploy it produced so you can quickly determine which commit produced a particular version of Octopus Deploy.* 2. Can your users easily report a version to the development team that supports #1? 3. Will your version numbers be confusing, or will they help people understand the changes that have been made to the software? *For example: bumping a major version component (first part) means there are potentially breaking changes, but bumping a patch (3rd part) should be safe to upgrade, and safe to rollback if something goes wrong.* 4. Does your tool chain support the versioning scheme? *Octopus supports Semantic Versioning, which enables enhanced features like [Channels](/docs/releases/channels).* ## SemVer {#semver} Octopus supports Semantic Versioning 2.0.0 with version numbers constructed in the following way: > `Major.Minor.Patch` For instance: > `1.5.2` Octopus supports a *pragmatic* implementation of SemVer, including support for 4-digit versions: > `1.0.0.0` Octopus also supports versions that can be sorted alphanumerically: > `2016.09.01-beta.0001` In strict SemVer 2.0, a version like `1.5.2-rc.1` is considered a **pre-release**, and `1.5.2` is considered a **full release**. 
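To make the ordering concrete, here is a minimal, illustrative sketch of a sort key for this pragmatic flavor of SemVer. It is not Octopus's actual implementation, and it ignores build metadata (the part after a `+`): numeric components compare as integers, a pre-release version sorts before the corresponding full release, and pre-release identifiers compare numerically when they are all digits and lexically otherwise.

```python
def version_key(version: str):
    """Sort key for pragmatic SemVer strings such as '1.5.2', '1.0.0.0',
    or '2016.09.01-beta.0001'. Build metadata is not handled."""
    release, _, prerelease = version.partition("-")
    nums = [int(part) for part in release.split(".")]
    while len(nums) > 1 and nums[-1] == 0:
        nums.pop()  # so 1.0.0.0 and 1.0.0 compare as equal
    if not prerelease:
        return (tuple(nums), 1, ())  # a full release outranks any pre-release
    tags = tuple(
        (0, int(seg), "") if seg.isdigit() else (1, 0, seg.lower())
        for seg in prerelease.split(".")
    )
    return (tuple(nums), 0, tags)

# 1.4.9 sorts before 1.4.10, and 1.5.2-rc.1 is a pre-release of 1.5.2
assert version_key("1.4.9") < version_key("1.4.10")
assert version_key("1.5.2-rc.1") < version_key("1.5.2")
assert version_key("1.0.0.0") == version_key("1.0.0")
```

Sorting a list of version strings with `sorted(versions, key=version_key)` then yields the semantic order rather than a plain alphabetical one.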
When it comes to application versioning, we suggest the pre-release tag (the bit after the `-`) can be used however works best for you. For example, you could build version `1.5.2-rc` of your application and configure a [Channel](/docs/releases/channels) to promote packages like `*-rc` to Staging and eventually Production.

If you are using [deployment changes](/docs/releases/deployment-changes), note that pre-releases are handled differently from other releases by that feature, and you may need to take that into consideration in your [versioning](/docs/releases/deployment-changes/#versioning) strategy.

Learn more about Semantic Versioning at [semver.org](http://semver.org/).

### How Octopus Deploy treats semantic versions {#semantic-version-treatment}

Octopus uses a string-based approach to version numbers. These are the decisions we made on handling versions:

1. **Validity:** A version string will be considered valid if it is strictly compliant with [SemVer 1.0](http://semver.org/spec/v1.0.0.html), [SemVer 2.0](http://semver.org/spec/v2.0.0.html), or Octopus's pragmatic 4-digit version of SemVer.
2. **Comparisons:** Versions will be compared using the "semantic" value:
   1. **Equality:** Two versions will be considered to be equal if they are semantically equivalent. For instance:
      1. `1.0.0.0 == 1.0.0`
      2. `2016.01.02 == 2016.1.2 == 2016.01.2`
   2. **Ordering:** Versions will be sorted semantically. For instance:
      1. `1.4.10 > 1.4.9`
      2. `3.0.0-beta.10 > 3.0.0-beta.9`
      3. `1.4.008 < 1.4.9`
3. **Package Feeds:** Octopus asks the feed for a package with the version string stored in the release, and accepts what the feed provides.

## Maven versions {#maven}

Maven versions are used by Octopus when an artifact is sourced from an external [Maven feed](/docs/packaging-applications/package-repositories/maven-feeds/).
SemVer is still required when versioning any artifact to be deployed to the [built-in](/docs/packaging-applications/package-repositories/built-in-repository) library or an external [NuGet feed](https://docs.nuget.org/create/hosting-your-own-nuget-feeds), and the only time to use the Maven versioning scheme over SemVer is when you are deploying artifacts to a Maven repository.

The Maven versioning scheme is implemented as a copy of the [ComparableVersion](https://github.com/sonatype/maven-demo/blob/master/maven-artifact/src/main/java/org/apache/maven/artifact/versioning/ComparableVersion.java) class from the Maven library itself.

Maven version strings have 5 parts:

* Major
* Minor
* Patch
* Build number
* Qualifier

The Major, Minor, Patch, and Build number are all integer values. The Qualifier can hold any value, although some qualifiers have special meanings and an associated order of precedence, as follows:

* alpha or a
* beta or b
* milestone or m
* rc or cr
* snapshot
* (the empty string) or ga or final
* sp

Qualifiers are case-insensitive, and some of the qualifiers have shorthand aliases, for instance, `alpha` and `a`. If you use an alias it must include a number, for instance, `a1`. If you do not include a number after the alias, it will be treated as an unrecognized qualifier, which will be compared as a case-insensitive string after the qualified versions.

Where version strings cannot be parsed as major.minor.patch.build and the qualifier is not recognized, the entire string is considered a qualifier.

A dash or a period can be used to separate Major, Minor, Patch, and Build; however, using a separator between the last digit and the qualifier is optional.

For an in-depth look at Maven versions, see the blog post [Maven Versions Explained](https://octopus.com/blog/maven-versioning-explained).

## Learn more

- [Package your applications](/docs/packaging-applications).
- [Create packages with Octopus CLI](/docs/packaging-applications/create-packages/octopus-cli).
- [Creating packages with OctoPack](/docs/packaging-applications/create-packages/octopack). - [TeamCity plugin](/docs/packaging-applications/build-servers/teamcity). - [Azure DevOps plugin](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension). - [Package repositories](/docs/packaging-applications). - [Package deployments](/docs/deployments/packages). # Built-in Octopus repository Source: https://octopus.com/docs/packaging-applications/package-repositories/built-in-repository.md Your Octopus Server comes with a built-in repository which is the best choice for deployment packages. It offers **better performance** for your deployments and the most robust [retention policy](/docs/administration/retention-policies) support for cleaning up deployment packages. The built-in feed can only be consumed by Octopus. Octopus Server provides a write-only repository; intended for hosting deployment packages only. Packages that are pushed to the Octopus Server can't be consumed by other NuGet clients like Visual Studio. If you need a NuGet feed for sharing libraries between your development projects, a separate NuGet repository is required. See [package repositories](/docs/packaging-applications/package-repositories). ## Uploading packages to the built-in repository {#pushing-packages-to-the-built-in-repository} It is possible to manually upload a package file from your local machine via the Octopus Web Portal by navigating to **Deploy ➜ Manage ➜ Packages** and clicking the **Upload Package** button. However, we recommend using a [build server](/docs/packaging-applications/build-servers) to build, test, package and automatically upload your release packages into the Octopus Deploy built-in repository. 
In most cases you simply provide the build server with the URL to your Octopus Server and an [Octopus API key](/docs/octopus-rest-api/how-to-create-an-api-key) with the required permissions (see [security considerations](/docs/packaging-applications/package-repositories/built-in-repository/#security-considerations)).

In addition to manually uploading packages or using your build server, you can upload packages to the built-in feed in the following ways:

- [Using the Octopus CLI](#UsingOctopusCli).
- [Using the Octopus API (HTTP POST)](#UsingTheOctopusAPI(HttpPost)).
- [Using NuGet.exe push](#UsingNuGetExePush).
- [Using npm.exe, grunt or gulp](#UsingNpm.exe,GruntOrGulp).
- [Using curl](#UsingCurl).

To push packages using these methods, you will need:

1. The URL to your Octopus Server.
2. An [Octopus API key](/docs/octopus-rest-api/how-to-create-an-api-key) with the required permissions (see [security considerations](/docs/packaging-applications/package-repositories/built-in-repository/#security-considerations)).

## Using the Octopus CLI {#UsingOctopusCli}

You can upload one or more packages using the [Octopus CLI](/docs/packaging-applications/create-packages/octopus-cli), the command-line tool for Octopus Deploy. The example below will upload `MyApp.Website.1.1.0.zip` and `MyApp.Database.1.1.0.zip` to the built-in repository, automatically replacing existing packages if there are conflicts.
PowerShell ```powershell C:\> octopus package upload --package MyApp.Website.1.1.0.zip --package MyApp.Database.1.1.0.zip --overwrite-mode overwrite ```
Bash ```bash $ octopus package upload --package MyApp.Website.1.1.0.zip --package MyApp.Database.1.1.0.zip --overwrite-mode overwrite ```
## Using the Octopus API (HTTP POST) {#UsingTheOctopusAPI(HttpPost)} You can upload a package via the [Octopus Deploy API](/docs/octopus-rest-api) - `POST /api/packages/raw HTTP 1.1`. - [C# example](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/Octopus.Client/Csharp/Feeds/PushPackage.cs) - [PowerShell example](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/REST/PowerShell/Feeds/PushPackage.ps1) ## Using NuGet.exe push {#UsingNuGetExePush} To push a package using `NuGet.exe` you'll need the URL for the Octopus NuGet feed to use with your build server or `NuGet.exe`. To find this, open the **Deploy ➜ Manage ➜ Packages** tab of the Octopus Web Portal. The Help sidebar has options and examples of how to upload packages. The screen shows an example command-line that can be used to push packages to the feed using [NuGet.exe](http://docs.nuget.org/docs/start-here/installing-nuget). You'll need to supply the NuGet package file (`.nupkg`) and an [Octopus API key](/docs/octopus-rest-api/how-to-create-an-api-key). :::figure ![The Built-in Package Repository](/docs/img/packaging-applications/package-repositories/built-in-repository/built-in-package-repository.png) ::: :::div{.success} If you're using a continuous integration server like TeamCity to produce packages you can use their built-in NuGet Push step. Supply the Octopus NuGet feed URL shown above and an [Octopus API key](/docs/octopus-rest-api/how-to-create-an-api-key) when prompted for the feed details. ::: If a package with the same version exists, and you want to force the Octopus Server to replace it, you can modify the URL to include a `?replace=true` parameter: `http://MyOctopusServer/nuget/packages?replace=true` ## Using npm.exe, Grunt or Gulp {#UsingNpm.exe,GruntOrGulp} You can upload packages using npm.exe or using our grunt or gulp tasks. Take a look at our [guide for packaging and deploying Node.js applications using Octopus Deploy](/docs/deployments/node-js/node-on-linux). 
## Using Curl {#UsingCurl} You can upload packages using **curl**. Like all the other examples you will need your Octopus Server URL and an API Key. This will perform a POST uploading the file contents as multipart form data. ```powershell curl -X POST https://demo.octopus.app/api/packages/raw -H "X-Octopus-ApiKey: API-YOUR-API-KEY" -F "data=@Demo.1.0.0.zip" ``` :::div{.success} You may need to use the `-k` argument if you are using an untrusted connection. ::: ## Security considerations {#security-considerations} To add a new package to the built-in feed requires the `BuiltInFeedPush` permission. To delete a package, or replace an existing package requires the `BuiltInFeedAdminister` permission. For your convenience Octopus Deploy provides a built-in role called **Package Publisher** that has been granted the `BuiltInFeedPush` permission. :::div{.hint} **Consider using a service account** Instead of using your own API key, consider using a [Service Account](/docs/security/users-and-teams/service-accounts) to provide limited permissions since packages will normally be pushed by an automated service like your build server. Service Accounts are API-only accounts that cannot be used to sign in to the Octopus Web Portal. ::: :::div{.hint} **Using built-in package repository triggers?** If you are using [built-in package repository triggers](/docs/projects/project-triggers/built-in-package-repository-triggers) you will also require the permissions to create a release for all the relevant projects in the required environments. To diagnose issues with pushing packages used for built-in package repository triggers follow the troubleshooting guide on the [built-in package repository triggers](/docs/projects/project-triggers/built-in-package-repository-triggers) page. 
:::

## Moving the location of the built-in repository {#PackageRepositories-MovingTheLocationOfTheBuilt-InRepository}

See [moving Octopus Server folders](/docs/administration/managing-infrastructure/server-configuration-and-file-storage/moving-octopus-server-folders/#move-octopus-home-folder).

## Built-in repository reindexing

Octopus automatically re-indexes the built-in repository at startup to ensure that it is in sync. We do not recommend manually placing packages into the package store; however, in certain limited circumstances (such as restoring a backup or a big package migration) it can be useful.

For most users, this will be a seamless background task. However, for some installations, this may cause performance issues. Users with `AdministerSystem` rights can disable the re-indexing task on the **Deploy ➜ Manage ➜ Packages** page. Note that packages uploaded via the [recommended methods](/docs/packaging-applications/package-repositories/built-in-repository/#pushing-packages-to-the-built-in-repository) will still be indexed.

## Learn more

- Generate an Octopus guide for [the Octopus built-in repository and the rest of your CI/CD pipeline](https://octopus.com/docs/guides).

# Container registries

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/container-registries.md

This section provides instructions on how to set up a number of container registries from third parties as external feeds for use within Octopus.

# Nexus Hosted Maven repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/maven-repositories/nexus-maven-feed.md

Both Nexus OSS and Nexus Pro offer three types of Maven repository: Hosted, Group, and Proxy. This guide will cover creating a Hosted Maven repository and adding it as an External Feed in Octopus Deploy.
:::div{.info}
This guide was written using Nexus OSS version 3.37.0-01.
:::

## Configuring a Hosted Maven repository

From the Nexus web portal, click on the **gear icon** to get to the **Administration** screen.

:::figure
![Administration gear Icon](/docs/img/packaging-applications/package-repositories/guides/images/nexus-nuget-administration.png)
:::

Click on **Repositories**.

:::figure
![Repositories](/docs/img/packaging-applications/package-repositories/guides/images/nexus-repositories.png)
:::

Click **Create repository**.

:::figure
![Create repository](/docs/img/packaging-applications/package-repositories/guides/images/nexus-create-repository.png)
:::

Choose **maven2 (hosted)** from the list of repositories to create.

:::figure
![Maven (hosted)](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/nexus-maven-repository.png)
:::

Give the repository a name and change any applicable configuration options. Click **Create repository** when you are done.

:::figure
![Create repository](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/nexus-create-maven-repository.png)
:::

When the repository has been created, click on the entry in the list to bring up the repository properties.

:::figure
![MyNexusMavenRepo](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/nexus-mynexusmavenrepo.png)
:::

Copy the URL property; this is what you will use when adding the repository as an external feed.

:::figure
![Repository URL](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/nexus-maven-url.png)
:::

Optionally, upload a package to the repository so you can verify search functionality when it is added as an external feed.

## Adding a Nexus Maven repository as an Octopus External Feed

Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and select the `Maven Feed` Feed type.
Give the feed a name and in the URL field, paste the URL you copied earlier. It should look similar to this format:

`https://your.nexus.url/repository/[repository name]`

![Nexus Maven feed](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/nexus-maven-feed.png)

# Artifactory Local NuGet repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/nuget-repositories/artifactory-nuget-feed.md

Artifactory provides support for a number of [NuGet repositories](https://jfrog.com/help/r/jfrog-artifactory-documentation/nuget-repositories) including Local, Remote and Virtual repositories. An Artifactory Local NuGet repository can be configured in Octopus as an external [NuGet feed](/docs/packaging-applications/package-repositories/nuget-feeds).

## Configuring an Artifactory Local NuGet repository

:::div{.hint}
This guide was written using Artifactory version `7.11.5`.
:::

From the Artifactory web portal, navigate to **Administration ➜ Repositories**. From there, choose **Add Repositories ➜ Local Repository**:

![Artifactory repositories addition](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-repo-add.png)

From the Package Type selection screen, choose **NuGet**:

:::figure
![Artifactory local repository](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-repo-select.png)
:::

Give the repository a name in the **Repository Key** field, and fill out any other settings for the repository.

:::figure
![Artifactory local repository settings](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-repo-initial-settings.png)
:::

When you've entered all settings, click **Save & Finish**.

### Configure repository authentication

With the repository configured, the next step is to configure access so Octopus can retrieve package information.
The recommended way is to either configure a [user](https://jfrog.com/help/r/jfrog-platform-administration-documentation/manage-users) with sufficient permissions, or use an [access token](https://jfrog.com/help/r/jfrog-platform-administration-documentation/access-tokens). This user is the account which Octopus will use to authenticate with Artifactory. :::div{.warning} Every organization is different and the authentication example provided here is only intended to demonstrate functionality. Ensure you are complying with your company's security policies when you configure any user accounts and that your specific implementation matches your needs. ::: From the Artifactory web portal, navigate to **Administration ➜ Identity and Access ➜ Users** and select **New User**. :::figure ![Artifactory Add user](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-user.png) ::: Fill out the **Username**, **Email Address**, **Password** and any other settings. :::div{.hint} If you have an existing group to add the user to, you can do that here. Alternatively you can add the user account when creating a new group. ::: When you've entered all settings, click **Save**. Next, we need to ensure the user is in a [group](https://jfrog.com/help/r/jfrog-platform-administration-documentation/manage-groups) which can access our new repository. From the Artifactory web portal, navigate to **Administration ➜ Identity and Access ➜ Groups** and select **New Group**. :::figure ![Artifactory Add Group](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-group.png) ::: Fill out the **Group Name** and any other settings. Ensure the user you created earlier is included in the group (in the right hand column). When you've entered all settings, click **Save**. 
Lastly, we need to ensure the group has [permissions](https://jfrog.com/help/r/jfrog-platform-administration-documentation/permissions) for Octopus to retrieve package information. From the Artifactory web portal, navigate to **Administration ➜ Identity and Access ➜ Permissions** and select **New Permission**. From there, give the permission a **Name**, and choose the **Add Repositories** option: :::figure ![Artifactory add permission](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission.png) ::: From the repository selection screen, choose the newly created repository so that it's in the **Included Repository** column and click **OK**: :::figure ![Artifactory add permission repository](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission-repo.png) ::: Next, switch to the **Groups** tab, and add a new group from **Selected Groups**: :::figure ![Artifactory add permission group](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission-add-group.png) ::: From the groups selection screen, choose the newly created group, or an existing group so that it's in the **Included Group** column and click **OK**. :::figure ![Artifactory permissions include group](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission-include-group.png) ::: Finally, choose the permissions to grant the group on the included repositories: :::figure ![Artifactory repository permissions](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission-repo-permissions.png) ::: :::div{.hint} Octopus needs `Read` permissions as a minimum on the Local repository in order to search and download packages. 
::: When you've entered all settings, review that your permissions are configured the way you want, and click **Create**. :::div{.hint} You can also choose individual users to assign this permission to. ::: ### Anonymous authentication An alternative to configuring a user is to enable [anonymous access](https://jfrog.com/help/r/jfrog-artifactory-documentation/anonymous-access-to-nuget-repositories) on the NuGet repository. ## Adding an Artifactory Local NuGet repository as an Octopus External Feed Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and selecting the `NuGet Feed` feed type. Give the feed a name and, in the URL field, enter the HTTP/HTTPS URL of the feed for your Artifactory Local repository in the format: `https://your.artifactory.url:port/artifactory/api/nuget/v3/local-nuget-repo` Replace the URL and port in the example above with your Artifactory server's values. In addition, replace `local-nuget-repo` with the name of your Local NuGet repository. :::figure ![Artifactory Local NuGet feed](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-feed.png) ::: Save and test your feed to ensure that the connection is authenticated successfully. # Email notification step Source: https://octopus.com/docs/projects/built-in-step-templates/email-notifications.md Deployments can have a strong impact on the people whose work depends on the system being deployed. Great communication is an important part of a great deployment strategy, and email steps are a key way that Octopus can help you keep everyone in the loop. You may want to: - Notify stakeholders when a new version of an app has been deployed to production. - Let testers know when a new version is available in UAT. - Use email in conjunction with [manual intervention approvals](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) to make sure everyone is ready for a new deployment. 
[Getting Started - Email Notifications](https://www.youtube.com/watch?v=VromFu8RYxc) Before you can add email steps to your deployment processes, you need to add your SMTP configuration. ## SMTP configuration To add your SMTP configuration navigate to **Configuration ➜ SMTP** and set the following values: | Property | Description | Example | | ------------------ | ------------------------------------ | ----------- | | SMTP Host | The DNS hostname for your SMTP server. | smtp.example.com | | SMTP Port | The TCP port for your SMTP server. | 25 | | Timeout | The timeout for SMTP operations. Value is in milliseconds. | 12000 (12 seconds) | | Use SSL/TLS | This option controls whether or not Octopus enforces using an SSL/TLS-wrapped connection. | True | | From Address | The address which all emails will be sent 'From'. | octopus@mydomain.com | | Credentials | Optional SMTP login / password if your SMTP server requires authentication. | mylogin@mydomain.com / SuperSecretPa$$word | Click **Save and test** to save the SMTP configuration and verify the values are valid: :::figure ![](/docs/img/projects/built-in-step-templates/images/smtp-configuration.png) ::: You will be prompted for an email address to send a test email to. Enter a test email address and click **Ok**. A *Send test email* task will start to verify your SMTP Configuration: :::figure ![](/docs/img/projects/built-in-step-templates/images/smtp-verify-task.png) ::: ### Google OAuth 2.0 Credentials Optionally you can use Workload Identity Federation and OAuth 2.0 for Google SMTP authentication. 
To do this, set the following values: | Property | Description | Example | | ------------------ | ------------------------------------ | ----------- | | Audience | The audience set on the Workload Identity Federation | `https://iam.googleapis.com/projects/{project-id}/locations/global/workloadIdentityPools/{pool-id}/providers/{provider-id}` | | Service Account | The email of the service account which has been granted access | service-account-name@{project-id}.iam.gserviceaccount.com | See the [Google cloud documentation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers) for instructions on creating and configuring a Workload Identity Federation. When setting up the Workload Identity Federation: - When granting access to the service account, the principal must have the subject attribute name set to `smtp`. Example: `https://iam.googleapis.com/projects/{project-id}/locations/global/workloadIdentityPools/{pool-id}/subject/smtp`. - The service account must have domain wide delegation with an OAuth scope of `https://mail.google.com/`, see [documentation](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority) on how to set this up. ### Microsoft OAuth 2.0 Credentials :::div{.warning} Support for Microsoft OAuth 2.0 authentication requires Octopus Server version 2025.2 ::: Optionally for Microsoft SMTP authentication, you can use Federated Credentials and OAuth 2.0. 
To do this, set the following values: | Property | Description | Example | | ------------------ | ------------------------------------ | ----------- | | Audience | The audience set on the Federated Credential | Defaults to `api://AzureADTokenExchange` | | Permission Scopes | The scopes to be included in the authentication token | Defaults to `https://outlook.office365.com/.default` | | Client ID | The Azure Active Directory Application ID/Client ID | GUID in the format xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | | Tenant ID | The Azure Active Directory Tenant ID | GUID in the format xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | For OAuth 2.0 you will need to: 1. Set up a Microsoft Entra ID App Registration. - See [documentation on registering an application](https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app?tabs=federated-credential%2Cexpose-a-web-api#register-an-application). - Set the configuration properties `Client ID` and `Tenant ID` with the values from your registered application. 2. Add a Federated Credential. - See [documentation on adding credentials](https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app?tabs=federated-credential%2Cexpose-a-web-api#add-credentials). - Set the Issuer value to a publicly accessible Octopus Server URI; this value must not have a trailing slash (/). - Set the Subject Identifier value to `smtp`. - The Audience value can be left as the default, or set to a custom value if needed. - Set the `Audience` configuration property with the value from your federated credential. 3. Configure Microsoft Exchange SMTP settings. - Add SMTP permissions for your Entra AD application; see [documentation](https://learn.microsoft.com/en-gb/exchange/client-developer/legacy-protocols/how-to-authenticate-an-imap-pop-smtp-application-by-using-oauth#add-the-pop-imap-or-smtp-permissions-to-your-microsoft-entra-application). 
- For Exchange Online access, ensure you have added the `SMTP.SendAsApp` Office 365 Exchange Online application permission and granted admin consent. - Register your application's service principal in Exchange; see [documentation](https://learn.microsoft.com/en-gb/exchange/client-developer/legacy-protocols/how-to-authenticate-an-imap-pop-smtp-application-by-using-oauth#register-service-principals-in-exchange). :::div{.hint} From 2025.3, you can specify custom Permission Scopes to be included in the OAuth 2.0 authentication token. This supports the use of Azure Communication Services (ACS). To use this, ensure your SMTP Username in Azure matches your specified `From Address`. More information can be found in the [ACS documentation](https://learn.microsoft.com/en-us/azure/communication-services/quickstarts/email/send-email-smtp/send-email-smtp-oauth). ::: ## Add an email step Email steps are added to deployment processes in the same way as other steps. 1. Navigate to your [project](/docs/projects). 2. Click **Process** and **Add step** to add a step to an existing process. Alternatively, if this is a new deployment process, click the **Create process** button. 3. Find the **Send an Email** step and click **Add step**. 4. Give the step a short, memorable name. 5. Choose the recipients of the email. You have several options: - Enter a comma-separated list of email addresses. - Bind to a [variable](/docs/projects/variables) which defines a list of email addresses (this is useful for tailoring your recipient list per-environment). - Choose [one or more teams](/docs/security/users-and-teams) to include members of those teams in the recipient list. - Use a combination of all of these options. Octopus will build the resulting recipient list during the deployment, remove duplicate email addresses, and send the email to each recipient. 6. Provide a subject line for the emails. 
The subject can contain Octopus [basic variable syntax](/docs/projects/variables/variable-substitutions/#basic-syntax-variablesubstitutionsyntax-basicsyntax). 7. Add the body of the email. The email can be sent in plain text or HTML, and you can use Octopus [extended variable syntax](/docs/projects/variables/variable-substitutions/#extended-syntax) to include information about the deployment in the email. See the [email template examples](#email-template-examples) below. 8. You can set conditions to determine when the step should run. For instance: - Send the email only for successful deployments to certain environments. - Send a specific email for failed deployments. - Send an email based on the value of a variable expression which works well with [output variables](/docs/projects/variables/output-variables). 9. Save the deployment process. ## Email template examples You can set the email subject and author the email body as plain text or HTML content. You can even use the Octopus [variable syntax](/docs/projects/variables/variable-substitutions) to include information about the deployment in the email. ### Deployment summary template This template collects basic information about the deployment, including the package versions included in each step. ```xml

<h1>Deployment of #{Octopus.Project.Name} #{Octopus.Release.Number} to #{Octopus.Environment.Name}</h1>

<p>Initiated by #{unless Octopus.Deployment.CreatedBy.DisplayName}#{Octopus.Deployment.CreatedBy.Username}#{/unless} #{if Octopus.Deployment.CreatedBy.DisplayName}#{Octopus.Deployment.CreatedBy.DisplayName}#{/if} #{if Octopus.Deployment.CreatedBy.EmailAddress} (#{Octopus.Deployment.CreatedBy.EmailAddress})#{/if} at #{Octopus.Deployment.Created}</p>

#{if Octopus.Release.Notes}
<h2>Release notes</h2>

<p>#{Octopus.Release.Notes}</p>
#{/if}

<h2>Deployment process</h2>

<p>The deployment included the following actions:</p>

<ul>
#{each action in Octopus.Action}
  <li><strong>#{action.Name}</strong> #{if action.Package.NuGetPackageId}&mdash; <a href="http://nuget.org/packages/#{action.Package.NuGetPackageId}/#{action.Package.NuGetPackageVersion}">#{action.Package.NuGetPackageId} version #{action.Package.NuGetPackageVersion}</a>#{/if}</li>
#{/each}
</ul>

<p><a href="http://my-octopus/app#/deployments/#{Octopus.Deployment.Id}">View the detailed deployment log.</a></p>
``` :::div{.hint} To use the template in your projects, replace `nuget.org` with the DNS name of your NuGet server, and `my-octopus` with the DNS name of your Octopus Server. Make sure you select *Body is HTML* on the email step configuration page. ::: The output of the template will be an HTML email like: :::figure ![](/docs/img/projects/built-in-step-templates/images/email-output.png) ::: ### Step status summary template The outcome of each step can be included using a template like the one below: ```xml

<h3>Task summary</h3>

<ol>
#{each step in Octopus.Step}
  #{if step.Status.Code}
  <li>#{step | HtmlEscape} &mdash; <strong>#{step.Status.Code}</strong>
    #{if step.Status.Error}
    <pre>#{step.Status.Error | HtmlEscape}</pre>
    <pre>#{step.Status.ErrorDetail | HtmlEscape}</pre>
    #{/if}
  </li>
  #{/if}
#{/each}
</ol>
```
:::div{.hint} **Step error detail** `step.Status.Error` and `step.Status.ErrorDetail` will only display the exit code and Octopus stack trace for the error. As we cannot parse the deployment log, we can only extract the exit/error codes. It cannot show detailed information on what caused the error. For full information on what happened when the deployment fails, you will need to reference the logs. See [system variables](/docs/projects/variables/system-variables) for more detail. ::: ### Referencing package metadata This example displays package ID and version numbers for any steps that reference a package. 
```xml
#{each action in Octopus.Action}
#{if Octopus.Action[#{action.StepName}].Package.PackageId}
PackageId: #{Octopus.Action[#{action.StepName}].Package.PackageId}
Package Version: #{Octopus.Action[#{action.StepName}].Package.PackageVersion}
#{/if}
#{/each}
```
:::div{.hint} Iterating over `Octopus.Action` like above is a useful way to retrieve data from all steps in your process without having to refer to a hard-coded step name that could potentially change. ::: #### Referencing additional package metadata Using [custom scripts](/docs/deployments/custom-scripts) you can include additional [reference packages](/docs/deployments/custom-scripts/run-a-script-step/#referencing-packages). This example displays package ID and version numbers for any steps that include additional reference packages. 
```xml
#{each action in Octopus.Action}
#{each package in action.Package}
#{if Octopus.Action[#{action.StepName}].Package[#{package}].PackageId}
PackageId: #{Octopus.Action[#{action.StepName}].Package[#{package}].PackageId}
Package Version: #{Octopus.Action[#{action.StepName}].Package[#{package}].PackageVersion}
#{/if}
#{/each}
#{/each}
```
# Custom step templates stored in Git Source: https://octopus.com/docs/projects/custom-step-templates/custom-step-templates-stored-in-git.md Since Octopus 2023.4, it has been possible to create [custom step templates](/docs/projects/custom-step-templates) with scripts sourced directly from Git. To start, use the same steps you would normally take to create a custom step template. Just be sure to select a compatible step, as some steps aren’t suitable for being sourced from Git. ## Git compatible base steps The built-in steps listed below are compatible with being sourced from Git and can be used for custom step templates: - [Run a Script](/docs/deployments/custom-scripts/run-a-script-step) - [Run an Azure Script](/docs/deployments/azure/running-azure-powershell#running-scripts-in-octopus-cloud) - [Run an AWS CLI Script](/docs/deployments/custom-scripts/aws-cli-scripts) - [Run gcloud in a Script](/docs/deployments/google-cloud/run-gcloud-script) - [Deploy an Azure Resource Manager template](/docs/runbooks/runbook-examples/azure/resource-groups) - [Run a Service Fabric SDK PowerShell Script](/docs/deployments/custom-scripts/service-fabric-powershell-scripts) - [Run a kubectl script](https://octopus.com/blog/custom-kubectl-scripting-in-octopus) - [Deploy Kubernetes YAML](/docs/kubernetes/steps/yaml) - [Deploy a Helm Chart](/docs/kubernetes/steps/helm) - [Deploy with Kustomize](/docs/kubernetes/steps/kustomize) - [Deploy a Bicep template](https://octopus.com/blog/using-the-deploy-a-bicep-template-step) - [Deploy an AWS CloudFormation template](/docs/deployments/aws/cloudformation) - [Apply a Terraform template](/docs/deployments/terraform/apply-terraform-changes) - [Destroy Terraform resources](/docs/deployments/terraform/apply-terraform-changes) - [Plan to apply a Terraform template](/docs/deployments/terraform/plan-terraform) - [Plan a Terraform 
destroy](/docs/deployments/terraform/plan-terraform) *Note: This is not a complete list as it is anticipated that additional steps will be added* You may use the filter at the top to help find a step to base your custom step template on: ![Base Step Filter](https://github.com/user-attachments/assets/bdae8828-02ab-41c2-b0a1-3604640c955b) ## Source Git compatible base steps for custom step templates will provide an option to select a source. The name of this option can differ depending on the step, including: - Script Source - Template Source - Chart Source To use Git as the applicable source, simply select the **Git repository** option in the Step tab. ![Script Source](https://github.com/user-attachments/assets/8a1d4c44-6865-4a3a-832c-206fb9a9f4b6) Once **Git repository** is selected, additional options will appear below in the **Step** tab. The options below are common examples, though certain base steps may differ. ## Repository URL In this section, you will specify the full URL to the root of your target repository. ![Repository URL](https://github.com/user-attachments/assets/5b4e9bb6-04a1-44d6-9ade-c8d306625c35) ## Authentication Unlike database sourced custom step templates, Git sourced templates typically require authentication to access the repository holding the script. Git credentials can be added to a Space by navigating to **Deploy ➜ Manage ➜ Git Credentials** or via the + button in the Authentication section of the **Step** tab. Use the drop-down arrow to select the appropriate Git credentials once they have been added. If newly added Git credentials aren’t showing up, click on the **circular refresh button** next to the drop-down arrow. ![Authentication](https://github.com/user-attachments/assets/176e86ee-5155-4b3f-bb2a-cf866ee7bf04) ## Branch Settings In this section, you will specify the default branch name. 
![Branch Settings](https://github.com/user-attachments/assets/46298d3c-f28c-45c3-be42-41effac326e0) ## Path Similar to **Source**, the **Path** section will be titled differently depending on the base step type. Examples include: - Script File Path - Template Path - Chart Directory Any of the above allows you to specify a relative path from the root of the Git repository to the targeted item. Using the example repository of `https://Github.com/OctopusSamples/OctoPetShop.Git` and a target file residing at `Scripts/MyScript.sh` within the repository, simply use **Scripts/MyScript.sh** here as shown below: ![Path](https://github.com/user-attachments/assets/15666ac4-542c-432f-93ca-5057ed3e4f68) ## Parameters and other options Different base steps used for custom step templates sourced from Git may have additional options such as **Script Parameters** and other options specific to that type of step. You may refer to the instructions found in the UI for these options or [relevant step pages](#git-compatible-base-steps) in our documentation for more information. ## Version management For custom step templates sourced from Git, only the specified target item is stored in Git. Everything in the **Step**, **Parameter**, and **Settings** tabs is stored in the Octopus database. Once a step template is added to a project, an entry is added to the **Usage** section (located just under the title of the step template). Within **Usage**, there are two tabs: - Version-Controlled Projects - Database-Backed Projects Git sourced custom step templates work just like standard step templates in that they are compatible with both types of projects. However, the version displayed on the usage page is only incremented by database changes to a given custom step template. 
Git commits that change or update the item sourced from Git are not reflected in the version numbers shown on the usage page for Git sourced custom step templates. This is handled separately when: - creating a release - creating a runbook snapshot (database sourced) - running a Git sourced runbook ## Selecting Git sourced custom step template versions Octopus offers three ways to select a Git sourced custom step template version, including: - Branches - Tags - Commits All three options correspond to the listed repository. When creating a new release, you can see the selection that was made for the previous release. You will notice an icon corresponding to the adjacent branch name, tag, or commit hash. ![Select By Branch Tag Or Commit](https://github.com/user-attachments/assets/dab6f6eb-943e-4cf7-878f-e908921d6bb2) In the API, this information can be found for a given Release ID under **SelectedGitResources**: ![SelectedGitResources](https://github.com/user-attachments/assets/7840cbb9-7fd0-4590-bb77-d81852b3ccc1) ## Git Protection Rules Similar to packages, you also have the option to implement [Git Protection Rules](/docs/releases/channels#git-protection-rules) for custom step templates stored in Git. ## Additional resources Using Git resources directly in deployments: [Octopus Blog](https://octopus.com/blog/git-resources-in-deployments) Octopus 2023.4 - Sourcing scripts from Git: [YouTube](https://www.youtube.com/watch?v=waUktRhFY-g) # Deployment process Source: https://octopus.com/docs/projects/deployment-process.md Now that you have access to an Octopus Server, your [infrastructure is configured](/docs/infrastructure/), and your [applications packaged](/docs/packaging-applications), you're ready to start deploying your software. A deployment process is a set of steps that Octopus Server orchestrates to deploy your software. Each project has a single deployment process. 
You define your deployment processes by creating projects and then adding steps and variables to the project. Each step contains a specific action (or set of actions) executed as part of the deployment process each time you deploy your software. Octopus has over 300 built-in and community-contributed step templates for deploying almost anything. Once you have set up a deployment process, you won't need to change it between deployments. However, you can add or edit steps anytime as your process or infrastructure changes. ![A simple deployment process in Octopus Deploy](/docs/img/shared-content/concepts/images/deployment-process.png) ## A Hello world deployment process To define a simple deployment process in Octopus that executes a hello world script on the Octopus Server, complete the following steps: 1. Navigate to **Projects**. 2. Select **Add Project**. 3. Name the project, for instance, `Hello world`, and click **Save**. 4. Click **Create Process**. 5. Choose the type of step you'd like to add to filter the available steps: **Script**. 6. Find the **Run a Script** step and click **Add Step**. 7. In the process editor, give the step a name, for instance, `Run Hello world script`. 8. In the **Execution Location** section, select **Run on the Octopus Server**. 9. Paste the following PowerShell script into the **Inline Source Code** editor: 
```
Write-Host "Hello, World!"
```
10. Click **Save**. You now have a simple hello world deployment process. :::div{.info} If you're using Octopus Cloud you can't run scripts directly on the Octopus Server. Instead, you can select **Run once on a worker** which will run the script on a [dynamically provisioned worker](/docs/infrastructure/workers/dynamic-worker-pools). ::: ## Create a release 1. From the process page, click **Create Release**, and then click **Save**. 2. Click **Deploy to Development...**, then click **Deploy**. This will deploy the release. 
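If you later move a project like this to version control (Config as Code), the script step is stored in your Git repository as OCL. The sketch below is illustrative only — the property names are recalled from typical Octopus OCL output and may differ slightly in your version, so verify against a real converted project:

```hcl
step "run-hello-world-script" {
    name = "Run Hello world script"

    action {
        action_type = "Octopus.Script"
        properties = {
            Octopus.Action.RunOnServer = "true"
            Octopus.Action.Script.ScriptBody = <<-EOT
                Write-Host "Hello, World!"
            EOT
            Octopus.Action.Script.ScriptSource = "Inline"
            Octopus.Action.Script.Syntax = "PowerShell"
        }
    }
}
```

The step slug (`run-hello-world-script`) is generated from the step name used above; either representation describes the same deployment process.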
In the task summary, you'll see the release was deployed to your *Development* environment, and the step *Run Hello world script* ran on the Octopus Server or selected worker. :::figure ![Hello world task summary](/docs/img/projects/deployment-process/images/hello-world.png) ::: This is an example of a very simple process. The following sections go into more detail about each part of the process. ## Projects Before you can define how your software is deployed, you must create a project for the deployment process. Projects contain the deployment steps and configuration variables that define how your software is deployed. Learn more about [projects](/docs/projects). ## Lifecycles Lifecycles control how your software is promoted through your environments and which projects are associated with which environments. Learn more about [lifecycles](/docs/releases/lifecycles). ## Deployment steps Steps contain the actions your deployment process will execute each time your software is deployed. Deployment processes can have one or many steps, and steps can run in sequence or in parallel. In addition to a variety of deployment steps, you can include manual intervention steps to get sign-off before deployment, email notification steps to keep everybody informed about your process, or even skip steps under different circumstances. Learn more about [steps](/docs/projects/steps). ## Configuration features When you deploy your software, it needs to be configured for the specific environments it will be deployed to. Configuration files let you define custom installation directories, database connections, and other settings that make it possible to deploy your software. Learn more about [configuration features](/docs/projects/steps/configuration-features). ## Variables Octopus supports variables to make it easier to define application settings for your deployment processes without the need to hardcode them. 
For instance, you might use different connection strings for apps deployed to Test and Production. Variables let you define these settings and then refer to them by the variable name throughout the deployment process, meaning you don't have to manually change them between deployments, or even give them much thought after the variables and deployment process have been defined. Learn more about [variables](/docs/projects/variables). ## Conditions You can specify run conditions on the steps that you define to give you greater control over the deployment process. Learn more about [conditions](/docs/projects/steps/conditions). ## Deploying releases In Octopus you create releases to be deployed. Projects have multiple releases and releases can be deployed multiple times across different infrastructure. Learn more about [releases](/docs/releases). ## Working with the Octopus API Octopus Deploy is built API-first, which means everything you can do through the Octopus UI can be done with the API. In the API we model the deployment process the same way, starting at the project: - Project - Deployment process - Steps - Actions We have provided lots of helpful functions for building your deployment process in the [.NET SDK](/docs/octopus-rest-api/octopus.client), or you can use the raw HTTP API if that suits your needs better. Learn about using the [Octopus REST API](/docs/octopus-rest-api). :::div{.success} Record the HTTP requests made by the Octopus UI to see how we build your deployment processes using the Octopus API. You can do this in the Chrome developer tools, or using a tool like Fiddler. ::: # Artifacts Source: https://octopus.com/docs/projects/deployment-process/artifacts.md Artifacts in Octopus provide a convenient way to collect files from remote machines and copy them to the Octopus Server. Examples of where artifacts may be useful are: - Collecting log files from other programs. - Copying configuration files to inspect values. 
Artifacts can be collected from anywhere Octopus runs scripts - for example, the [Script Console](/docs/administration/managing-infrastructure/script-console/) or [custom scripts](/docs/deployments/custom-scripts) in a deployment. Artifacts are uploaded to the Octopus Server after a script runs. You can download them from the task output or via the [Octopus API](/docs/octopus-rest-api). :::figure ![](/docs/img/projects/deployment-process/images/artifacts-access.png) ::: ## Collecting artifacts using scripts You can collect artifacts using any of the scripting languages supported by Octopus. In each scripting language, you can specify the path to the file you want to collect as an artifact either as an absolute path or as a path relative to the current working directory. By default, the file name will be used as the artifact name, but you can provide a custom name for the artifact as an alternative.
PowerShell

```powershell
# Collect a custom log file from the current working directory using the file name as the name of the artifact
New-OctopusArtifact "output.log"

# Collect all .xml files contained in the current working directory recursing subdirectories
Get-ChildItem . -Recurse -Include *.xml | New-OctopusArtifact

# Collect the hosts file but using a custom name for each machine so you can differentiate between them
# Note: to collect this artifact would require the Tentacle process to be elevated as a high privileged user account
New-OctopusArtifact -Path "C:\Windows\System32\drivers\etc\hosts" -Name "$([System.Environment]::MachineName)-hosts.txt"
```
C#

```csharp
// Collect a custom log file from the current working directory using the file name as the name of the artifact
CreateArtifact("output.log");

// Collect the hosts file but using a custom name for each machine so you can differentiate between them
// Note: to collect this artifact would require the Tentacle process to be elevated as a high privileged user account
CreateArtifact(@"C:\Windows\System32\drivers\etc\hosts", System.Environment.MachineName + "-hosts.txt");
```
Bash

```bash
# Collect a custom log file from the current working directory using the file name as the name of the artifact
new_octopusartifact output.log

# Collect the hosts file but using a custom name for each machine so you can differentiate between them
# Note: to collect this artifact would require the SSH user account to be elevated as a high privileged user account
new_octopusartifact /etc/hosts $(hostname)-hosts.txt
```
F#

```fsharp
// Collect a custom log file from the current working directory using the file name as the name of the artifact
Octopus.createArtifact "output.log"

// Collect the hosts file but using a custom name for each machine so you can differentiate between them
// Note: to collect this artifact would require the Tentacle process to be elevated as a high privileged user account
Octopus.createArtifact @"C:\Windows\System32\drivers\etc\hosts" (Some (System.Environment.MachineName + "-hosts.txt"))
```
Python3

```python
import os

# Collect a custom log file from the current working directory using the file name as the name of the artifact
createartifact("output.log")

# Collect the hosts file but using a custom name for each machine so you can differentiate between them
# Note: to collect this artifact would require the Tentacle process to be elevated as a high privileged user account
# (a raw string is used so the backslashes in the Windows path are not treated as escape sequences)
createartifact(r"C:\Windows\System32\drivers\etc\hosts", "{}-hosts.txt".format(os.environ["COMPUTERNAME"]))
```
### Collecting artifacts with execution containers You can collect artifacts from steps used with the [execution container for workers](/docs/projects/steps/execution-containers-for-workers) feature too. The source file for the artifact must be saved to, and collected from, the **fully qualified path** of one of the directories (or subdirectories) mapped into the execution container as a volume. The recommended volume to use is the temporary directory created within the `/Work` workspace, for example, `/etc/octopus/Tentacle/Work/20221128114036-119427-56`. This directory's path can be found in the `PWD` environment variable. After the step has executed, the directory and its contents are removed. The following script would collect an artifact called `foo.txt` from the temporary working directory using the `$PWD` environment variable:
Bash

```bash
echo "Hello" > $PWD/foo.txt
new_octopusartifact $PWD/foo.txt
```
PowerShell

```powershell
"Hello" > "$($PWD)/foo.txt"
New-OctopusArtifact "$($PWD)/foo.txt"
```
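The same pattern can be used from a Python script step. This is a minimal sketch — the `createartifact` helper is injected by Octopus at deployment time, so the stub below exists only to make the sketch runnable outside a deployment:

```python
import os

# `createartifact` is provided by Octopus when this runs inside a deployment.
# Define a local stand-in so the sketch can also be exercised outside Octopus.
if "createartifact" not in globals():
    def createartifact(path, name=None):
        print("would collect artifact: {}".format(path))

# $PWD points at the temporary working directory mapped into the container
work_dir = os.environ.get("PWD", os.getcwd())
artifact_path = os.path.join(work_dir, "foo.txt")

with open(artifact_path, "w") as f:
    f.write("Hello\n")

createartifact(artifact_path)
```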
## Security concerns ### File privileges If you want to collect a file as an artifact, your script must be able to access and read that file. In most cases, files produced by your deployment were produced in the same security context as your running script, and everything will just work. In some cases you may want to collect certain files from the operating system which require elevated privileges, or perhaps a special user account. If you are using the Tentacle agent, make sure the Tentacle process is running as a user account with access to the file. If you are using an SSH connection, make sure the SSH user account has access to the file. ### Sensitive information Artifacts are collected by Octopus as-is to maintain the integrity of the files. If the files you want to collect contain sensitive information you should take care to scrub or mask that sensitive information before telling Octopus to collect the artifact. 
```powershell
# Get hold of the variables from Octopus
$username = $OctopusParameters["Database.Username"]
$password = $OctopusParameters["Database.Password"]
$reportFilePath = "upgrade-report.txt"

# Perform the operation as part of your deployment, writing the results to the report file
MyDatabaseUpgrader.exe -reportPath=$reportFilePath

# Scrub sensitive values from report
$mask = '*****'
(Get-Content $reportFilePath) -replace $username, $mask -replace $password, $mask | Set-Content $reportFilePath

# Now collect the scrubbed artifact
New-OctopusArtifact $reportFilePath
```
# Setting up projects and project groups Source: https://octopus.com/docs/projects/setting-up-projects.md You can manage your projects by navigating to the **Projects** tab in the Octopus UI: :::figure ![Octopus Dashboard](/docs/img/projects/octopus-projects-list.png) ::: If you have already created projects, or are joining an existing team, you'll see the existing projects on the projects page. 
## Add a project

Before you can define your deployment processes or runbooks, you must create a project:

1. Select **Projects** from the main navigation, and click **ADD PROJECT**.
2. Give the project a name.
3. Click **SHOW ADVANCED**.
4. Add a description for the project.
5. If you want to change the [Project group](#project-group), select an existing project group from the drop-down menu.
6. If you want to change the [Lifecycle](/docs/releases/lifecycles), select an existing lifecycle from the drop-down menu.
7. Click **SHOW LIFECYCLE** if you'd like to see a visual representation of the selected lifecycle.
8. Click **SAVE** and you will be taken to the newly created project's overview page.

Now that you've created a project, you can define your [deployment process](/docs/projects/deployment-process/) or [runbooks](/docs/runbooks).

## Project settings

You can change the project's settings by accessing the settings menu on the project's main page. The settings you can change are:

- Name
- Enable or disable the project to allow or prevent releases and deployments from being created
- [Project logo](#project-logo)
- Description
- [Project group](#project-group)
- [Release versioning](/docs/releases/release-versioning)
- [Release notes template](/docs/releases/release-notes#templates)

## Project tags {#project-tags}

:::div{.warning}
From Octopus Cloud version **2025.4.3897** we support tagging projects.
:::

You can apply tags to projects to classify and organize them with custom metadata. This allows you to:

- Classify projects by attributes like team, application type, or technology stack.
- Configure your deployment dashboard to display only projects with specific tags.
- Use project tags in dashboard advanced filters to customize your view.

:::div{.hint}
Only tags from tag sets that have been configured with the **Project** scope can be used to tag projects.
:::

Learn more about [tag sets](/docs/tenants/tag-sets), including tag set types, scopes, and how to create and manage them.

## Deployment settings

- Package re-deployment - Specify to always deploy all packages or to skip any package steps that are already installed.
- Deployment targets - Specify if deployments are allowed if there are no deployment targets:
  - Deployments with no target are allowed
  - There must be at least one enabled healthy target to deploy to in the environment.
  - Allow deployments to be created when there are no deployment targets - Use this where no steps in the process have targets (or are all run on the Server), or you are dynamically adding targets during deployment.
- Deployment target status - Choose to skip unavailable, or exclude unhealthy targets from the deployment.
- [Deployment changes template](/docs/releases/deployment-changes#templates) - Specify a template for each deployment's changes.
- Default failure mode - Specify whether or not to use [guided failure mode](/docs/releases/guided-failures).

## Project logo \{#project-logo}

Customize your project logo to make it easily identifiable amongst other projects.

1. From the project's main page, select **Settings**.
2. Click the **Logo** section of the settings page.
3. Select from our built-in icon library paired with your choice of color or upload a custom image.
4. Click **Save**.

:::div{.hint}
For custom images, in addition to supporting .jpg and .png files, we also support .gif files. This means you can have an animated icon to add a little flair to your Octopus Deploy instance!
:::

## Project group \{#project-group}

Project groups are a great way to organize your deployment projects. They have many uses; not only do they visually separate the projects, but you can also configure the dashboard to hide/show specific project groups and configure permissions to restrict access to them.
:::div{.hint}
The *Default Project* group contains all projects that have not been added to another group.
:::

## Add a project group

1. From the **Projects** tab, click **ADD GROUP**.
1. Give the group a name and description.
1. Click **SAVE**.

When the group is first created and doesn't have any projects associated with it, you will need to click **SHOW EMPTY GROUPS** on the projects page to see the group.

## Add projects to a group

After you have created a project group there are a number of ways you can add projects to the group:

- Navigate to the **Projects** page from the main navigation, find the group you want to add the project to, and click **ADD PROJECT**.
- Edit an existing project by navigating to the project, selecting **Settings** and editing the **Project Group**.
- Specify the **Project Group** under **Advanced Settings** when you create a new project.

### Edit or delete project groups

To edit or delete a project group click the project group's overflow menu (...) and select **edit**. From there you can edit the group's name or description. If you need to delete the group, click the overflow menu again and select **Delete**.

## Project permissions

For information about project permissions, see [managing users and teams](/docs/security/users-and-teams).

## Clone a project

Projects can be cloned.

1. From the project's menu, select **Settings**.
2. Click the overflow menu (...), and select **Clone**.

:::figure
![Clone a project](/docs/img/projects/images/clone-project.png)
:::

:::div{.warning}
**Version-controlled projects are not currently supported**
:::

3. Give the new project a name.
4. Review the settings for the new project and when you are satisfied, click **SAVE**.
After you've cloned a project, you can see details about where your project was cloned from, and which projects have been cloned from your project, by navigating to the project's overview page, selecting **Settings**, and looking at the **Cloning History** section.

# Configuration features

Source: https://octopus.com/docs/projects/steps/configuration-features.md

One of the essential steps in deploying software is configuring it to work with specific environments. This might mean pointing your application to the right database connection string, tweaking settings to run in production, or specifying a custom installation directory. Many of the steps that you define as part of your deployment process have additional configuration features available.

## Enable configuration features

You enable configuration features as you define the [steps](/docs/projects/steps/) in your [deployment process](/docs/projects/deployment-process).

1. If the step you are defining has configuration features available, there is a **CONFIGURE FEATURES** link. Click the link.
1. Select the features you would like to enable by clicking the relevant check-boxes in the list and click **OK**.

:::figure
![Configuration features screenshot](/docs/img/projects/steps/configuration-features/images/configuration-features.png)
:::

The features you have enabled will now be available in the **Features** section of the step you are defining.
You can configure the following features:

- [Custom installation directory](/docs/projects/steps/configuration-features/custom-installation-directory)
- [IIS web site and application pool](/docs/projects/steps/configuration-features/iis-website-and-application-pool)
- [Windows Service](/docs/projects/steps/configuration-features/windows-services)
- [Custom deployment scripts](/docs/deployments/custom-scripts)
- [Structured configuration variables](/docs/projects/steps/configuration-features/structured-configuration-variables-feature)
- [Configuration variables](/docs/projects/steps/configuration-features/xml-configuration-variables-feature)
- [.NET Configuration transforms](/docs/projects/steps/configuration-features/configuration-transforms)
- [Substitute variables in templates](/docs/projects/steps/configuration-features/substitute-variables-in-templates)
- IIS6+ Home directory
- [NGINX Web Server](/docs/projects/steps/configuration-features/nginx-web-server)
- Red Gate database deployment

# Custom installation directory

Source: https://octopus.com/docs/projects/steps/configuration-features/custom-installation-directory.md

The custom installation directory feature is one of the [configuration features](/docs/projects/steps/configuration-features/) you can enable as you define the [steps](/docs/projects/steps/) in your [deployment process](/docs/projects/deployment-process). You can specify a custom installation directory for [package](/docs/deployments/packages/) and [IIS](/docs/deployments/windows/iis-websites-and-application-pools) steps.

The custom installation directory feature deploys your package to a specified location on the target server. This feature helps when you are using an application that requires your files to be in specific locations, such as many Content Management Systems (CMS). Only use the *custom installation directory* feature when you really need it.
The standard convention for deploying packages is often the best and simplest way to deploy your packages, and it eliminates problems caused by file locks and stale files being left in the deployment folder. It also provides smoother deployments and less downtime for Windows Services and Web Applications, so before you configure a custom installation directory, review the [package deployment convention](/docs/deployments/packages/) and [package deployment feature ordering](/docs/deployments/packages/package-deployment-feature-ordering) to be certain that you really need to configure a custom installation directory.

## Add a custom installation directory

1. From your *Package Deploy* or *IIS* [step](/docs/projects/steps), click the **Configure Features** link.
2. Check the **Custom Installation Directory** check-box and click **Ok**.

:::figure
![Custom Installation Directory option](/docs/img/projects/steps/configuration-features/images/custom-installation-directory.png)
:::

When you return to your deployment process, you will see the **Custom Install Directory** option has been added to the **Features** section of the deployment process.

3. Add the [step](/docs/projects/steps) details:
   - Enter a name for the step.
   - Select the targets where the step should run.
   - Select the [package feed](/docs/packaging-applications/package-repositories/) where the [package](/docs/packaging-applications) will be available.
   - Enter the [package ID](/docs/packaging-applications/#package-id) for the package to be deployed.
4. Enter the path for the **custom installation directory**, or you can insert a [variable](/docs/projects/variables) if you have defined the path as a variable. Defining a [variable](/docs/projects/variables) with the directory path means you can scope different values to different environments.
For instance:

| Variable Name | Value | Scope |
| ---------------------- | ------------------------------ | ---------- |
| CustomInstallDirectory | \path\to\test\directory\ | Test |
| CustomInstallDirectory | \path\to\production\directory\ | Production |

Read more about [variables](/docs/projects/variables).

5. If you would like to remove existing files from the custom installation directory before your deployed files are copied to it, check the **Purge** check-box.
6. If there are files you would like to exclude from the purge, add the files and directories you want to keep to the **Exclude from purge** list. The **Exclude from purge** list must be a newline-separated list of file or directory names, relative to the installation directory. To exclude an entire directory specify it by name without a wildcard. Extended wildcard syntax is supported. For instance:

```
appsettings.config
Config
Config\*.config
**\*.config
```

7. Add any [conditions](/docs/projects/steps/conditions) you need to specify for the step, and then click **SAVE**. This will save and display the step you've just created. From here you can use the project overview menu to continue defining your [deployment process](/docs/projects/deployment-process/), or click **CREATE RELEASE** to create a [release](/docs/releases) and deploy your application.

Packages deployed to a custom installation directory are deployed in the same way as other package deploy steps. Read about [how packages are deployed](/docs/deployments/packages/#how-packages-are-deployed) for more information.

# Bulk connection

Source: https://octopus.com/docs/projects/tenants/bulk-connection.md

Using the bulk tenant connection feature, you can connect tens, hundreds or thousands of tenants to a project in a single operation.

1. From the project's main page, select **Tenants**.
2. Click **Connect Tenants**
3. Choose the tenants you want to connect to your project, by clicking any tenant in the left-hand panel of the wizard.
Click the **-** button of a tenant in the right-hand panel to deselect that tenant.

4. Once you have selected the tenants you want to connect, click **Next**.
5. Choose the [environments](/docs/infrastructure/environments) you want the selected tenants to be connected to. You can select just one or two from the drop-down menu, or click **Assign all available environments** to select all available environments.
6. A preview of the selected tenants and environments is shown in the Connection preview panel. Once you are happy with the selected tenants and environments they will be connected to, click **Connect Tenants**.
7. Octopus will start connecting your selected tenants to the project in the background. You can navigate away from the page and Octopus will continue the operation until it's done.

:::div{.hint}
If some of your tenants should be connected to a different subset of environments, you can perform a bulk connection for each unique set of environments. For example, if the majority of your tenants should be connected to the `Production` environment, but a small number of tenants should be connected to both `Test` and `Production`, you would perform two bulk connection operations.
:::

### Filtering during tenant selection

:::figure
![](/docs/img/projects/tenants/bulk-connection-filters.png)
:::

You can use the Name and Tenant Tag filters to find a specific tenant or set of tenants to connect to your project. Tenant Tag filters can be accessed by clicking **Expand Filters**. When filters are active, clicking **Select all results** will add all tenants that match your filters to your selection. You can perform multiple rounds of filtering and selecting to select the exact set of Tenants you want to connect to the project.

### During the connection operation

:::figure
![](/docs/img/projects/tenants/bulk-connection-in-progress.png)
:::

A status indicator will show the progress of the operation, and the tenant list will be updated as tenants are connected.
You can navigate away from the page at any time, and the operation will continue. All users with permission to view the project will be able to see the progress of the connection. Only one bulk connection may be performed at a time, per project. If there's a connection operation already in progress for this project, **CONNECT TENANTS** will be disabled until it finishes.

### After the connection operation

:::figure
![](/docs/img/projects/tenants/bulk-connection-completed.png)
:::

The results of the most recent connection operation for a project will be shown for 24 hours after the operation completes.

## Older versions

The project bulk tenant connection feature is available from Octopus Deploy **2023.3** onwards.

# Variable substitutions

Source: https://octopus.com/docs/projects/variables/variable-substitutions.md

Variable substitutions are a flexible way to adjust configuration based on your [variables](/docs/projects/variables/) and the context of your [deployment](/docs/projects/deployment-process). You can often tame the number and complexity of your variables by breaking them down into simple variables and combining them together using expressions.

## Binding variables \{#binding-variables}

You can use Octopus's special binding syntax to reference a variable from within the value of another variable. This is sometimes referred to as using **Composite variables**, because you compose a variable value with other Octopus variables. In the following example, the `ConnectionString` variable references the `Server` and `Database` variables.
| Name | Value | Scope |
| ---------------- | ------------------------------------------ | ---------------- |
| Server | SQL | Production, Test |
| Database | PDB001 | Production |
| Database | TDB001 | Test |
| ConnectionString | Server=#\{Server\}; Database=#\{Database\} | |

In regular variable declarations, binding to a non-existent value will yield an empty string, so evaluating `ConnectionString` in the *Dev* environment will yield `Server=; Database=` because no `Database` or `Server` are defined for that environment.

If the file undergoing variable replacement includes a string that *shouldn't* be replaced, for example `#{NotToBeReplaced}`, you should include an extra hash (#) character to force the replacement to ignore the substitution and remove the extra #.

| Expression | Value |
| --------------------- | -------------------- |
| `##{NotToBeReplaced}` | `#{NotToBeReplaced}` |

Variable substitution within a hash-delimited string looks like the following. Given the variable:

| Name | Value |
| ------ | ------- |
| `Name` | `title` |

`###{Name}` would evaluate to `#title`.

:::div{.info}
Also read about [common mistakes for variables](/docs/projects/variables/sensitive-variables/#avoiding-common-mistakes) for more information
:::

## Using variables in step definitions \{#use-variables-in-step-definitions}

Binding syntax can be used to dynamically change the values of deployment step settings. If [variables are scoped](/docs/projects/variables/getting-started/#scoping-variables), this makes it really easy to alter a deployment step's settings based on the target environment.
Most text fields that support binding to variables will have a variable insert button:

:::figure
![Variable insert button on text fields that support variable binding](/docs/img/projects/variables/images/3278296.png)
:::

For settings that support variables but aren't text (such as drop downs or check-boxes), a button is displayed to toggle custom expression modes:

:::figure
![Toggling custom expression modes for settings that support variables but aren't text](/docs/img/projects/variables/images/3278297.png)
:::

## Extended syntax \{#extended-syntax}

Octopus supports an extended variable substitution syntax with capabilities similar to text templating languages. It's worth noting that this is now available everywhere, whereas previously it was limited to certain scenarios.

The capabilities of the extended syntax are:

- [Index Replacement](#index-replacement)
- [Calculation](#calculation) - `calc`
- [Conditionals](#conditionals) - `if`, `if-else` and `unless`
- [Repetition](#repetition) - `each`
- [Filters](#filters) - `HtmlEscape`, `Markdown` etc.
- [Differences from regular variable bindings](/docs/projects/variables/variable-filters/#differences-from-regular-bindings)
- [JSON Parsing](/docs/projects/variables/variable-filters/#json-parsing)

:::div{.hint}
[Octostache](https://github.com/OctopusDeploy/Octostache) is the open source component that powers this feature.
:::

### Index replacement \{#index-replacement}

Variable substitution inside an index makes it easy to dynamically retrieve variables within arrays/dictionaries.

Given the variables:

| Name | Value | Scope |
| ------------------- | ----------- | ----- |
| `MyPassword[Rob]` | `passwordX` | |
| `MyPassword[Steve]` | `passwordY` | |
| `MyPassword[Mary]` | `passwordZ` | |
| `UserName` | `Mary` | |

`#{MyPassword[#{UserName}]}` would evaluate to `passwordZ`.
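Index replacement composes with ordinary bindings, so a single template line can resolve a per-user value. A small illustration using the variables above (the connection-string shape is hypothetical, not part of the original example):

```text
User=#{UserName};Password=#{MyPassword[#{UserName}]}
```

This would evaluate to `User=Mary;Password=passwordZ`, because the inner binding `#{UserName}` is resolved first and then used as the index.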
### Calculations \{#calculation}

Basic mathematical calculations are supported in Octopus using the `calc` statement, and four main operators:

- Addition - `+`
- Subtraction - `-`
- Multiplication - `*`
- Division - `/`

:::div{.warning}
When using a variable on the left-hand-side of a divide (`/`) or subtraction (`-`) operation, the variable name must be enclosed in braces (`{ ... }`) to ensure correct parsing; this ensures the operator symbol is recognized as an operation, rather than part of the variable name.
:::

Given the variables:

| Name | Value | Scope |
| --------------------- | ---------------- | ----- |
| `IPOffset[Primary]` | `0` | |
| `IPOffset[Secondary]` | `180` | |
| `ScaleFactor` | `12` | |
| `Numbers` | `10,20,30,40,50` | |
| `My/Var` | `15` | |

- `192.168.0.#{calc IPOffset[Primary] + 1}` would evaluate to `192.168.0.1`
- `192.168.0.#{calc IPOffset[Secondary] + 1}` would evaluate to `192.168.0.181`
- `#{calc 22 * ScaleFactor}` would evaluate to `264`
- `#{each i in Numbers}#{calc i + 5}#{/each}` would evaluate to `15 25 35 45 55`
- `#{calc {My/Var} / 3}` would evaluate to `5`
- `#{calc 2 * My/Var}` would evaluate to `30`
- `#{calc {My/Var} - 4}` would evaluate to `11`
- `#{calc 22 - My/Var}` would evaluate to `7`

### Conditionals \{#conditionals}

Two conditional statements are supported in Octopus - `if` and `unless`; these have identical syntax, but `if` evaluates only if the variable is *truthy*, while `unless` evaluates if the variable is *falsy*. The syntax for `if` and `unless` is as follows:

`#{if VariableName}conditional statements#{/if}`

`#{unless VariableName}conditional statements#{/unless}`

Let's look at an example.
Given the variables:

| Name | Value | Scope |
| -------------- | ------- | ---------- |
| `DebugEnabled` | `True` | Dev |
| `DebugEnabled` | `False` | Production |

Then the following template:

```powershell
<compilation #{if DebugEnabled}debug="true"#{/if}>
```

The resulting text in the *Dev* environment will be:

```xml
<compilation debug="true">
```

And in *Production* it will be:

```xml
<compilation >
```

You could achieve a similar result, with a different default/fallback behavior, using the unless syntax:

```powershell
<compilation #{unless DebugEnabled}debug="false"#{/unless}>
```

#### Using variable filters in conditionals \{#conditional-filters}

It's possible to use [variable filters](/docs/projects/variables/variable-filters) to help create both complex run conditions and variable expressions, but there are limitations to be aware of.

:::div{.warning}
Using variable filters *inline* in the two [conditional statements](/docs/projects/variables/variable-substitutions/#conditionals) `if` and `unless` is **not supported**.
:::

If you wanted to include a variable run condition to run a step *only* when the release had a prerelease tag matching `my-branch`, you might be tempted to use the `VersionPreReleasePrefix` [extraction filter](/docs/projects/variables/variable-filters/#extraction-filters) to write a condition like this:

```
#{if Octopus.Release.Number | VersionPreReleasePrefix == "my-branch"}true#{/if}
```

However, the evaluation of the statement would always return `False` as the syntax is not supported. Instead, you need to create a variable that includes the variable filter you want to use.
For this example, let's assume it's named `PreReleaseBranch` with the value:

```
#{Octopus.Release.Number | VersionPreReleasePrefix}
```

Once you have created your variable, you can use it in your run condition like this:

```
#{if PreReleaseBranch == "my-branch"}True#{/if}
```

#### *Truthy* and *Falsy* Values \{#truthy-and-falsy}

The `if`, `if-else` and `unless` statements consider a value to be *falsy* if it is undefined; an empty string; or (ignoring case and any leading or trailing whitespace) `False`, `No` or `0`. All other values are considered to be *truthy*.

:::div{.warning}
**All variables are strings**

Note that when evaluating values, **all Octopus variables are strings** even if they look like numbers or other data types.
:::

### Complex syntax

Additional comparison operators are supported, including `==` and `!=`. Using complex syntax you can have expressions like `#{if Octopus.Environment.Name == "Production"}...#{/if}` and `#{if Octopus.Environment.Name != "Production"}...#{/if}`, or:

```text
#{if ATruthyVariable}
  Do this if ATruthyVariable evaluates to true
#{else}
  Do this if ATruthyVariable evaluates to false
#{/if}
```

#### OR Conditions

It's common to want to check for more than one value in an Octopus variable. To achieve this, you can create an effective `OR` statement by nesting a second `if` inside an `else` block:

```text
#{if Octopus.Environment.Name == "Development"}
  Do this if it's Development
#{else}
  #{if Octopus.Environment.Name == "Test"}
    Do this if it's Test
  #{else}
    Do this if it's neither
  #{/if}
#{/if}
```

This is the equivalent of checking the Environment name for Development or Test.

### Comparing one variable value with another

Sometimes, you might want to compare one variable value with another.
Given the variables:

| Name | Value | Scope |
| ---------------------- | ------- | ------------ |
| `Base.MaxLogLevel` | `ERROR` | |
| `Environment.LogLevel` | `DEBUG` | `Dev` |
| `Environment.LogLevel` | `INFO` | `Test` |
| `Environment.LogLevel` | `ERROR` | `Staging` |
| `Environment.LogLevel` | `ERROR` | `Production` |

Using conditional syntax, you can compare the value in the `Base.MaxLogLevel` variable with the `Environment.LogLevel` variable value.

Using the template:

```text
#{if Environment.LogLevel == Base.MaxLogLevel}We are at the MAX!#{else}We have room to grow!#{/if}
```

The resulting text in both *Dev and Test* will be:

```text
We have room to grow!
```

And in both *Staging and Production* it will be:

```text
We are at the MAX!
```

:::div{.hint}
Note both operands **don't** include the Octostache syntax denoting them as a variable e.g. `#{Environment.LogLevel}`. This is because within a conditional expression Octostache is already able to evaluate the operand as a variable value.
:::

### Run conditions

Conditions can be used to control whether a given step in a deployment process actually runs. In this scenario the conditional statement should return true/false, depending on your requirements. Some examples are:

`#{if Octopus.Environment.Name == "Production"}true#{/if}` would run the step only in Production.

`#{if Octopus.Environment.Name != "Production"}true#{/if}` would run the step in all environments other than Production.

`#{unless Octopus.Action[StepName].Output.HasRun == "True"}true#{/unless}` would run the step unless it has run before. This would be useful for preventing something like an email step from executing every time an auto deploy executed for new machines in an environment. It would be used in conjunction with the step calling `Set-OctopusVariable -name "HasRun" -value "True"` when it does run.
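Putting the last example together, the guarded step's script might look like the following sketch (the work performed is a placeholder, and `StepName` in the run condition stands in for your actual step name; `Set-OctopusVariable` is the cmdlet Octopus provides to script steps):

```powershell
# Do the one-time work, e.g. send a notification
Write-Host "Sending one-time notification..."

# Record that this step has run, so the run condition
#   #{unless Octopus.Action[StepName].Output.HasRun == "True"}true#{/unless}
# evaluates to false on subsequent automatic deployments
Set-OctopusVariable -name "HasRun" -value "True"
```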
### Repetition \{#repetition}

The `each` statement supports repetition over a set of variables, or over the individual values in a variable separated with commas.

#### Iterating over sets of values

More complex sets of related values are handled using multiple variables:

| Name | Value | Scope |
| ------------------------- | ---------------------- | ----- |
| `Endpoint[A].Address` | `http://a.example.com` | |
| `Endpoint[A].Description` | `Primary` | |
| `Endpoint[B].Address` | `http://b.example.com` | |
| `Endpoint[B].Description` | `Replica` | |

Given the template:

```powershell
Listening on:
#{each endpoint in Endpoint}
- Endpoint #{endpoint} at #{endpoint.Address} is #{endpoint.Description}
#{/each}
```

The result will be:

```powershell
Listening on:
- Endpoint A at http://a.example.com is Primary
- Endpoint B at http://b.example.com is Replica
```

#### Complex syntax with sets of values

Sometimes, you might want to compare one variable value with another contained in a set of values.

Given the variables:

| Name | Value |
| ------------------ | ------------------------------------------------------------------------------------------------------ |
| `WidgetIdSelector` | `Widget-2` |
| `MyWidgets` | `{"One":{"WidgetId":"Widget-1","Name":"Widget-One"},"Two":{"WidgetId":"Widget-2","Name":"Widget-Two"}}` |

Using complex syntax, you can iterate over the values in the `MyWidgets` variable and find the entry with the value specified in the second variable `WidgetIdSelector`.

Using the template:

```text
#{each w in MyWidgets}
'#{w.Value.WidgetId}': #{if w.Value.WidgetId == WidgetIdSelector}This is my Widget!#{else}No widget matched :(#{/if}
#{/each}
```

The resulting text will be:

```text
'Widget-1': No widget matched :(
'Widget-2': This is my Widget!
```

:::div{.hint}
**Tips:**

- Note both operands **don't** include the Octostache syntax denoting them as a variable e.g. `#{WidgetIdSelector}`.
This is because within a conditional expression Octostache is already able to evaluate the operands as variable values.
- The template references `.Value` which is a property available when using [JSON repetition](/docs/projects/variables/variable-filters/#repetition-over-json).
:::

#### Iterating over comma-separated values

Given the variable:

| Name | Value | Scope |
| ----------- | ------------------------------------------- | ----- |
| `Endpoints` | `http://a.example.com,http://b.example.com` | |

And the template:

```powershell
Listening on:
#{each endpoint in Endpoints}
- #{endpoint}
#{/each}
```

The resulting text will be:

```powershell
Listening on:
- http://a.example.com
- http://b.example.com
```

#### Special variables \{#special-variables}

Within the context of an iteration template, some special variables are available.

| Name | Description |
| ----------------------------- | --------------------------------------------------------------------------- |
| `Octopus.Template.Each.Index` | Zero-based index of the iteration count |
| `Octopus.Template.Each.First` | `"True"` if the element is the first in the collection, otherwise `"False"` |
| `Octopus.Template.Each.Last` | `"True"` if the element is the last in the collection, otherwise `"False"` |

Given the variable created as an index (comma separated):

| Name | Value |
| ----------- | ------------- |
| `Endpoints` | `SV1,SV2,SV3` |

And the template:

```powershell
#{each endpoint in Endpoints}
#{if Octopus.Template.Each.First}
write-host 'This is the first item in the Index : ' #{endpoint}
#{/if}
#{if Octopus.Template.Each.Last}
write-host 'This is the last item in the Index : ' #{endpoint}
#{/if}
#{/each}
```

The resulting text will be:

```powershell
This is the first item in the Index : SV1
This is the last item in the Index : SV3
```

### Further examples

If you're struggling with a specific syntax or Octostache construct, you can find more examples in the unit tests defined for the library
on GitHub: [Octostache Tests UsageFixture](https://github.com/OctopusDeploy/Octostache/blob/master/source/Octostache.Tests/UsageFixture.cs).

### Filters \{#filters}

The following filters are available:

- ToLower
- ToUpper
- ToBase64
- HtmlEscape
- XmlEscape
- JsonEscape
- YamlSingleQuoteEscape
- YamlDoubleQuoteEscape
- PropertiesKeyEscape
- PropertiesValueEscape
- Markdown
- NowDate
- NowDateUtc
- Format
- Replace
- Trim
- Truncate
- Substring

The filters can be invoked in the following way:

```powershell
#{Octopus.Environment.Name | ToLower}
```

For more information, see [Variable Filters](/docs/projects/variables/variable-filters).

## Older versions

The `calc` operator is available from Octopus Deploy **2023.2** onwards.

## Learn more

- [Variable blog posts](https://octopus.com/blog/tag/variables/1)

# Converting projects to Git

Source: https://octopus.com/docs/projects/version-control/converting.md

Git settings are configured per project and are accessed via the **Settings ➜ Version Control** link in the project navigation menu. This page will walk through how to convert a project to Git.

:::figure
![Version-control configuration UI](/docs/img/projects/version-control/converting/version-control-configuration.png)
:::

## Creating a new version-controlled project

To get a feel for the config-as-code feature, you may want to create a new project that you can test before committing to permanently converting an existing project. This project's deployment process, deployment settings, runbook processes, and non-sensitive variables will be stored in a Git repository when configured.

Click the **New Project** button and select **Use version control for this project.**

:::figure
![adding a project using vcs](/docs/img/projects/version-control/converting/add-project-vcs.png)
:::

Once you click the **Save** button, you'll be sent to the version control screen to configure your version control settings.
Enter the URL for your Git repository, and your username and password / personal access token.

:::div{.hint}
Different VCS providers require different URL formats for the Git repository: some (e.g. GitLab) require the URL to include `.git` at the end, while others (e.g. Azure DevOps) do not support this. GitHub supports either format. The best way to get the correct URL is to go to the repository in your provider and copy the URL used for cloning the repository from there.
:::

Learn more about [Git credentials in Octopus Deploy](/docs/projects/version-control/config-as-code-reference).

Next, add the directory in which you would like Octopus to store the project configuration. You can connect multiple projects to the same repository if they all use a different subdirectory (e.g. `.octopus/acme` and `.octopus/another-project`).

:::div{.hint}
You can have multiple deployment processes in the same repository if they all use a different subdirectory.
:::

Finally, add your default branch name in Branch Settings and click **Configure**.

Once you press the **Configure** button, a modal window will appear to confirm this change and give you the option to provide a summary and description for the first commit, or cancel the conversion.

:::figure
![configuring version control](/docs/img/projects/version-control/converting/configure-version-control.png)
:::

Your project is now configured with Version Control. You can see this change reflected on the left navigation of the page, where you can change branches. You can also confirm this in your Git repository. The `.octopus` directory will now be created, and it should contain the following files and folders:

- _deployment_process.ocl_
- _deployment_settings.ocl_
- _variables.ocl_
- _schema_version.ocl_
- _runbooks/_

The _runbooks/_ directory will contain _runbook-name.ocl_ files for any published runbooks.
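As an illustration of what lands in the repository, a minimal `deployment_process.ocl` with a single script step might look like the following. This is a hedged sketch: the step name, property names, and exact layout Octopus writes depend on your process and server version.

```hcl
step "run-a-script" {
    name = "Run a Script"

    action {
        action_type = "Octopus.Script"
        properties = {
            Octopus.Action.Script.ScriptBody = "Write-Host 'Hello from config as code'"
            Octopus.Action.Script.ScriptSource = "Inline"
            Octopus.Action.Script.Syntax = "PowerShell"
        }
    }
}
```

Because the file is plain text, changes to the deployment process show up as ordinary Git diffs, which is what makes review workflows like pull requests possible.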
If your repository has branch protection set up, see [Setting up in a repository with protected branches](/docs/projects/version-control/converting/#setting-up-in-a-repository-with-protected-branches).

## Configuring an existing project to use Git

:::div{.warning}
Converting a project to use Git is a one-way change. Once you convert a project to Git, you **cannot** convert it back. Please make sure you want to do this, and consider cloning your project to test how it works, so you know what to expect before converting important projects.
:::

Using config-as-code, you can perform a one-way conversion of existing projects to leverage Git. Select the project you would like to convert and click the **Settings ➜ Version Control** link on the project navigation menu. Enter the connection information for your Git repository. You need to provide:

- The URL for your Git repository
- A username and password / personal access token (or anonymous for a public repository)
- The directory you would like Octopus to store the deployment process in
- The name of the default branch

Learn more about [Git credentials in Octopus Deploy](/docs/projects/version-control/config-as-code-reference).

:::div{.hint}
You can have multiple deployment processes in the same repository if they all use a different subdirectory.
:::

Once you press the **Configure** button, a modal window will appear to confirm this change and give you the option to provide a summary and description for the first commit, or cancel the conversion.

:::figure
![configuring version control](/docs/img/projects/version-control/converting/configure-version-control.png)
:::

Your project is now configured with Version Control. You can see this change reflected on the left navigation of the page, where you can change branches. You can also confirm this in your Git repository.
The `.octopus` directory will now be created, and it should contain the following files and folders:

- _deployment_process.ocl_
- _deployment_settings.ocl_
- _variables.ocl_
- _schema_version.ocl_
- _runbooks/_

The _runbooks/_ directory will contain _runbook-name.ocl_ files for any published runbooks.

If your repository has branch protection set up, see [Setting up in a repository with protected branches](/docs/projects/version-control/converting/#setting-up-in-a-repository-with-protected-branches).

## Setting up in a repository with protected branches

If your default branch is protected, you can select that option under Branch Settings. You will need to provide a different branch name for the initial commit. If the branch doesn't exist, it will be created. Once you click the **Configure** button, Octopus will commit the OCL files to the initial commit branch.

:::figure
![initial commit branch and protected default branch](/docs/img/projects/version-control/converting/configure-initial-commit-branch.png)
:::

Next, you will need to merge your changes into the default branch in your Git provider using your usual workflow. You will not be able to use the default branch within the project until you have merged your changes from the initial commit branch to the default branch. However, you can continue to make changes to the initial commit branch until then.

Optionally, you can also nominate protected branches for your Project. This will prevent users from committing directly to the nominated branches from the Octopus UI and encourage them to create a new branch instead. To nominate protected branches, type the name or a wildcard pattern into the Protected Branches Pattern field under Branch Settings. This will apply to all existing and future branches.
*Note that this is independent of your branch protection rules in your Git Provider and does not offer any protection outside of the Octopus UI.* :::figure ![protected branches](/docs/img/projects/version-control/converting/configure-protected-branches.png) ::: ## Migrating to config-as-code runbooks on an existing Git project Projects which converted to Git before the introduction of config-as-code runbooks can be easily updated. You can [migrate an existing version controlled project](/docs/runbooks/config-as-code-runbooks#cac-runbooks-on-an-existing-version-controlled-project) to use config-as-code runbooks by clicking on the 'Store Runbooks in Git' banner at the top of the **Runbooks** page of your project. ## Migrating variables on an existing Git project Since the initial public release of config-as-code, we've added support for additional project configuration in Git. You can now [migrate non-sensitive variables to Git](/docs/projects/version-control/converting/migrating-variables). ## Not everything is saved to version control The Configuration as Code feature is per-project. Currently, the deployment process, runbook processes, settings, and non-sensitive variables are saved to version control. A number of project-level and instance-level settings will not be stored in version control. Learn more about [what is stored in version control](/docs/projects/version-control/config-as-code-reference). ## Using a project with version control enabled In general, modifying a project via the Octopus UI with version control enabled is the same as modifying a project configured to save changes to SQL Server. However, there are some minor differences. Learn more about [Editing a project with version control enabled](/docs/projects/version-control/editing-a-project-with-version-control-enabled). 
# Provision an AWS RDS instance

Source: https://octopus.com/docs/runbooks/runbook-examples/aws/create-rds.md

AWS Relational Database Service (RDS) is a managed database server in the cloud. RDS provides a cost-efficient, relational database and manages common database administration tasks. Using a runbook, Octopus makes it easy to provide an automated method for creating RDS instances. In this example, we'll use the built-in steps of Octopus Deploy to create an AWS PostgreSQL RDS instance.

## Create the runbook

1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**.
1. Give the runbook a name and click **SAVE**.
1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**.
1. Add a **Run an AWS CLI script** step.

:::div{.info}
This example assumes that you already have a Virtual Private Cloud (VPC), subnets, and security groups created. The IDs of these resources will be needed for our RDS instance.
:::

5. Paste in the following example code. It finds the VPC, subnet, and security group ID values and assigns them to output variables to be used later:

```powershell
# Get reference to VPC
$vpcList = $(aws ec2 describe-vpcs --filter Name=tag:Name,Values=#{AWS.CloudFormation.VPC.Name}) | ConvertFrom-Json

# Check to see if anything was returned
if (($null -eq $vpcList))
{
    Write-Error "Failed retrieving vpc list."
}

# Get VPC Id
$vpcId = $vpcList.Vpcs[0].VpcId
Write-Output "Found VPC: $vpcId ..."

# Get Subnets reference
$subnetList = $(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpcId) | ConvertFrom-Json

# Get the subnet ids
$subnet1Id = $subnetList.Subnets[0].SubnetId
$subnet2Id = $subnetList.Subnets[1].SubnetId
Write-Output "Found Subnet1: $subnet1Id and Subnet2: $subnet2Id ..."

# Get reference to security group (multiple filters are space-separated)
$securityGroupList = $(aws ec2 describe-security-groups --filter Name=vpc-id,Values=$vpcId Name=tag:Name,Values=#{AWS.CloudFormation.SecurityGroup.Name}) | ConvertFrom-Json

# Get the security group id
$securityGroupId = $securityGroupList.SecurityGroups[0].GroupId
Write-Output "Found Security Group: $securityGroupId ..."

# Create output variables
Set-OctopusVariable -name "AWS.VPC.Id" -value $vpcId
Set-OctopusVariable -name "AWS.Subnet1.Id" -value $subnet1Id
Set-OctopusVariable -name "AWS.Subnet2.Id" -value $subnet2Id
Set-OctopusVariable -name "AWS.SecurityGroup.Id" -value $securityGroupId
```

6. Add a **Deploy an AWS CloudFormation template** step.
7. Fill in the parameters for the step:

| Parameter | Description | Example |
| ------------- | ------------- | ------------- |
| AWS Account | The AWS account to use | This will be a variable defined in either Project variables or a Variable Set |
| Region | The region your resources will be located in | us-west-1 |
| CloudFormation stack name | Name of the stack you're creating | MySuperStack |
| Role ARN | The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that AWS CloudFormation assumes when executing any operations. This role will be used for any future operations on the stack. | MyARN |
| Select IAM Capability | Capability of IAM | Use dropdown to select capability |
| Disable rollback | Whether or not you want to automatically roll back if the create failed | Checked |

8. Paste in the following template code:

:::div{.info}
Note the use of Octostache variables; you will need to make sure you create these for this example to work. You will also see use of the output variables created in the previous step.
:::

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  DatabaseSubnetGroup:
    Type: 'AWS::RDS::DBSubnetGroup'
    Properties:
      DBSubnetGroupDescription: 'Subnet group for database instance'
      SubnetIds:
        - #{Octopus.Action[ResourceIds].Output.AWS.Subnet1.Id}
        - #{Octopus.Action[ResourceIds].Output.AWS.Subnet2.Id}
  Database:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBInstanceIdentifier: #{AWS.CloudFormation.RDS.Identifier}
      AllocatedStorage: #{AWS.CloudFormation.Database.AllocatedStorage}
      DBInstanceClass: #{AWS.CloudFormation.Database.Instance.Class}
      Engine: #{AWS.CloudFormation.Database.Engine}
      EngineVersion: #{AWS.CloudFormation.Database.Engine.Version}
      MasterUsername: #{AWS.CloudFormation.Database.Admin.User.Name}
      MasterUserPassword: #{AWS.CloudFormation.Database.Admin.User.Password}
      DBSubnetGroupName: !Ref DatabaseSubnetGroup
      PubliclyAccessible: true
      VPCSecurityGroups:
        - #{Octopus.Action[ResourceIds].Output.AWS.SecurityGroup.Id}
      Port: #{AWS.CloudFormation.PostgreSQL.Port}
      BackupRetentionPeriod: 0
Outputs:
  Endpoint:
    Description: Generated endpoint address for database connection
    Value: !GetAtt Database.Endpoint.Address
```

In just a few steps, we've automated the creation of a PostgreSQL RDS instance.

## Samples

We have a [Target - PostgreSQL](https://oc.to/TargetPostgreSQLSampleSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `Space Infrastructure` project.

# Deploy an Azure Resource Manager template

Source: https://octopus.com/docs/runbooks/runbook-examples/azure/resource-groups.md

From [Authoring Azure Resource Manager Templates](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/overview):

> Azure applications typically require a combination of resources (such as a database server, database, or website) to meet the desired goals.
Rather than deploying and managing each resource separately, you can create an Azure Resource Manager template that deploys and provisions all resources for your application in a single, coordinated operation. Octopus Deploy supports deploying Azure Resource Manager (ARM) templates via the *Deploy an Azure Resource Manager template* step type. For information about adding a step to the deployment process, see the [add step](/docs/projects/steps) section. The instructions there apply equally to a runbook process too. ## Create Azure resources runbook To create a runbook to deploy resources to Azure using the *Deploy an Azure Resource Manager template* step: 1. Navigate to your Project, then **Operations ➜ Runbooks ➜ Add Runbook**. 1. Give the runbook a name and click **SAVE**. :::div{.hint} Before creating the step, you must have created an [Azure Service Principal Account](/docs/infrastructure/accounts/azure/#azure-service-principal). ::: 1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 1. Add the step by clicking **Azure ➜ Deploy an Azure Resource Manager template**, or search for the step. ![Locate ARM step](/docs/img/runbooks/runbook-examples/azure/resource-groups/locate-arm-step.png) 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. In the **Azure** section, choose the [Account](/docs/infrastructure/accounts/azure) to use. ![Azure Account variable](/docs/img/runbooks/runbook-examples/azure/resource-groups/azure-account.png) :::div{.hint} [Azure accounts](/docs/infrastructure/accounts/azure/) can be referenced in a project through a project [variable](/docs/projects/variables) of the type **Azure account**. The step will allow you to bind the account to an **Azure account** variable, using the [binding syntax](/docs/projects/variables/#use-variables-in-step-definitions). 
By using a variable for the account, you can have different accounts used across different environments or regions using [scoping](/docs/projects/variables/#use-variables-in-step-definitions). ::: 1. Select the **Resource Group** to place the created resources in. This can be selected from the drop-down of available resources or bound to a variable. The resource group must exist when the step is executed. 1. Set the **Deployment Mode**. It can be either [Incremental or Complete](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-resource-group?tabs=azure-cli). 1. Choose the **Template Source**. It can be either [JSON entered directly](#json-template) into the step, or a file [contained in a package](#packaged-template). 1. Enter any values for parameters if they are present. Configure any other settings for the step such as Environment run conditions and click **SAVE**. :::figure ![Azure ARM step](/docs/img/runbooks/runbook-examples/azure/resource-groups/azure-arm-process-step.png) ::: ### Template entered as JSON {#json-template} By selecting *Source Code* as the *Template Source*, you can enter your template directly as JSON. The JSON will be parsed, and your parameters will appear dynamically as fields in the *Parameters* section. The parameter fields will show text boxes or select-lists as appropriate. You can enter values directly, or bind the parameters to Octopus Variables (e.g. see the *siteName* parameter in the image above). :::div{.success} Octopus will perform [variable-substitution](/docs/projects/variables/variable-substitutions) on the JSON template. Although you can use variables directly in the template, it is more idiomatic to use parameters, and plug the variables into those (as seen above). This will allow you to use or test your template outside of Octopus Deploy. 
:::

:::figure
![](/docs/img/runbooks/runbook-examples/azure/resource-groups/arm-json-template.png)
:::

### Sensitive data

:::div{.warning}
Parameters marked as [secure strings](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/overview) represent sensitive data, and it is important to make sure they aren't stored in plain text form.
:::

The field displayed when the "From Octopus" option is selected stores data as plain text, so sensitive data shouldn't be typed directly into it. Instead, the value of the parameter should be provided either via a [Sensitive Variable](/docs/projects/variables/sensitive-variables/) if the value is stored in Octopus, or via [Azure Key Vault](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/key-vault-parameter) if the value is stored outside of Octopus. Azure Resource Group Templates provide [out of the box integration with Azure Key Vault](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/key-vault-parameter?tabs=azure-cli).

:::figure
![](/docs/img/runbooks/runbook-examples/azure/resource-groups/arm-sensitive-data.png)
:::

### Template contained in a package {#packaged-template}

By selecting *Package* as the *Template Source*, you can select a package which will contain your template and parameter JSON files.

:::figure
![](/docs/img/runbooks/runbook-examples/azure/resource-groups/arm-package-source-template.png)
:::

The Template Path and Parameters Path fields should contain the relative path to these files within the package.

:::div{.success}
Octopus will perform [variable-substitution](/docs/projects/variables/variable-substitutions) on both the Template and Parameter files.
:::

#### Parameter file format

The Parameter JSON file can be in one of two formats:

- With Schema
- Without Schema

**Example with Schema**

```json
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "collation": { "value": "SQL_Latin1_General_CP1_CI_AS" },
    "administratorLoginPassword": { "value": "#{PasswordStoredAsSensitiveVariableInOctopus}" },
    "administratorLogin": { "value": "#{Login}" },
    "databaseName": { "value": "#{DatabaseName}" },
    "anotherSecretStoredInAzureKeyVault": {
      "reference": {
        "keyVault": { "id": "#{KeyVaultResourceId}" },
        "secretName": "SecretName"
      }
    }
  }
}
```

**Example without Schema**

```json
{
  "collation": { "value": "SQL_Latin1_General_CP1_CI_AS" },
  "administratorLoginPassword": { "value": "#{PasswordStoredAsSensitiveVariable}" },
  "administratorLogin": { "value": "admin" },
  "databaseName": { "value": "#{DatabaseName}" },
  "anotherSecretStoredInAzureKeyVault": {
    "reference": {
      "keyVault": { "id": "#{KeyVaultResourceId}" },
      "secretName": "SecretName"
    }
  }
}
```

### Accessing ARM template output parameters {#arm-template-out-params}

Any outputs from the ARM template step are made available as [Octopus output-variables](/docs/projects/variables/output-variables) automatically. For example, an output `Foo` would be available as:

```powershell
Octopus.Action[Arm Template Step Name].Output.AzureRmOutputs[Foo]
```

Note, you need to replace **Arm Template Step Name** with the name of your ARM template step.

### Using linked templates

Azure Resource Manager supports the concept of [linking templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-linked-templates). In this model you create a main template which links to other templates and parameters files via URI. This can be a really useful way to break your ARM templates into manageable components.
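For orientation, a linked template is referenced from the main template as a `Microsoft.Resources/deployments` resource. The fragment below is a hedged sketch: the resource name, `apiVersion`, and URI are illustrative only (the URI reuses the example address from this guide).

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "linkedStorageAccount",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "http://www.contoso.com/AzureTemplates/newStorageAccount.json",
      "contentVersion": "1.0.0.0"
    }
  }
}
```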
In this case you would configure Octopus to deploy your main template, and the Azure Resource Manager will download any linked templates and parameters files as required to complete the deployment. :::div{.hint} **Linked templates must be publicly accessible via URI** Please be aware that the URI you configure for the linked templates and parameters files must be publicly accessible by the Azure Resource Manager. For example: [http://www.contoso.com/AzureTemplates/newStorageAccount.json](http://www.contoso.com/AzureTemplates/newStorageAccount.json) Learn more about [linked templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-linked-templates) and refer to [this discussion](https://help.octopus.com/t/azure-resource-management-templates/9654) for more details. ::: ## Learn more - Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites). # Backup SQL database Source: https://octopus.com/docs/runbooks/runbook-examples/databases/backup-mssql-database.md Backing up databases to protect application data should be a common practice in most organizations. Using a Runbook in Octopus can make this process easy and simple allowing you to run backups ad-hoc or on a [scheduled trigger](/docs/runbooks/scheduled-runbook-trigger). ## Permissions In this example, you will be backing up a Microsoft SQL Server database using a step template from our [community library](/docs/projects/community-step-templates) called [SQL - Backup Database](https://library.octopus.com/step-templates/34b4fa10-329f-4c50-ab7c-d6b047264b83/actiontemplate-sql-backup-database). This template supports both: - SQL Authentication. - Integrated Authentication. In this example, we'll use SQL Authentication and provide both a SQL username and password. It's important to check that you have the correct permissions to perform the backup. 
You can find more information on this [here](/docs/deployments/databases/sql-server/permissions). ## Create the runbook 1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 2. Give the Runbook a name and click **SAVE**. 3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 4. Add a new step template from the community library called **SQL - Backup Database**. 5. Fill out all the parameters in the step. It's best practice to use [variables](/docs/projects/variables) rather than entering the values directly in the step parameters. | Parameter | Description | Example | | ------------- | ------------- | ------------- | | Server | Database connection string | `dbserver01` | | Database | Name of database to backup | `mydatabase` | | Backup Directory | Path to backup data file to | C:\backups\ | | SQL Login | SQL Username | admin | | SQL Password | SQL Password | Pa$$word | | Compression Option | Disable or enable compression | Enabled | | Devices | Number of backup devices to use for backup | 1 | | Backup File Suffix | Suffix added to backup file name |prod | | Connection Timeout | How long the backup should run | 3600 | | Backup Action | Full or incremental backup| FULL | | Copy Only | Just do a copy only backup | True | :::div{.warning} Use variables where possible so you can assign scopes to values. This will ensure credentials and database connections are correct for the environment you're deploying to. ::: After adding all required parameters, click **Save**, and you have a basic runbook to backup your SQL database! You can also add additional steps to add security to your runbooks, such as a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) step for business approvals. ## Samples We have a [Target - Windows](https://oc.to/TargetWindowsSamplesSpace) Space on our Samples instance of Octopus. 
You can sign in as `Guest` to take a look at this example and more runbooks in the `OctoFX` project.

## Learn More

- [SQL Backup - Community Step template](https://library.octopus.com/step-templates/34b4fa10-329f-4c50-ab7c-d6b047264b83/actiontemplate-sql-backup-database)

# Manually failover DNS

Source: https://octopus.com/docs/runbooks/runbook-examples/emergency/manually-failover-dns.md

Power outages, natural disasters, or fiber lines being cut in construction projects are just a few things that can cause outages. One of the most common Disaster Recovery (DR) methods is to have a secondary site where you can update the Domain Name System (DNS) record and be back online.

:::div{.info}
Updating the IP address of a DNS entry is quick and easy, but you are at the mercy of those changes being propagated throughout the Internet.
:::

## Infrastructure as a Service (IaaS) DNS failover

Popular IaaS providers such as Azure, AWS, or GCP provide a CLI that makes it easy to update your DNS record to point to another site with just a couple of commands. The following example uses the Azure CLI to update the DNS record for www.octopussamples.com.

## Create the runbook

1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**.
2. Give the runbook a name and click **SAVE**.
3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**.
4. Add a new **Run an Azure Script** step.
5. Choose **Inline source code (with optional package references)**.
6. Enter the following PowerShell code. We recommend using [variables](/docs/projects/variables) instead of hard-coding entries.
```powershell
$resourceGroup = $OctopusParameters["OctoFX.Azure.Resource.Group"]
$zoneName = $OctopusParameters["OctoFX.DNS.Name"]
$ipAddressDR = $OctopusParameters["OctoFX.DR.IP.Address"]
$ipAddressProd = $OctopusParameters["OctoFX.Production.IP.Address"]

az network dns record-set a add-record --resource-group $resourceGroup --zone-name $zoneName --record-set-name www --ipv4-address $ipAddressProd
az network dns record-set a remove-record --resource-group $resourceGroup --zone-name $zoneName --record-set-name www --ipv4-address $ipAddressDR
```

This script within a runbook lets you switch your DNS entry with the click of a button, so no matter when disaster strikes, it is easy to recover.

## Samples

We have a [Target - Windows](https://oc.to/TargetWindowsSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `OctoFX` project.

# Create Network Load Balancer

Source: https://octopus.com/docs/runbooks/runbook-examples/gcp/create-nlb.md

Google Cloud (GCP) has a [Network Load Balancing solution](https://cloud.google.com/load-balancing/docs/network/) that allows you to distribute traffic among virtual machine instances in the same region in a Virtual Private Cloud (VPC) network. A network load balancer can direct TCP or UDP traffic across regional backends. The other benefit of a network load balancer in GCP is that it supports any and all ports.

In this example, we'll walk through how to create a runbook with a number of [PowerShell Script steps](/docs/deployments/custom-scripts/run-a-script-step) to create a network load balancer in GCP for both a test and production environment, using ports to differentiate traffic:

- Port `8080` is used for traffic destined for the test environment.
- Port `80` is used for traffic destined for the production environment.
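To make the port split concrete, the load balancer ultimately fronts each environment with a regional forwarding rule on its port. The following is a hedged sketch using the gcloud CLI: the rule names, target pool names, and the `$region`/`$ipAddress` values are assumptions for illustration, and the target pools must already exist.

```powershell
# Hedged sketch: route each environment's port to its own target pool.
# The my-nlb-* names and the target pools are assumed, not created above.
gcloud compute forwarding-rules create my-nlb-test-rule `
    --region=$region --ports=8080 --address=$ipAddress --target-pool=my-test-pool

gcloud compute forwarding-rules create my-nlb-prod-rule `
    --region=$region --ports=80 --address=$ipAddress --target-pool=my-prod-pool
```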
## Runbook pre-requisites {#runbook-prerequisites}

In order to execute this runbook successfully, there are a couple of pre-requisites:

- [Google Cloud CLI](#gcloud-cli)
- [Google Cloud authorization](#gcloud-authorization)

### Google Cloud CLI {#gcloud-cli}

In order to access Google Cloud, you usually have to use tools such as the [Google Cloud CLI](https://cloud.google.com/sdk/gcloud), which this runbook uses. This example assumes you have either the gcloud CLI installed on the machine where you run the runbook, or that you are using [execution containers for workers](/docs/projects/steps/execution-containers-for-workers) with an image that includes the gcloud CLI.

### Google Cloud authorization {#gcloud-authorization}

The gcloud CLI needs to be authorized to access and manage resources in Google Cloud. This example assumes that you already have a Google Cloud [service account](https://cloud.google.com/docs/authentication#service_accounts) that can be used, as the commands used here make use of the gcloud CLI, which must be authorized before it can be used. For further information on gcloud authorization, please refer to the [gcloud documentation](https://cloud.google.com/sdk/docs/authorizing). The next sections explain how to configure a service account to be authorized to use the gcloud CLI.

#### Create project variables {#gcp-project-variables}

We'll use project [variables](/docs/projects/variables/) to authorize the gcloud CLI with Google Cloud, with the help of a PowerShell function included in a [Script module](/docs/deployments/custom-scripts/script-modules). Create two [sensitive variables](/docs/projects/variables/sensitive-variables): one for the service account email, and the other for the service account key.
This is a JSON payload you obtain when creating the service account in Google Cloud:

![Google Cloud Project variables](/docs/img/runbooks/runbook-examples/gcp/images/gcp-auth-project-variables.png)

#### Create authorization function in script module

The instructions at [Creating a script module](/docs/deployments/custom-scripts/script-modules/#create-script-module) detail the procedure for creating a script module in Octopus. In the **Body** of the script module, include the following PowerShell code:

:::div{.hint}
Note the use of the `Project.GCP.ProjectName` variable which also needs to be created in your project. The value defines the scope of the project in Google Cloud you are authorizing the service account for.
:::

```powershell
function Set-GCPAuth() {
  $JsonKey = $OctopusParameters["GCP.ServiceAccount.Key"]
  $JsonFile = [System.IO.Path]::GetTempFileName()

  if (Test-Path $JsonFile)
  {
    Remove-Item $JsonFile -Force
  }

  New-Item $JsonFile -Type "File" -Force

  $JsonKey | Set-Content $JsonFile

  $gcpServiceAccountEmail = $OctopusParameters["GCP.ServiceAccount.Email"]
  $gcpProjectName = $OctopusParameters["Project.GCP.ProjectName"]

  Write-Host "Activating service account $gcpServiceAccountEmail"
  Write-Host "##octopus[stderr-progress]"
  gcloud auth activate-service-account $gcpServiceAccountEmail --key-file=$JsonFile --project=$gcpProjectName --quiet
  Test-LastExit "gcloud auth activate-service-account"

  if (Test-Path $JsonFile)
  {
    Write-Host "Clearing up temp auth file"
    Remove-Item $JsonFile -Force
  }
}
```

This script defines a function named `Set-GCPAuth` which uses the `auth activate-service-account` command that is used in the runbook steps to authorize with Google Cloud.
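The script above (and the runbook steps that follow) also call `Test-LastExit`, which is not a built-in cmdlet or part of the gcloud CLI. If your script module doesn't already define it, a minimal sketch is shown below; the name and behavior are inferred from how it's used here, so treat it as an assumption.

```powershell
# Hedged sketch: fail the step if the previous native command exited non-zero.
function Test-LastExit([string]$commandName)
{
    if ($LASTEXITCODE -ne 0)
    {
        Write-Error "$commandName failed with exit code $LASTEXITCODE"
    }
}
```

Placing this in the same script module keeps the runbook steps self-contained, since every step that shells out to gcloud calls it immediately after the command.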
Add the script module into your runbook process following [these instructions](/docs/deployments/custom-scripts/script-modules/#use-script-module-for-deployment): ![Google Cloud Project variables](/docs/img/runbooks/runbook-examples/gcp/images/gcp-runbook-include-script-module.png) ## Create the runbook {#create-runbook} 1. To create the runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 1. Give the runbook a name and click **SAVE**. Next, we'll add the steps to create the network load balancer. ### Create IP address for load balancer step {#create-ip-address-step} To add the step for creating the IP address for the load balancer: 1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 1. Click **Script**, and then select the **Run a Script** step. 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. In the **Inline source code** section, add the following code as a **PowerShell** script: ```powershell # Activate service account Set-GCPAuth $projectName = $OctopusParameters["Project.GCP.ProjectName"] $region = $OctopusParameters["GCP.Region"] $loadBalancerIPName = $OctopusParameters["Project.GCP.LoadBalancer.ExternalIP.Name"] $networkTier = $OctopusParameters["Project.GCP.LoadBalancer.NetworkTier"] Write-Host "Getting compute address matching name: $loadBalancerIPName" Write-Host "##octopus[stderr-progress]" $ipAddress=(& gcloud compute addresses list --project=$projectName --filter="name=($loadBalancerIPName)" --format="get(address)" --quiet) Test-LastExit "gcloud compute addresses list" if( -not ([string]::IsNullOrEmpty($ipAddress))) { Write-Highlight "Found $loadBalancerIPName of: $ipAddress" } else { Write-Highlight "Found no compute addresses matching: $loadBalancerIPName" $ipAddress=(& gcloud compute addresses create $loadBalancerIPName --project=$projectName --network-tier=$networkTier --region=$region --format="get(address)" --quiet) Test-LastExit "gcloud compute addresses create" if( 
-not ([string]::IsNullOrEmpty($ipAddress))) { Write-Highlight "Created new ip address of: $ipAddress for $loadBalancerIPName" } else { Write-Error "IP address could not be determined from attempted create!" } } ``` This script will check to see if an IP address matching the name specified in the variable `Project.GCP.LoadBalancer.ExternalIP.Name` already exists. If it does, the step completes, as an IP address is already present. If it doesn't exist, it will create a static IP address using the `compute addresses create` command. There are a number of variables used in the script: | Variable name | Description | Example | | ------------- | ------------- | ------------- | | Project.GCP.ProjectName | Project in Google Cloud. | my-project | | GCP.Region | The region to create the IP address in. | europe-west1 | | Project.GCP.LoadBalancer.ExternalIP.Name | The name of the IP address. | my-project-nlb-ip | | Project.GCP.LoadBalancer.NetworkTier | The network tier to assign to the reserved IP address. | PREMIUM or STANDARD | ### Create load balancer health-check step {#create-health-check-step} In order to know if your machines behind the network load balancer are healthy, you need to include [health checks](https://cloud.google.com/load-balancing/docs/health-check-concepts). Add the step to create the necessary health checks for the load balancer: 1. Navigate to **Project ➜ Operations ➜ Runbooks**, and choose the runbook. 1. Click **ADD STEP**. 1. Click **Script**, and then select the **Run a Script** step. 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1.
In the **Inline source code** section, add the following code as a **PowerShell** script: ```powershell # Activate service account Set-GCPAuth $projectName = $OctopusParameters["Project.GCP.ProjectName"] $testHealthCheckName = $OctopusParameters["Project.GCP.LoadBalancer.Test.HealthCheckName"] $productionHealthCheckName = $OctopusParameters["Project.GCP.LoadBalancer.Prod.HealthCheckName"] function CreateHealthCheckIfNotExists([string]$healthCheckName, [string] $healthCheckPort) { Write-Host "Getting compute http-health check matching name: $healthCheckName" Write-Host "##octopus[stderr-progress]" $listedPort=(& gcloud compute http-health-checks list --project=$projectName --filter="name=($healthCheckName)" --format="get(port)" --quiet) Test-LastExit "gcloud compute http-health-checks list" if( -not ([string]::IsNullOrEmpty($listedPort))) { Write-Highlight "Found existing http-health check named: $healthCheckName probing port: $listedPort" if($listedPort -ne $healthCheckPort) { Write-Warning "Existing http-health check port: $listedPort doesn't match expected port: $healthCheckPort" } } else { Write-Highlight "Found no http-health check named: $healthCheckName" $listedPort=(& gcloud compute http-health-checks create $healthCheckName --port=$healthCheckPort --project=$projectName --format="get(port)" --quiet) Test-LastExit "gcloud compute http-health-checks create" if([string]::IsNullOrEmpty($listedPort)) { Write-Error "Port for new http-health check couldn't be determined from attempted create!" } } } CreateHealthCheckIfNotExists $testHealthCheckName "8080" CreateHealthCheckIfNotExists $productionHealthCheckName "80" ``` This script will check to see if the health checks exist for both test and production. If they do, it will skip creating that environment's health check. If they don't exist, it will create a new HTTP health check using the `compute http-health-checks create` command. 
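The check-then-create pattern in this step (list first, create only when absent, and warn when an existing resource doesn't match expectations) is what makes the runbook safe to re-run. A minimal illustrative Python sketch, with a plain dict standing in for the `gcloud compute http-health-checks list`/`create` calls:

```python
def ensure_health_check(existing, name, port):
    """Create a named health check only if it is absent; warn when an
    existing check's port differs from the expected one. `existing` is a
    hypothetical stand-in for what the gcloud list command returns."""
    if name in existing:
        if existing[name] != port:
            print(f"warning: existing port {existing[name]} != expected {port}")
        return False  # nothing created; safe to re-run
    existing[name] = port  # stands in for the create call
    return True

checks = {}
print(ensure_health_check(checks, "lb-health-http-8080", 8080))  # True (created)
print(ensure_health_check(checks, "lb-health-http-8080", 8080))  # False (already present)
```

Running the function twice with the same arguments changes nothing on the second pass, which is the property that lets you re-run the whole runbook without side effects.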
There are a number of variables used in the script: | Variable name | Description | Example | | ------------- | ------------- | ------------- | | Project.GCP.ProjectName | Project in Google Cloud. | my-project | | Project.GCP.LoadBalancer.Test.HealthCheckName | The name of the test environment health check. | my-project-lb-health-http-8080 | | Project.GCP.LoadBalancer.Prod.HealthCheckName | The name of the prod environment health check. | my-project-lb-health-http-80 | ### Create load balancer target pools step {#create-target-pools-step} As we are creating a single load balancer that routes traffic for both the test and production environments, we want to avoid re-using the same virtual machines. We use dedicated target pools for the test and production environments to do this. A [target pool](https://cloud.google.com/load-balancing/docs/target-pools) is the name given to a group of virtual machine instances hosted in Google Cloud. Add the step to create the necessary target pools for the load balancer: 1. Navigate to **Project ➜ Operations ➜ Runbooks**, and choose the runbook. 1. Click **ADD STEP**. 1. Click **Script**, and then select the **Run a Script** step. 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1.
In the **Inline source code** section, add the following code as a **PowerShell** script: ```powershell # Activate service account Set-GCPAuth $projectName = $OctopusParameters["Project.GCP.ProjectName"] $region = $OctopusParameters["GCP.Region"] $testHealthCheckName = $OctopusParameters["Project.GCP.LoadBalancer.Test.HealthCheckName"] $productionHealthCheckName = $OctopusParameters["Project.GCP.LoadBalancer.Prod.HealthCheckName"] $testTargetPoolName = $OctopusParameters["Project.GCP.LoadBalancer.Test.TargetPoolName"] $productionTargetPoolName = $OctopusParameters["Project.GCP.LoadBalancer.Prod.TargetPoolName"] function CreateLoadBalancerTargetPoolIfNotExists([string]$targetPoolName, [string] $healthCheckName) { Write-Host "Getting compute target-pools matching name: $targetPoolName" Write-Host "##octopus[stderr-progress]" $listedPoolName=(& gcloud compute target-pools list --project=$projectName --filter="region:($region) AND name=($targetPoolName)" --format="get(name)" --quiet) Test-LastExit "gcloud compute target-pools list" if( -not ([string]::IsNullOrEmpty($listedPoolName))) { Write-Highlight "Found existing target pool named: $listedPoolName" } else { Write-Highlight "Creating new target pool named: $targetPoolName as no existing match." $listedPoolName=(& gcloud compute target-pools create $targetPoolName --region=$region --http-health-check=$healthCheckName --project=$projectName --format="get(name)" --quiet) Test-LastExit "gcloud compute target-pools create" if([string]::IsNullOrEmpty($listedPoolName)) { Write-Error "Name for new target pool couldn't be determined from attempted create!" } } } CreateLoadBalancerTargetPoolIfNotExists $testTargetPoolName $testHealthCheckName CreateLoadBalancerTargetPoolIfNotExists $productionTargetPoolName $productionHealthCheckName ``` This script will check to see if the target pools exist for both Test and Production. If they do, it will skip creating that environment's pool. 
If they don't exist, it will create a new target pool using the `compute target-pools create` command. There are a number of variables used in the script: | Variable name | Description | Example | | ------------- | ------------- | ------------- | | Project.GCP.ProjectName | Project in Google Cloud. | my-project | | GCP.Region | The region to create the target pools in. | europe-west1 | | Project.GCP.LoadBalancer.Test.HealthCheckName | The name of the test environment health check. | my-project-lb-health-http-8080 | | Project.GCP.LoadBalancer.Prod.HealthCheckName | The name of the prod environment health check. | my-project-lb-health-http-80 | | Project.GCP.LoadBalancer.Test.TargetPoolName | The name of the test environment target pool. | my-project-test-pool | | Project.GCP.LoadBalancer.Prod.TargetPoolName | The name of the prod environment target pool. | my-project-prod-pool | ### Create load balancer forwarding rules step {#create-forwarding-rules-step} In order to direct traffic that hits the load balancer to the correct backend target pool, we need to specify a [forwarding rule](https://cloud.google.com/load-balancing/docs/using-forwarding-rules) for each port. Add the step to create the necessary forwarding rules for the load balancer: 1. Navigate to **Project ➜ Operations ➜ Runbooks**, and choose the runbook. 1. Click **ADD STEP**. 1. Click **Script**, and then select the **Run a Script** step. 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. 
In the **Inline source code** section, add the following code as a **PowerShell** script: ```powershell # Activate service account Set-GCPAuth $projectName = $OctopusParameters["Project.GCP.ProjectName"] $region = $OctopusParameters["GCP.Region"] $loadBalancerIPName = $OctopusParameters["Project.GCP.LoadBalancer.ExternalIP.Name"] $testTargetPoolName = $OctopusParameters["Project.GCP.LoadBalancer.Test.TargetPoolName"] $productionTargetPoolName = $OctopusParameters["Project.GCP.LoadBalancer.Prod.TargetPoolName"] $testForwardingRuleName = $OctopusParameters["Project.GCP.LoadBalancer.Test.ForwardingRule"] $productionForwardingRuleName = $OctopusParameters["Project.GCP.LoadBalancer.Prod.ForwardingRule"] $networkTier = $OctopusParameters["Project.GCP.LoadBalancer.NetworkTier"] function CreateForwardingRulesForTargetPoolIfNotExists([string] $forwardingRuleName, [string] $targetPoolName, [string] $port) { Write-Host "Getting compute forwarding-rules matching name: $forwardingRuleName" Write-Host "##octopus[stderr-progress]" $listedPortRange=(& gcloud compute forwarding-rules list --project=$projectName --filter="region:($region) AND name=($forwardingRuleName)" --format="get(portRange)" --quiet) Test-LastExit "gcloud compute forwarding-rules list" if( -not ([string]::IsNullOrEmpty($listedPortRange))) { Write-Highlight "Found existing forwarding-rule named: $forwardingRuleName with portRange: $listedPortRange" } else { Write-Highlight "Creating new forwarding-rule named: $forwardingRuleName for port: $port as no existing match" $listedPortRange=(& gcloud compute forwarding-rules create $forwardingRuleName --region=$region --ports=$port --address=$loadBalancerIPName --target-pool=$targetPoolName --project=$projectName --network-tier=$networkTier --format="get(portRange)" --quiet) Test-LastExit "gcloud compute forwarding-rules create" if([string]::IsNullOrEmpty($listedPortRange)) { Write-Error "Port Range for new forwarding-rule couldn't be determined from create!" 
} } } CreateForwardingRulesForTargetPoolIfNotExists $testForwardingRuleName $testTargetPoolName "8080" CreateForwardingRulesForTargetPoolIfNotExists $productionForwardingRuleName $productionTargetPoolName "80" ``` This script will check to see if the forwarding rules exist for both test and production. If they do, it will skip creating that environment's rule. If they don't exist, it will create a new forwarding rule using the `compute forwarding-rules create` command. There are a number of variables used in the script: | Variable name | Description | Example | | ------------- | ------------- | ------------- | | Project.GCP.ProjectName | Project in Google Cloud. | my-project | | GCP.Region | The region to create the rules in. | europe-west1 | | Project.GCP.LoadBalancer.ExternalIP.Name | The name of the IP address. | my-project-nlb-ip | | Project.GCP.LoadBalancer.Test.TargetPoolName | The name of the test environment target pool. | my-project-test-pool | | Project.GCP.LoadBalancer.Prod.TargetPoolName | The name of the prod environment target pool. | my-project-prod-pool | | Project.GCP.LoadBalancer.Test.ForwardingRule | The name of the test environment forwarding rule. | my-project-test-rule | | Project.GCP.LoadBalancer.Prod.ForwardingRule | The name of the prod environment forwarding rule. | my-project-prod-rule | | Project.GCP.LoadBalancer.NetworkTier | The network tier to assign to the forwarding rule. | PREMIUM or STANDARD | ### Add machines to target pool step {#create-machines-add-step} Finally, in order to have a functioning load balancer, we need virtual machines to add to the target pools. :::div{.hint} This step assumes you have already created one or more Compute Engine instances in Google Cloud to add to the target pool, which follow a naming convention of `machine_name-number`. This allows multiple machines to be added to the target pool in a single step. ::: Add the step to add machines to a target pool for the load balancer: 1.
Navigate to **Project ➜ Operations ➜ Runbooks**, and choose the runbook. 1. Click **ADD STEP**. 1. Click **Script**, and then select the **Run a Script** step. 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. In the **Inline source code** section, add the following code as a **PowerShell** script: ```powershell # Activate service account Set-GCPAuth $projectName = $OctopusParameters["Project.GCP.ProjectName"] $zone = $OctopusParameters["GCP.Zone"] $targetPoolName = $OctopusParameters["Project.GCP.Targets.LoadBalancer.Pool"] $targetMachineName = $OctopusParameters["Project.GCP.Targets.VM.Name"] $instanceNumberRequired = [int]$OctopusParameters["Project.GCP.Targets.NumberRequired"] $instances=@() For($i=1; $i -le $instanceNumberRequired; $i++) { $instances += "$targetMachineName-$i" } $instanceCount = $instances.Length $instances = $instances -Join "," Write-Highlight "Adding $instanceCount instances to target-pool: $targetPoolName" Write-Host "Adding instances: $instances to target-pool: $targetPoolName" Write-Host "##octopus[stderr-progress]" $response=(& gcloud compute target-pools add-instances $targetPoolName --instances=$instances --instances-zone=$zone --project=$projectName --quiet) Test-LastExit "gcloud compute target-pools add-instances" Write-Host "Completed adding instances: $instances to target-pool: $targetPoolName" ``` This script will generate a list of machine names, and then add them to a target pool using the `compute target-pools add-instances` command. There are a number of variables used in the script: | Variable name | Description | Example | | ------------- | ------------- | ------------- | | Project.GCP.ProjectName | Project in Google Cloud. | my-project | | GCP.Zone | The zone where the machines are located. | europe-west1-b | | Project.GCP.Targets.LoadBalancer.Pool | The name of the target pool to add machines to. | my-project-test-pool | | Project.GCP.Targets.VM.Name | The base name of the machine in GCP.
Used with Project.GCP.Targets.NumberRequired to add multiple machines. | my-project-vm-name | | Project.GCP.Targets.NumberRequired | The number of machines to add to the pool. | 2 | And that's it! In a few steps, you have a network load balancer set up in Google Cloud routing traffic to both test and production machines. ## Samples We have a [Pattern - Rolling](https://oc.to/PatternRollingSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at these runbook steps in the `PetClinic Infrastructure` project: - The runbook named `Configure GCP NLB Target Pools` includes all steps to create the network load balancer. - The step to add machines to a target pool is included in the runbook named `Spin up GCP PetClinic Project Infrastructure`. # Routine operations Source: https://octopus.com/docs/runbooks/runbook-examples/routine.md Octopus Deploy allows you to create and run runbooks for routine operations tasks. These tend to be procedures or routines that don't happen very frequently. Typical routine operations could be: - Installing Web Application software e.g. [IIS](https://docs.microsoft.com/en-us/iis/get-started/introduction-to-iis/iis-web-server-overview), [Apache Tomcat](http://tomcat.apache.org/). - Stopping, Starting or Restarting a Website. - Installing Application Frameworks e.g. [.NET](https://dotnet.microsoft.com/), [Java](https://www.java.com/). - Renewing SSL certificates. # Runbooks vs Deployments Source: https://octopus.com/docs/runbooks/runbooks-vs-deployments.md For users familiar with Octopus prior to the introduction of runbooks, an obvious question may be _how are runbooks different to a deployment process?_ They are similar in many ways: a runbook process is a series of steps, which can reference packages and variables. The key differences are: - No release needs to be created to execute a runbook. - Lifecycles do not apply to runbooks.
- Runbook executions are not displayed on the deployment dashboards. - Many runbooks can live in the same project, along with a deployment process. - Runbooks have different roles and permissions to deployments. ## Variables A [project's variables](/docs/projects/variables) are shared between the deployment process and any runbooks in the project (though specific values can be scoped exclusively to specific runbooks or to the deployment process). This means the following configurations can be shared between your deployment process and runbooks: - Database connection strings - Passwords - Certificates - Accounts ### Current limitations **Scoping to Steps/Actions** - You cannot currently scope project variables to a deployment process step and a runbook process step, but we do aim to support this in the near future. ## Environments In Octopus 2020.2 and earlier, runbooks could be executed against any environment for which the user had an appropriately scoped `RunbookRunCreate` permission. From **Octopus 2020.3**, it's also possible to choose which environments a runbook can be run in by selecting this from the *Run settings* in **Runbook ➜ Settings**: :::figure ![Runbook environments choice](/docs/img/runbooks/runbooks-vs-deployments/runbook-runsettings-environments.png) ::: You can select which environments the runbook can run in: - All environments (the default). - Only specific environments. - Environments from the [Project Lifecycle](/docs/releases/lifecycles). :::div{.hint} In Octopus 2020.2 and earlier, if you need to restrict the environment that a runbook can be executed in, you can achieve this by adding an [Environment run condition](/docs/projects/steps/conditions/#environments) in each step of the runbook process. ::: ## Retention policy Project [Lifecycles](/docs/releases/lifecycles) and their retention policies do not apply to runbooks (only deployments).
From **Octopus 2020.3**, it's possible to set a retention policy for a runbook by selecting this from the *Run settings* in **Runbook ➜ Settings**: :::figure ![Runbook retention policies](/docs/img/runbooks/runbooks-vs-deployments/runbook-runsettings-retention.png) ::: You can choose to: - Keep **all** of the runbook runs. - Keep a limited number of runbook runs (the default). The retention policy is applied **per environment**. For example, if you had three environments (Development, Staging, and Production) and you set the retention policy limit to 10, that would keep a total of **30** runbook runs - 10 in *each* of Development, Staging, and Production. If you are using **config-as-code runbooks**, keep in mind that deleting a branch also deletes any retention policies defined on that branch. The retention policy for any runbook runs made from that branch will then use the default time-based retention policy (60 days). :::div{.hint} In Octopus 2020.2 and earlier, the runbook retention policy could not be set. Instead, Octopus would keep the last 1000 runs. ::: ## Snapshots versus Releases :::div{.success} Config-as-code runbooks use commits instead of snapshots. If your project uses config-as-code runbooks, read about [snapshots vs commits](/docs/runbooks/config-as-code-runbooks#snapshots-vs-commits) instead. ::: Runbooks are similar to deployments in that they also take a copy of the process to be used during execution. For a runbook this is referred to as a [snapshot](/docs/runbooks/runbook-publishing/#snapshots) versus a [release](/docs/releases) for a deployment. Runbooks can have two different types of snapshots: - Draft - Published :::div{.hint} **Package versions are included in a snapshot** Similar to releases, the version of any packages that are used in the runbook are also snapshotted. This means if a newer version of the package is uploaded, and you wish to use it in your runbook, you will need to create a new snapshot of the runbook.
::: # Moving your Octopus Server to another Active Directory domain Source: https://octopus.com/docs/security/authentication/active-directory/moving-active-directory-domains.md This page describes the steps and considerations to move your Octopus Server from one Active Directory domain to another. ## Steps We assume your Octopus Server and users are currently in the same domain - `Domain A`. It's also assumed that your users will remain in `Domain A`. 1. Update your infrastructure to move the server from one domain (`Domain A`) to another (`Domain B`). 2. Ensure that `Domain B` trusts `Domain A`. This can be either a one-way trust, where `Domain B` trusts `Domain A`, or a two-way trust, where `Domain B` trusts `Domain A` and `Domain A` trusts `Domain B`. This is largely a decision for you and your infrastructure personnel. 3. Update your Octopus Server Windows service account if desired. If needed, you can update the account the Octopus Server Windows service is running under. If you select an account from `Domain B`, then you need to ensure that there is a two-way trust relationship in place. If you do change the account, then you need to ensure that your Octopus SQL Server database grants this user access to the database as a `db_owner`. --- Assuming your users' email address, SAMAccountName or UPN do not change, they should now be able to log in normally. ## Notes: * If you move your users to the new domain as well, this should still work assuming that the user's email address, SAMAccountName, or UPN do not change. If they do, then you'll need to update your instance to migrate your users/teams over. * Take care when moving Octopus to a new domain if you have mapped [external groups and roles](/docs/security/users-and-teams/external-groups-and-roles) to Teams in Octopus. Octopus stores the Active Directory security group SIDs; however, it does not store the Active Directory security group names.
You need to ensure the new domain uses the same SIDs so that the groups and the users that belong to the groups will be recognized. Even if you use the same group names, Octopus will not recognize the group unless the SIDs are the same. * If you need to edit your groups, this can get complicated. Read our page on [external groups and roles](/docs/security/users-and-teams/external-groups-and-roles) for more information. # Microsoft Entra ID authentication Source: https://octopus.com/docs/security/authentication/azure-ad-authentication.md You can use Microsoft Entra ID, formerly known as Azure Active Directory (AAD), to authenticate when logging in to the Octopus Server. To use Microsoft Entra ID authentication with Octopus, you will need to do the following: 1. Configure Microsoft Entra ID to trust your Octopus Deploy instance by setting it up as an App in your Microsoft Azure Portal. 2. Optionally, map Entra ID Users into Roles so that users can be automatically connected to Octopus Teams. 3. Configure your Octopus Deploy instance to trust and use Microsoft Entra ID as an Identity Provider. :::div{.hint} If your Octopus database is running in Azure SQL, it's also possible to configure a Microsoft Entra ID Managed identity for use with your SQL database. See our [Using Microsoft Entra ID in Azure SQL](/docs/installation/sql-server-database/#using-aad-in-azure-sql) section for further information. ::: ## Configure Microsoft Entra ID First, you need to configure your Microsoft Entra ID to trust your instance of Octopus Deploy by configuring an App in your Microsoft Azure Portal. ### Configure Octopus Deploy as an App in your Azure Portal :::div{.success} **Get the right permissions for your Microsoft Entra ID tenant before starting** To configure your instance of Octopus Deploy as an App, you need administrator permissions for the desired Microsoft Entra ID tenant in your subscription. ::: 1. 
Log in to the [Azure Portal](https://portal.azure.com), click on your account positioned at the top-right of the screen, then select your desired directory: :::figure ![Switch Azure Directories](/docs/img/security/authentication/images/aad-portal.png) ::: 1. Select "All services" in the [Azure Portal](https://portal.azure.com) and select **Microsoft Entra ID** from the Azure menu: :::figure ![Open Microsoft Entra ID service](/docs/img/security/authentication/images/aad-service.png) ::: 1. In the top menu, select **Add**, then choose **App registration**: :::figure ![New App registration](/docs/img/security/authentication/images/aad-new-app-registration.png) ::: 4. Choose a **Name** such as *Octopus Deploy*, select the correct **Supported account type** for Single or Multi-Tenant, choose **Web** from the drop-down and enter a value for **Redirect URI** like `https://octopus.example.com/api/users/authenticatedToken/AzureAD`. Then click **Register**. - The URL must use HTTPS. - When users input their credentials, the value you specify for **Name** will appear at the top of the Azure authentication page. - The value you specify for **Redirect URI** should be the URL to your Octopus Server. This address is only linked within your browser, so it only has to be resolvable on your network, not from the public Internet. - Include `/api/users/authenticatedToken/AzureAD` at the end of your Octopus URL. :::div{.hint} Take care when you add this URL. It is **case-sensitive** and sensitive to trailing **slash** characters. You cannot use `HTTP` here; you must use `HTTPS`. You will need to use an SSL certificate from a Certificate Authority, such as [LetsEncrypt](https://letsencrypt.org/). You can do this by using Octopus Deploy [Let's Encrypt Integration](/docs/security/exposing-octopus/lets-encrypt-integration) or one from Active Directory Certificate Services.
::: :::figure ![Filling the App registration form](/docs/img/security/authentication/images/aad-new-app-registration-form.png) ::: #### Enable ID Tokens and configure :::div{.hint} Support for OAuth code flow with PKCE was introduced in **Octopus 2022.2.4498**. This step is **not required** for any newer versions of Octopus. Instead, we suggest following the instructions for [generating a client secret](#generate-the-client-secret) below. ::: 1. Within your new App registration in Microsoft Entra ID, navigate to Manage > Authentication. 2. Ensure the ID Tokens box is enabled: :::figure ![Enable ID Token](/docs/img/security/authentication/images/aad_id_token.png) ::: #### Enable Logout URL if using Single Sign-On (optional) 1. Within your new App registration in Microsoft Entra ID, navigate to Authentication. 2. For the logout URL, enter `https://octopus.example.com/app#/users/sign-out`, substituting your own Octopus URL. :::figure ![Configure Logout URL](/docs/img/security/authentication/images/aad_logout_url.png) ::: #### Mapping Microsoft Entra ID users into Octopus teams (optional) If you want to manage user/team membership via Microsoft Entra ID, you must configure Roles for your App. To add Role(s), you can create and assign a new App Role in the Azure Portal or directly edit the App's manifest. ##### Create and Assign a new Microsoft Entra ID App Role 1. On the left-hand menu, select **App roles** and click the **Create app role** button. ![Creating new App Role](/docs/img/security/authentication/images/aad-new-app-role-create.png) 2. Complete all required fields and click **Apply** to create the new app role. ![Apply App Role value and name](/docs/img/security/authentication/images/aad-new-app-role-create-apply.png) :::div{.hint} The **Value** property is the most important field.
This value becomes the external Role ID you use later on when [adding this Role to a Team](/docs/security/users-and-teams/external-groups-and-roles/#ExternalGroupsandRoles-AddExternalRole) in Octopus Deploy. ::: ##### Edit Microsoft Entra ID App Manifest 1. Under the App Registration, select **Manifest**, and then you can start editing your manifest file as required. :::figure ![Editing an App registration manifest](/docs/img/security/authentication/images/aad-edit-app-registration-manifest.png) ::: The example below illustrates two roles, one for administrators and one for application testers. You need to create each required group in the Manifest file. :::div{.success} Make sure you replace the `NEWGUID`s with generated GUIDs (unique per entry). You can generate these online, for example with the [Online GUID / UUID Generator](https://guidgenerator.com/). ::: ```json { "appId": "myAppGuid", "appRoles": [ { "id": "NEWGUID1", "allowedMemberTypes": ["User"], "description": "OctopusAdministrators", "displayName": "OctopusAdmins", "isEnabled": true, "value": "octopusAdmins" }, { "id": "NEWGUID2", "allowedMemberTypes": ["User"], "description": "OctopusTesters", "displayName": "OctopusTesters", "isEnabled": true, "value": "octopusTesters" } ] } ``` After you have completed editing the manifest, select the **Save** option. :::figure ![Saving an App registration manifest](/docs/img/security/authentication/images/aad-save-app-registration-manifest.png) ::: :::div{.hint} The **value** property is the most important one. This value becomes the external Role ID you use later on when [adding this Role to a Team](/docs/security/users-and-teams) in Octopus Deploy. ::: :::div{.success} **Want a more advanced manifest?** For more advanced scenarios, please see the [Azure manifest file documentation](https://learn.microsoft.com/en-us/entra/identity-platform/reference-app-manifest).
::: #### Configure users and groups in Microsoft Entra ID (optional) After the App Role(s) have been defined, users/groups from Microsoft Entra ID may be mapped into these Roles. 1. Under the App Registration, select your App registration's name under **Managed application in local directory**. :::figure ![Editing App registration users](/docs/img/security/authentication/images/aad-edit-app-registration-users.png) ::: 1. Choose **Assign users and groups** and select **Add user/group** to create a new role assignment. 2. Select the users you would like to assign roles to. Next, under **Select Role**, specify one of the AppRoles that you added to the App registration manifest. :::figure ![Editing App registration users role](/docs/img/security/authentication/images/aad-edit-app-registration-users-role.png) ::: 4. To save your changes, select the **Assign** button. :::div{.hint} If you have only one role, it will be automatically assigned. If you have multiple roles, a pop-up will appear when you click the **Assign** button so you can select the role to assign. ::: ## Configure Octopus Server To complete the Octopus configuration, you need three values from the Microsoft Entra ID configuration: the **Client ID**, **Client Secret**, and **Issuer**. ### Get the Client ID and Issuer In the Azure portal, you can see the **Application (client) ID** and **Directory (tenant) ID** on your App's Overview page. :::figure ![Getting the App registration](/docs/img/security/authentication/images/aad-get-app-registration-id.png) ::: ### Generate the Client secret In the Azure portal, navigate to the **Certificates & secrets** page and click **New client secret** to generate a new client secret for the App registration.
:::figure ![Generating a client secret](/docs/img/security/authentication/images/aad-client-secret.png) ::: ### Setting the Client ID, Client secret and Issuer in Octopus Deploy :::div{.hint} Support for OAuth code flow with PKCE was introduced in **Octopus 2022.2.4498**. If you are using a version older than this, the **Client secret** setting is not required. ::: :::div{.hint} If Microsoft Entra ID is used to synchronize external groups with the 'group' role claim type and the user is a member of more than 200 Microsoft Entra ID groups, the client secret field is required. ::: To configure Octopus to use Microsoft Entra ID authentication, you'll need: - The **Client ID**, which should be a GUID. This is the **Application (client) ID** in the Azure App Registration Portal. - The **Client secret**, which should be a long string value. This is the **Value** of a client secret in the Azure App Registration Portal. - The **Issuer**, which should be a URL like `https://login.microsoftonline.com/GUID` where the `GUID` is a particular GUID identifying your Microsoft Entra ID tenant. This is the **Directory (tenant) ID** in the Azure App Registration Portal. When you have those values, run the following from a command prompt in the folder where you installed Octopus Server: ```powershell Octopus.Server.exe configure --azureADIsEnabled=true --azureADIssuer=Issuer --azureADClientId=ClientID --azureADClientSecret=ClientSecret # e.g. # Octopus.Server.exe configure --azureADIsEnabled=true --azureADIssuer=https://login.microsoftonline.com/12341234-xxxx-xxxx-xxxx-xxxxxxxxxxxx --azureADClientId=43214321-xxxx-xxxx-xxxx-xxxxxxxxxxxx --azureADClientSecret=bCeXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ``` Alternatively, these settings can be defined through the user interface by selecting **Configuration ➜ Settings ➜ Azure AD** and populating the fields **Issuer**, **ClientId**, **ClientSecret**, and **IsEnabled**. 
If you want to remove the ClientSecret you can use the delete button shown in the screenshot. :::figure ![Settings](/docs/img/security/authentication/images/aad-azure-ad-settings.png) ::: ### Assign app registration roles to Octopus teams (optional) If you followed the optional steps to modify the App registration's manifest to include new roles, you can assign them to **Teams** in the Octopus Portal. 1. Open the Octopus Portal and select **Configuration ➜ Teams**. 2. Either create a new **Team** or choose an existing one. 3. Under the **Members** section, select the option **Add External Group/Role**. ![Adding Octopus Teams from external providers](/docs/img/security/authentication/images/add-octopus-teams-external.png) 4. Enter the details from your App registration's manifest. In this example, we need to supply `octopusTesters` as the **Group/Role ID** and `OctopusTesters` as the **Display Name**. ![Add Octopus Teams Dialog](/docs/img/security/authentication/images/add-octopus-teams-external-dialog.png) 5. Save your changes by clicking the **Save** button. ### Octopus user accounts are still required Even if you are using an external identity provider, Octopus still requires a [user account](/docs/security/users-and-teams/), so that you can assign those people to Octopus teams and subsequently grant permissions to Octopus resources. Octopus will automatically create a [user account](/docs/security/users-and-teams) based on the profile information returned in the security token, which includes an **Identifier**, **Name**, and **Email Address**. **How Octopus matches external identities to user accounts** When the security token is returned from the external identity provider, Octopus looks for a user account with a **matching Identifier**. If there is no match, Octopus looks for a user account with a **matching Email Address**. If a user account is found, the external identifier will be added to the user account for next time. 
If a user account is not found, Octopus will create one using the profile information in the security token.

:::div{.hint}
**Existing Octopus user accounts**

If you already have Octopus user accounts and you want to enable Microsoft Entra ID authentication:

- Make sure the **Email Address** and **Username** values for Octopus user accounts are both identical.
- Confirm the Octopus user **Email Address** and **Username** match the email address configured in Microsoft Entra ID.

This will maximize the chance for your existing users to sign in using Microsoft Entra ID and prevent duplicate Octopus user accounts from being created.
:::

### Getting permissions

If you are installing a clean instance of Octopus Deploy, you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command:

```powershell
Octopus.Server.exe admin --username USERNAME --email EMAIL
```

The most important part of this command is the email, as usernames are not necessarily included in the claims from the external providers. When the user logs in, the matching logic must be able to align their user record with the email from the external provider, or they will not be granted permissions.

## Next steps

Now that you're using an external identity provider, it is easy to increase your security. You could consider configuring [Multi-Factor Authentication](https://docs.microsoft.com/en-us/azure/multi-factor-authentication/multi-factor-authentication); after all, Octopus Deploy has access to your production environments! You should also consider disabling any authentication providers you aren't using, like username and password authentication.

## Troubleshooting

If you are having difficulty configuring Octopus to authenticate with Microsoft Entra ID, check your [server logs](/docs/support/log-files) for warnings.
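A common source of trouble is the identifier-then-email matching behavior described earlier on this page. The following is a hypothetical simplification of those rules (the function and field names are ours, not Octopus Server's actual code), which can help when reasoning about why a sign-in mapped to an unexpected account:

```python
# Sketch of the documented matching rules: match by external identifier first,
# then by email address (linking the identifier for next time), else create.
def match_user(users, identifier, email, name):
    # 1. Look for an account already linked to this external identifier.
    for user in users:
        if identifier in user["identifiers"]:
            return user
    # 2. Fall back to a matching email address; link the identifier for next time.
    for user in users:
        if user["email"] == email:
            user["identifiers"].append(identifier)
            return user
    # 3. No match: create a new account from the token's profile claims.
    user = {"identifiers": [identifier], "email": email, "name": name}
    users.append(user)
    return user

accounts = [{"identifiers": [], "email": "jane@example.com", "name": "Jane"}]
matched = match_user(accounts, "entra-oid-123", "jane@example.com", "Jane")
print(matched["identifiers"])  # prints ['entra-oid-123']
```

This is why keeping email addresses consistent between Octopus and Entra ID matters: if neither the identifier nor the email matches, a duplicate account is created.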
### Triple-check your configuration

Unfortunately, security-related configuration is sensitive to even the smallest mistakes. Make sure:

- You don't have any typos or copy-paste errors.
- Values match exactly, including case, as they are case-sensitive.
- Trailing slash characters are added or removed exactly as required.

### Check the OpenID Connect metadata is working

You can see the OpenID Connect metadata by going to the Issuer address in your browser and adding `/.well-known/openid-configuration` to the end. In our example, this would be something like `https://login.microsoftonline.com/b91ebf6a-84be-4c6f-97f3-32a1d0a11c8a/.well-known/openid-configuration`.

### Inspect the contents of the security token

:::div{.hint}
When using OAuth code flow with PKCE, the JWT token will not be visible to the end user. To use this debugging method, please remove the client secret and ensure that your configuration lines up with the pre-PKCE method, [detailed above](/docs/security/authentication/azure-ad-authentication#enable-id-tokens-and-configure).
:::

Sometimes the contents of the security token sent back by Microsoft Entra ID aren't exactly what Octopus expects; certain claims might be missing or named differently. This will usually result in the Microsoft Entra ID user incorrectly mapping to a different Octopus user than expected. The best way to diagnose this is to inspect the JSON Web Token (JWT), which is sent from Microsoft Entra ID to Octopus via your browser. To inspect the contents of your security token:

1. Open your browser's Developer Tools and enable Network logging, making sure the network logging is preserved across requests.
2. In Chrome Dev Tools, this is called "Preserve Log".

:::figure
![Preserve Logs](/docs/img/security/authentication/images/5866122.png)
:::

3. Attempt to sign into Octopus using Microsoft Entra ID and find the HTTP POST coming back to your Octopus instance from Microsoft Entra ID on a route like `/api/users/authenticatedToken/azureAD`. You should see an **id_token** field in the HTTP POST body.
4.
Grab the contents of the **id_token** field and paste that into [https://jwt.io/](https://jwt.io/), which will decode the token for you.

:::figure
![ID Token](/docs/img/security/authentication/images/5866123.png)
:::

5. Octopus uses most of the data to validate the token, but primarily uses the **sub**, **email**, and **name** claims. If these claims are not present, you will likely see unexpected behavior.

### Entra ID users with 200+ security groups

:::div{.hint}
If a user has more than 200 security groups assigned, we need to retrieve the user's security groups using the Graph API, which requires the `aio` claim to be present in the `id_token` we send to the Graph API. If this claim is missing, check the following:

- You don't have any wildcards `*` in the **Redirect URI**.
- You have enabled `ID Tokens` in the **App Registration**.
:::

### Contact Octopus Support

If you aren't able to resolve the authentication problems yourself using these troubleshooting tips, please reach out to our [support team](https://octopus.com/support) with:

1. The contents of your OpenID Connect Metadata, or the link to download it (see above). This can be different for each Azure AD App.
2. A copy of the decoded payload for some security tokens (see above), including tokens that work as expected and ones that don't.
3. A screenshot of the Octopus User Accounts, including their username, email address, and name.

# Microsoft Entra ID

Source: https://octopus.com/docs/security/authentication/scim/configuring-microsoft-entra.md

:::div{.hint}
Support for Entra ID with SCIM is rolling out as an Early Access Preview to enterprise customers in Octopus Cloud.
:::

Octopus Deploy supports [SCIM](./) 2.0 as a feature of the Azure AD authentication provider. This allows users and groups created and managed in Entra ID to be synchronized with users and teams in Octopus Deploy, rather than manually provisioning users or provisioning them just-in-time via **Allow Auto User Creation**.
There are a few steps involved in this process, so we recommend confirming that each step works before proceeding with the next.

## Requirements

- [Entra ID authentication](/docs/security/authentication/azure-ad-authentication) configured and working
- An Octopus Deploy license that includes the SCIM feature, such as an Enterprise license
- Network access to allow inbound HTTPS API requests from Entra ID to Octopus Deploy.

:::div{.info}
If you're using Octopus Cloud, we don't recommend using [IP allow listing](/docs/octopus-cloud/ip-address-allow-list) with SCIM, as there are [a large number of IP addresses used by Entra ID](https://learn.microsoft.com/en-us/entra/identity/app-provisioning/use-scim-to-provision-users-and-groups#ip-ranges) and they can change regularly.
:::

## Configuring Octopus

### Review existing Octopus users

The implementation of Entra ID authentication in Octopus Deploy matches against existing users by email address, so review your existing users in Octopus Deploy and make sure that their email addresses are up to date and match the details recorded in Entra ID. If you skip this step, you may find that new users get created instead of the existing users being adopted.

:::div{.info}
Once users have been linked to their Entra ID accounts via SCIM, we recommend that no user edits are made within Octopus Deploy, as they'll be overridden by Entra ID when it next updates the user. SCIM makes Entra ID the source-of-truth for your users and teams, so make any edits in Entra ID itself.
:::

### Configure user authentication and enable SCIM support

1. Configure [Entra ID authentication](/docs/security/authentication/azure-ad-authentication) in Octopus Deploy. Don't configure any [external group mappings](/docs/security/authentication/azure-ad-authentication#mapping-microsoft-entra-id-users-into-octopus-teams-optional) as SCIM will allow Entra ID to create internal groups within Octopus Deploy.
1.
Ensure that you can authenticate as an Entra ID user to Octopus Deploy, by using the **Sign in with Microsoft** button on the Octopus Deploy login page.
1. Navigate to **Configuration** -> **Settings** -> **Azure AD** in Octopus Deploy.
1. Change **Allow Auto User Creation** to `No` by unchecking the box. Users will be automatically created by Entra ID, so there's no need to create users on-the-fly when they log in for the first time.
1. Change **Enable SCIM** to `Yes` by checking the box and clicking **Save**. This allows Octopus Deploy to receive SCIM requests, but we'll need to configure Entra ID to send them next.

:::div{.hint}
If you can't see an option for **Enable SCIM** in the **Azure AD** settings, check that you're running a recent version of Octopus Deploy and that you have an Enterprise license. If you have any questions about licensing, reach out to us at [sales@octopus.com](mailto:sales@octopus.com).
:::

### Configure Machine-to-Machine authentication

Create a dedicated [Service Account](/docs/security/users-and-teams/service-accounts) for Entra ID SCIM, and add it to the **Octopus Managers** team, so that it has sufficient permissions. Once the **Service Account** has been created, create a new API key for the account ready to provide to Entra ID.

:::figure
![Service account](/docs/img/security/authentication/scim/entraid/service-account.png)
:::

:::div{.info}
The Octopus Deploy account used by Entra ID needs to have Octopus permissions of the same level or greater than the permissions of the users that it is managing. This is required by Octopus Deploy to avoid escalation of privilege, for example to prevent a low-privilege user from creating an administrator account.
:::

## Configuring Entra ID

### Create new provisioning configuration

1. Open the **Azure Portal** and navigate to the **Enterprise Application** you created for Octopus Deploy when you configured Azure AD authentication for your instance.
If you're on the **App Registration Overview** page, you can click the **Managed application in local directory** link to navigate to the matching **Enterprise Application**. 1. Navigate to the **Provisioning** section and click **New configuration** - For **Select authentication method** choose **Bearer authentication** - Enter the **Tenant URL** of `https://your-octopus-url/api/scim/v2/entraid/` (replacing `https://your-octopus-url` with the URL of your Octopus Server). - For **Secret token**, paste in the Octopus Deploy API key that you created for your Entra ID SCIM service account. :::figure ![New provisioning configuration](/docs/img/security/authentication/scim/entraid/new-provisioning-configuration.png) ::: 1. Click **Test connection** to get Entra ID to issue a request to the SCIM API :::div{.hint} If the test is not successful, double-check that: - you enabled SCIM in the Azure AD authentication provider settings and saved the changes - the Octopus Deploy URL has the `/api/scim/v2/entraid/` suffix on it - your Octopus Deploy instance is reachable from the Entra ID IP addresses - the Octopus Deploy API key is valid and has sufficient permissions ::: 1. Click **Create** to save the new provisioning configuration. ### Map group attributes 1. Within the new provisioning configuration, navigate to **Manage** -> **Attribute mapping**. 1. Click on **Provision Microsoft Entra ID Groups**, then under **Attribute Mappings** delete `externalId` so that just `displayName` and `members` are shown in the list, then click **Save**. :::figure ![Group Attribute Mapping](/docs/img/security/authentication/scim/entraid/attribute-mapping-groups.png) ::: ### Map user attributes 1. Within the new provisioning configuration, navigate to **Manage** -> **Attribute mapping**. 1. Click on **Provision Microsoft Entra ID Users**. There are a lot of attributes here that aren't relevant for Octopus Deploy, such as phone numbers, addresses, job titles, etc. 
Delete all of those, keeping only the following attributes: - `userName` - `active` - `displayName` - `emails[type eq "work"].value` - `externalId` 1. Edit `userName` to change the **Matching precedence** to `2`, then click **Save**. 1. Edit `externalId` to change the mapping to the source attribute of `objectId`. This is the identifier used by the Azure AD authentication provider, so using it for SCIM allows Octopus Deploy to easily find the same user. Change **Match objects using this attribute** to `Yes` and set **Matching precedence** to `1`, then click **Save**. 1. The completed user mappings should look like this: :::figure ![User Attribute Mapping](/docs/img/security/authentication/scim/entraid/attribute-mapping-users.png) ::: :::div{.warning} The SCIM functionality expects this specific user attribute mapping from Entra ID and may not work correctly if the configured attribute mapping differs. ::: ### Review settings and scope 1. Navigate back to the provisioning configuration page, then **Manage** -> **Provisioning**, then expand the **Settings** group and review the settings. You may wish to enable email notifications on failure or change the **Scope**. 1. If the scope is set to `Sync only assigned users and groups`, navigate to **Users and groups** to specify which users and groups should be provisioned in Octopus Deploy. ### Test the configuration 1. Within the new provisioning configuration, navigate to **Provision on demand**, then select a single Entra ID user to be provisioned in Octopus Deploy. Click **Provision**. The results page should show that the provisioning was successful. 1. Within Octopus Deploy, navigate to **Configuration** -> **Users** and you should see the newly provisioned user. If the user was an existing user, you should see that they now have an **Azure AD** login in their user profile. :::div{.warning} If there were any issues with provisioning the single user, review the Entra ID provisioning configuration before proceeding. 
It's much easier to fix any issues now, rather than when there are a large number of users and groups that have been incorrectly provisioned. ::: ### Start provisioning 1. Once you have verified that everything is correctly configured, navigate to the **Overview** page of the provisioning configuration. 1. Click **Start provisioning**. This will queue up a job in Entra ID to perform the initial sync, which may take a few minutes to start. :::div{.info} The default provisioning interval in Entra ID is 40 minutes, so any changes to users or groups may take up to this long to be applied to Octopus Deploy. ::: 1. Once the initial sync has completed, the status on the **Overview** page will update accordingly. :::figure ![Provisioning sync completed](/docs/img/security/authentication/scim/entraid/provisioning-sync-complete.png) ::: Entra ID will now reach out to Octopus Deploy regularly via the SCIM API whenever a user or group needs to be created, updated or deleted. ## Troubleshooting - Entra ID displays provisioning progress on the **Overview** page of the provisioning configuration. Provisioning can also be paused and restarted from this page, if you want to encourage Entra ID to try again. - Entra ID keeps detailed logs of all provisioning operations, accessible under **Monitor** -> **Provisioning logs**. Use the **Status** filter on this page to find any recorded failures. Please download the logs in JSON format and provide them to Octopus Support if you need any assistance. - You can review the actions taken within Octopus by looking at the **Audit Trail** for [the Entra ID service account](#configure-machine-to-machine-authentication). Navigate to **Configuration** -> **Users** and select the service account. Click the kebab menu in the top right, then click **Audit Trail** and check the box for **Include system events**. 
:::figure
![Service account audit trail](/docs/img/security/authentication/scim/entraid/service-account-audit-trail.png)
:::

## Known Limitations

- SCIM does not provide a way for Octopus Deploy to push changes back into Entra ID. Avoid making any modifications in Octopus Deploy to any users or teams that are provisioned via SCIM, as these may get overwritten when Entra ID updates them in future. If you need to make changes, perform these changes in the source-of-truth, which is Entra ID.
- Octopus Deploy does not support nested groups. Any requests from Entra ID to add a group as a member of another group will be ignored.
- Any groups provisioned by Entra ID will be global teams, rather than space-scoped teams, because Azure AD authentication applies to the whole Octopus Deploy instance. You can still apply space-scoped permissions to these teams, but the teams will be visible to all spaces.
- Octopus Deploy only supports a single email address for each user, whereas Entra ID supports many. Octopus Deploy will ignore any email addresses other than the **Work** email address, i.e. `emails[type eq "work"].value`.
- Entra ID has a [Provisioning Expression builder](https://learn.microsoft.com/en-us/entra/identity/app-provisioning/expression-builder) feature in preview, which depends on APIs that are not yet implemented in Octopus Deploy. As such, you may see errors if you try to use the Provisioning Expression builder in the Azure Portal.

# Hardening Octopus

Source: https://octopus.com/docs/security/hardening-octopus.md

We pride ourselves on making Octopus Deploy a secure product. If you are hosting the Octopus Server yourself, you are responsible for the security and integrity of your Octopus installation. This guide will help you harden your network, host operating system, and the Octopus Server itself.

:::div{.hint}
Have you heard about [Octopus Cloud](https://octopus.com/cloud)?
We take care of hosting your Octopus Server for you so you can get on with the job of deploying and managing your applications.
:::

## Before you begin

Octopus Deploy is a complex system with many security features baked in and tuned by default. Take some time to understand what we've built into the product, and what you are ultimately taking responsibility for when self-hosting Octopus Deploy. Learn about [security in Octopus Deploy](/docs/security).

Reading this guide carefully before you begin will help you prepare all the secure networking and server infrastructure you need for your Octopus installation. If you need any help along the way, don't hesitate to [get in touch](https://octopus.com/support)! Depending on your scenario, you may want to relax or ignore some of these recommendations.

### Familiarize yourself with Octopus Server

If you consider networking, the host operating system, Microsoft SQL Server, and Octopus Server: it is very likely Octopus Server is the new kid on the block. You should consider downloading a free trial of Octopus Server and setting it up on your local machine so you are familiar with how it works. This will eliminate some potential surprises as you progress through the security hardening. Learn about [getting started with Octopus Deploy](/docs/getting-started).

### Choose your order for hardening

Depending on your familiarity with Octopus Server, SQL Server, networking, and your host operating system, you should consider the order in which you perform the hardening. For example, if you are unfamiliar with Octopus Server, perhaps you should start there, getting your server up and running and working as you'd expect, then move on to the host operating system, the SQL Server, and finally your networking.

## Harden your Octopus Server

1. Upgrade to the latest version.
1. Securely expose your Octopus Server to your users, infrastructure, and external services.
1. Use HTTPS over SSL.
1. Configure HTTP security.
1. Configure your workers.
1. Configure the way Octopus Server communicates with deployment targets.

### Upgrade to the latest version

Generally speaking, the latest available version of Octopus Server will be the most secure. You should consider a strategy for keeping Octopus Server updated. We follow a [responsible disclosure policy](#disclosure-policy) so it is possible for you to be aware of any known issues which affect the security and integrity of your Octopus Server.

### Securely expose your Octopus Server

For Octopus Server to be useful you need to expose its HTTP API to your users, and perhaps your infrastructure and some external services. There are many different approaches to solving this problem, but at its core you will want to:

1. Use HTTPS over SSL. Learn about [safely exposing your Octopus Server](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https).
1. Configure the built-in HTTP security features as appropriate for your scenario. Learn about [HTTP security headers](/docs/security/http-security-headers).

### Configure your Workers \{#configuring-workers}

Workers offer a convenient way to run scripts and certain deployment steps. Learn about [workers](/docs/infrastructure/workers).

We highly recommend configuring external workers running on a different host to your Octopus Server. This is the easiest and most secure approach to prevent user-provided scripts from doing harm to your Octopus Server.

Learn about the [built-in worker](/docs/infrastructure/workers/built-in-worker). Learn about [external workers](/docs/infrastructure/workers).

### Configure how Octopus Server communicates with deployment targets

Octopus Server always uses a secure and tamper-proof communications transport for communicating with deployment targets:

- Learn about [Octopus Server to Tentacle communication](/docs/security/octopus-tentacle-communication).
- Learn about [Octopus Server to SSH communication](/docs/infrastructure/deployment-targets/linux/ssh-target).
The decisions you need to make are:

1. Which kind of deployment targets do you want to allow? Listening Tentacles? Polling Tentacles? SSH? This will have an impact on how you configure your network. See [harden your network](#harden-your-network).
1. Do you want to use a proxy server? Learn about [proxy support in Octopus Deploy](/docs/infrastructure/deployment-targets/proxy-support).

## Harden your host operating system

These steps apply to the host operating system for your Octopus Server. You may want to consider similar hardening for your [deployment targets](/docs/infrastructure/) and any [workers](/docs/infrastructure/workers).

1. Rename local administrator accounts.
1. Configure malware protection.
1. Disable weak TLS protocols.
1. Prevent user-provided scripts from doing harm.
   a. Run workers under a different security context.
   a. Prevent unwanted file access.
   a. Prevent unwanted file execution.
   a. Prevent creating scheduled tasks.
1. Configure your operating system firewall - see [harden your network](#harden-your-network).

### Rename local administrator accounts

It might seem really simple, but renaming your `Administrator` account to anything else makes it that much harder for attackers to use this attack vector into your Octopus Server. Here is an example script to rename the built-in `Administrator` account in Windows.

```powershell
Write-Output "Ensure local Administrator account renamed..."
$user = Get-LocalUser -Name Administrator -ErrorAction SilentlyContinue
if($user) {
  Write-Output "Renaming local 'Administrator' account to 'Bob'..."
  Rename-LocalUser -Name Administrator -NewName Bob
} else {
  Write-Output "The local 'Administrator' account has already been renamed."
}
```

### Configure malware protection

Depending on your host operating system, and your requirements for malware protection, you may want to install and configure a specific application.
At the very least, Windows Defender is a very good starting place on modern Windows operating systems. Here is an example script for configuring Windows Defender to exclude the Octopus work folders, and to automatically download new definitions.

**Note:** you may need to change the excluded folders/files if you install Octopus Server or Tentacle into a different location.

```powershell
# Install and Configure: https://docs.microsoft.com/en-us/windows/threat-protection/windows-defender-antivirus/windows-defender-antivirus-on-windows-server-2016
# Automatic Exclusions: https://docs.microsoft.com/en-us/windows/threat-protection/windows-defender-antivirus/configure-server-exclusions-windows-defender-antivirus
# Configure Custom Exclusions: https://docs.microsoft.com/en-us/windows/threat-protection/windows-defender-antivirus/configure-extension-file-exclusions-windows-defender-antivirus

Write-Output "Installing Windows Defender..."
Install-WindowsFeature -Name "Windows-Defender"

Write-Output "Setting Windows Update to 'Download updates but let me choose whether to install them'."
Write-Output "This value allows Windows Defender to download and install definition updates automatically, but other updates are not automatically installed."
cscript C:\Windows\System32\Scregedit.wsf /AU 3

Write-Output "Excluding the Tools folder for Octopus Server (e.g. Calamari) from Windows Defender..."
Add-MpPreference -ExclusionPath "C:\Octopus\OctopusServer\Tools"
Add-MpPreference -ExclusionPath "C:\Octopus\OctopusServer\Tools\*"

Write-Output "Excluding the Tools folder for Octopus Tentacles/Workers (e.g. Calamari) from Windows Defender..."
Add-MpPreference -ExclusionPath "C:\Octopus\Tools"
Add-MpPreference -ExclusionPath "C:\Octopus\Tools\*"

Write-Output "Excluding Octopus Work folder from Windows Defender..."
Add-MpPreference -ExclusionPath "C:\Octopus\Work"
Add-MpPreference -ExclusionPath "C:\Octopus\Work\*"
```

### Disable weak TLS protocols \{#disable-weak-tls-protocols}

All communication between Octopus Server and Tentacles is performed over a secure ([TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security)) connection. Since both Server and Tentacle rely on the host OS for the TLS versions available when establishing a secure connection, you can harden the available TLS implementation.

:::div{.warning}
Every installation is different and the examples provided here are only intended to demonstrate functionality. Ensure you are complying with your company's security policies when you configure any infrastructure and that your specific implementation matches your needs.
:::

#### Disable SSLv3, TLS 1.0 and 1.1 on Windows \{#disable-weak-tls-protocols-windows}

On Windows, the easiest way to disable weak versions of SSL and TLS is by using a tool like [IISCrypto](https://www.nartac.com/Products/IISCrypto) to change the Windows Registry.

:::div{.problem}
**Take care editing registry entries**

Editing the Windows registry can have serious implications. Please make sure you understand and are comfortable with the potential risks. Remember to always [backup any keys](https://support.microsoft.com/en-us/topic/how-to-back-up-and-restore-the-registry-in-windows-855140ad-e318-2a13-2829-d428a2ab0692) before they are modified. If you have any questions or need assistance, please [contact us](https://octopus.com/support).
:::

If you prefer, you can script it too. This can be useful when automating Server installation.
The following example PowerShell script will disable `SSLv3`, `TLSv1` and `TLSv1.1`: ```powershell # SSLv3 New-Item "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" -name "Enabled" -value "0" -PropertyType "DWord" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" -name "DisabledByDefault" -value 1 -PropertyType "DWord" -Force | Out-Null New-Item "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client" -name "Enabled" -value "0" -PropertyType "DWord" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client" -name "DisabledByDefault" -value 1 -PropertyType "DWord" -Force | Out-Null # TLSv1.0 Write-Output "Disable TLS 1.0" New-Item "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server" -name "Enabled" -value "0" -PropertyType "DWord" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server" -name "DisabledByDefault" -value 1 -PropertyType "DWord" -Force | Out-Null New-Item "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client" -name "Enabled" -value "0" -PropertyType "DWord" -Force | Out-Null New-ItemProperty -path 
"HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client" -name "DisabledByDefault" -value 1 -PropertyType "DWord" -Force | Out-Null # TLSv1.1 Write-Output "Disable TLS 1.1" New-Item "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" -name "Enabled" -value "0" -PropertyType "DWord" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" -name "DisabledByDefault" -value 1 -PropertyType "DWord" -Force | Out-Null New-Item "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client" -name "Enabled" -value "0" -PropertyType "DWord" -Force | Out-Null New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client" -name "DisabledByDefault" -value 1 -PropertyType "DWord" -Force | Out-Null ``` :::div{.hint} Once the TLS versions are disabled, reboot your Server and importantly [verify the change was successful](#disable-weak-tls-protocols-verify). ::: #### Disable SSLv3, TLS 1.0 and 1.1 on Ubuntu Server \{#disable-weak-tls-protocols-ubuntu} On Ubuntu `20.04` using OpenSSL `1.1.1f` (the latest at time of writing), you can specify the minimum TLS version to use to be `TLSv1.2` by setting the `MinProtocol` directive in the `/etc/ssl/openssl.cnf` OpenSSL config file: ``` [system_default_sect] MinProtocol = TLSv1.2 ``` On Ubuntu `18.04`, if the `MinProtocol` directive doesn't work, you can try this alternative. 
When using OpenSSL `1.1.1` (the latest at time of writing), you can specify the available TLS protocols explicitly in the `/etc/ssl/openssl.cnf` OpenSSL config file:

```
[system_default_sect]
Protocol = -SSLv3, -TLSv1, -TLSv1.1, TLSv1.2
```

:::div{.hint}
Once the version of TLS is set in your config, you'll want to restart any Tentacle service, and importantly [verify the change was successful](#disable-weak-tls-protocols-verify).
:::

#### Verification of disabling weak TLS protocols \{#disable-weak-tls-protocols-verify}

Once you have changed the available TLS versions, you should verify that the weak protocols have actually been disabled. Tools such as [openssl](https://www.openssl.org/) and [nmap](https://nmap.org/), and websites like [Qualys SSL Labs](https://www.ssllabs.com/ssltest/), can be used to verify the TLS versions and cipher suites available.

### Prevent user-provided scripts from doing harm

These steps only apply if you are running either the built-in worker or an external worker on the same host operating system as the Octopus Server itself. You should prevent custom scripts executed by these workers from doing harm to your Octopus Server.

:::div{.hint}
Consider using an [external worker](/docs/infrastructure/workers) and moving this workload to a different server. This is the best way to prevent any potential for harm to your Octopus Server, and it means you won't need to rely on the rest of these steps.
:::

#### Run as a different user

Applies to: `Built-in worker` and `External worker` running on the Octopus Server

The first step is to make the worker run under a different security context to the Octopus Server. This enables you to make the distinction between what the Octopus Server should be able to do, versus what the worker should be able to do. See [configuring workers](#configuring-workers).
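On a Linux host, one way to give a worker's Tentacle its own security context is a systemd drop-in that runs the service under a dedicated account. This is a hedged sketch only: the unit name `tentacle.service`, the drop-in path, and the `svcworker` account are assumptions to adapt to your installation.

```ini
# /etc/systemd/system/tentacle.service.d/override.conf  (unit name and path assumed)
[Service]
# Run the worker's Tentacle as a dedicated low-privilege service account
User=svcworker
Group=svcworker
```

After saving the drop-in, run `systemctl daemon-reload` and restart the Tentacle service for the change to take effect.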
#### Prevent unwanted file access

Applies to: `Built-in worker` and `External worker` running on the Octopus Server

Here is an example script preventing the worker from accessing the Octopus Server configuration, which contains sensitive information.

**Note:** In your scenario you may need to use a different username and/or Octopus Home folder path.

```powershell
$username = "svcWorker"
$octopusHome = "C:\Octopus"
$acl = Get-Acl -Path $octopusHome
$acl.SetAccessRule(New-Object System.Security.AccessControl.FileSystemAccessRule("$username","FullControl","Deny"))
Set-Acl -Path $octopusHome -AclObject $acl
```

If you are using an external worker, that's all you need to do. However, if you are using the built-in worker, you should allow access to its `Work` directory, which is located under the Octopus Home directory.

```powershell
$workDirectory = Join-Path $octopusHome "OctopusServer\Work"
$acl = Get-Acl -Path $workDirectory
$acl.SetAccessRule(New-Object System.Security.AccessControl.FileSystemAccessRule("$username","FullControl","Allow"))
Set-Acl -Path $workDirectory -AclObject $acl
```

#### Prevent unwanted execution

Applies to: `Built-in worker` and `External worker` running on the Octopus Server

Here is an example script for preventing execution of certain Windows executables which could be used by an attacker to learn information about your network.

```powershell
$username = "svcOctopus"
$executables = @("C:\Windows\System32\NETSTAT.EXE", "C:\Windows\System32\ROUTE.EXE", "C:\Windows\System32\NETSH.EXE")
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("$username","Read","Deny")
foreach ($executable in $executables) {
    $acl = Get-Acl -Path $executable
    $acl.SetAccessRule($rule)
    Write-Output "Denying read access to $executable for $username..."
    Set-Acl -Path $executable -AclObject $acl
}
```

#### Prevent creating scheduled tasks or cron jobs

Applies to: `Built-in worker` and `External worker` running on the Octopus Server

Attackers could potentially create a scheduled task or cron job to run as a privileged user account. Here is an example script to prevent members of the `Authenticated Users` group from creating Scheduled Tasks in Windows.

```powershell
Write-Output "Prevent users from creating scheduled tasks..."
# /grant:r replace existing permissions
# S-1-5-11 "Authenticated Users"
# (CI) "Container Inherit"
# (Rc) "Read Permissions"
& "$env:SystemRoot\System32\icacls.exe" "$env:SystemRoot\System32\Tasks\" "/grant:r" "*S-1-5-11:(CI)(Rc)"
```

## Harden your SQL Server

You don't need to do much here that is specific to Octopus Server.

1. Use Integrated Security for the database connection if possible, otherwise use a strong password.
1. Prevent the Octopus Server's Server Principal (Login) from doing harm outside its own database.

Learn about managing your [Octopus SQL database](/docs/installation/sql-server-database).

## Harden your network \{#harden-your-network}

Your Octopus Server is very similar to any other secure web server: to do anything useful you need to allow certain network traffic in/out. Depending on your scenario you will need to apply these rules at several levels including:

- network control infrastructure
- the firewall in the host operating system

The TCP ports listed below are defaults, and can be changed if required - refer to the relevant documentation if you need to change them.
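As a concrete example at the host-firewall level, the sketch below expresses a subset of the rules from the tables that follow as `ufw` commands for a Linux host. It only prints each command so you can review it first; change `echo` to `sudo` to apply the rules for real. The default ports are assumed, and the `apply` helper is purely illustrative.

```shell
# Print (rather than run) ufw commands implementing a subset of the rules below.
apply() { echo "ufw $*"; }    # change "echo" to "sudo" to apply for real

apply default deny incoming   # All inbound: DENY by default
apply allow 80/tcp            # HTTP (initial connection, redirected to HTTPS)
apply allow 443/tcp           # HTTPS: users and Polling Tentacles
apply allow 10943/tcp         # Polling Tentacle communication
apply deny out 445/tcp        # SMB outbound: DENY
apply deny out 3389/tcp       # RDP outbound: DENY
apply deny out 5985/tcp       # WinRM outbound: DENY
```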
### Inbound rules |Name|Type|Source|Target|Allow/Deny|Description| |---|---|---|---|---|---| |HTTP|`TCP 80`|Users|Octopus Server|ALLOW|We recommend only using HTTPS over SSL, however it can be convenient to allow HTTP for the initial connection which is then forced to HTTPS over SSL.| |HTTPS|`TCP 443`|Users, Polling Tentacles, external services|Octopus Server|ALLOW|Required for HTTPS over SSL. Also required if using [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) over [Web Sockets](/docs/infrastructure/deployment-targets/tentacle/windows/polling-tentacles-web-sockets).| |Polling Tentacle|`TCP 10943`|Polling Tentacles|Octopus Server|ALLOW|Required when using [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) via TCP as deployment targets or external workers.| |SSH|`TCP 22`|Octopus Server|SSH deployment targets|ALLOW|Allows Octopus Server to securely connect to any SSH deployment targets.| |RDP|`TCP 3389`|Remote Desktop Users|Octopus Server|ALLOW|Allows your system administrators to perform maintenance tasks on your Octopus Server.| |All inbound|`ALL`|Anywhere|Octopus Server|DENY|Prevent any other unwanted inbound traffic.| ### Outbound rules |Name|Type|Source|Target|Allow/Deny|Description| |---|---|---|---|---|---| |Listening Tentacle|`TCP 10933`|Octopus Server|Listening Tentacles|ALLOW|Required when using [Listening Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended) as deployment targets or external workers.| |MS SQL|`TCP 1433`|Octopus Server|SQL Server|ALLOW|Allows Octopus Server to connect to its SQL Server Database.| |MS SQL|`UDP 1434`|Octopus Server|SQL Server|ALLOW|Allows Octopus Server to connect to a named instance using the SQL Server Browser Service. 
This may not be required in your configuration.|
|SMB|`TCP 445`|Octopus Server|Anywhere|DENY|Prevents attackers from spreading malware via known SMB vulnerabilities.|
|RDP|`TCP 3389`|Octopus Server|Anywhere|DENY|Prevents attackers from using the Octopus Server as a beachhead into your network via RDP.|
|WinRM-HTTP|`TCP 5985`|Octopus Server|Anywhere|DENY|Prevents attackers from using the Octopus Server as a beachhead into your network via unsecured WinRM.|
|All outbound|`ALL`|Octopus Server|Anywhere|???|Depends on how fine-grained you want control over what your Octopus Server can do. It also depends on where your workers are, and where you are deploying to. Allowing all outbound traffic is a good place to start, then perform network analysis to decide on your next step.|

## Harden your containers

If you run an [Octopus Deploy container](/docs/installation/octopus-server-linux-container), in addition to your usual security measures for running apps in containers, take the following steps to secure it:

- Move your Docker data directory (the default location is `/var/lib/docker`) so that your containers are stored on a separate partition.
- Assign resources carefully:
  - Consider pinning CPUs to namespaces in order to give them a boundary.
  - Consider the amount of memory required: if you assign too much, the container is susceptible to denial of service attacks, but if you assign too little or make use of memory ballooning, performance will be impacted.
- Consider which containers reside in each network namespace, as all processes in a namespace can talk to the namespace interface.

The security of your Linux container host and its Docker configuration can be analyzed in detail by using [Docker Bench for Security](https://github.com/docker/docker-bench-security) from the [Center for Internet Security](https://www.cisecurity.org/about-us/).
For more generalized advice for your platform, they provide their benchmarks as [PDF documents](https://www.cisecurity.org/benchmark/docker/).

## Samples

We have an [Octopus Admin](https://oc.to/OctopusAdminSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at some examples of how we have used Octopus for hardening tasks.

## Getting help

We are more than happy to help if you are having trouble with self-hosting Octopus Deploy, or are concerned about the security or integrity of your Octopus installation. Don't hesitate to [get in touch](https://octopus.com/support)!

# Teams with mixed environment privileges in Octopus

Source: https://octopus.com/docs/security/users-and-teams/creating-teams-for-a-user-with-mixed-environment-privileges.md

A common scenario some users face is the need to provide full access to one environment, but only read access to the next stage. For example, the developers might be able to fully manage deployments to the development and staging environments, but only view the production deployments.

## Creating teams for users with mixed environment privileges {#MixedEnvironmentPrivileges}

### Creating the developers team {#CreatingDevelopersTeam}

Start by clicking the **Teams** tab under **Configuration** in the Octopus Web Portal. Then click **Add team**.

:::figure
![](/docs/img/security/users-and-teams/images/add-team.png)
:::

When you create the team, it is possible to change the visibility of the team to either:

- Visible only within the space we are in.
- Visible to all spaces.

For this example, we'll choose this team to only be visible in the space we are currently in.

:::figure
![](/docs/img/security/users-and-teams/images/add-team-detail.png)
:::

Give the team an appropriate name like *Developers* and click **Save**.
### Add the Project viewer role

We can now add the **Project viewer** role to all environments by clicking **Include user role** from the **User Roles** tab. This role provides read-only access to deployment processes and releases. Because we will not provide any scoping for this role, it will form the baseline permissions for this team in any scope.

:::figure
![](/docs/img/security/users-and-teams/images/add-unscoped-role.png)
:::

### Adding additional roles for a subset of environments

Since our goal is to give members of the Developers team the ability to create and deploy releases _in the Development and Staging environments only_, we can click **Include user role** again, this time adding the **Project lead** role. This role provides all the permissions of the **Project viewer** role, as well as allowing a team member to create and deploy releases.

This time, we will click on **Define Scope** and choose the environments that we would like to scope the role to, before clicking the **Apply** button.

:::figure
![](/docs/img/security/users-and-teams/images/define-scope-for-user-role.png)
:::

We can repeat this process as many times as necessary to configure the team to suit your needs. The resulting team configuration screen should now display all the different roles and their scopes so that you can review them.

:::figure
![](/docs/img/security/users-and-teams/images/add-team-with-scoped-roles.png)
:::

When you are happy with these changes, click **Save** to make them effective.

## Summary {#Summary}

The permissions system in Octopus Deploy provides a very flexible way of defining broad access to system functionality, while still allowing it to be constrained to very specific environments or projects. In this guide, we have seen how a developer can have their permissions configured so they have full access to the first few stages of the deployment lifecycle, while restricting access to the business-critical production areas.
# Multi-tenant regions

Source: https://octopus.com/docs/tenants/guides/multi-tenant-region.md

:::div{.info}
You can find the example project in this guide on our [samples instance](https://samples.octopus.app/app#/Spaces-682/projects/car-rental).
:::

This guide introduces the concept of using geographic locations as tenants for an application, as well as different upgrade rings. In this guide, we are using a fictitious Car Rental company that has three locations: Los Angeles International Airport (LAX), Des Moines, Iowa, and Norfolk, Virginia. In this scenario, the Des Moines location is used as a pilot facility, with Norfolk testing beta features. LAX is their busiest location, so it uses only stable releases.

The Car Rental company uses Azure to host the application for its stores. To minimize latency, the application is deployed to the closest Azure datacenter (known in Azure as a region): LAX uses `West US`, Des Moines uses `Central US`, and the Norfolk location uses `East US`.

The following resources have been pre-configured in Octopus:

* Four environments: Development, Test, Staging and Production.

## Creating tenant tags

Each store is hosted in a different region and plays a different role in the development lifecycle by participating in different upgrade rings. To designate which tenant (store) is in which region and upgrade ring, we define [tenant tag sets](/docs/tenants/tenant-tags). For this scenario, we'll need two tenant tag sets - one for the region, and one for the upgrade ring.

To create a tenant tag set, navigate to **Deploy ➜ Tenant Tag Sets ➜ Add Tag Set**. Give the first tag set the name **Azure Region**, and add a tag for each of the regions - **West US**, **Central US** and **East US**. Give the second tag set the name **Release Ring**, and add a tag for each of the upgrade rings - **Alpha**, **Beta** and **Stable**.
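Tag set creation can also be scripted against the Octopus REST API, in the same style as the automation script later in this guide. The sketch below only builds and prints a candidate JSON payload; the `tagsets` endpoint and the field names shown in the commented request are assumptions to verify against your instance's API documentation before use.

```shell
# Hypothetical sketch: a "Release Ring" tag set payload for the Octopus REST API.
# Field names and the tagsets endpoint are assumptions - check your instance's
# API documentation before using this for real.
payload='{"Name":"Release Ring","Tags":[{"Name":"Alpha"},{"Name":"Beta"},{"Name":"Stable"}]}'
echo "$payload"

# To create the tag set, POST the payload with your API key (not executed here):
# curl -X POST "$OCTOPUS_URL/api/Spaces-1/tagsets" \
#   -H "X-Octopus-ApiKey: $OCTOPUS_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```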
## Creating tenants

The Car Rental company has three stores which have access to the following environments:

- Des Moines (Development, Test, Staging, Production)
- Norfolk (Test, Staging, Production)
- LA International Airport (Staging, Production)

Each store is modeled as a tenant. Since Des Moines is the pilot facility, it has access to all environments, while LAX only receives stable releases. See [tenant creation](/docs/tenants/tenant-creation) for how to create your tenants.

Once you've created the tenants, you'll need to [connect them to a project and environment(s)](/docs/tenants/tenant-creation/connecting-projects). You may find it easier to connect tenants from the Car Rental project, which [allows you to connect multiple tenants at once](/docs/projects/tenants/bulk-connection).

We'll also need to associate the tags we created earlier with each tenant. In the tenant overview, click on **Manage Tags** and give each tenant the following tags:

- Des Moines
  - Azure Region: `Central US`
  - Release Ring: `Alpha`, `Beta`, and `Stable`
- Norfolk
  - Azure Region: `East US`
  - Release Ring: `Beta` and `Stable`
- LA International Airport
  - Azure Region: `West US`
  - Release Ring: `Stable`

## Creating infrastructure

The Car Rental applications consist of a PHP web UI and a MySQL database backend. To support this, an Azure App Service and a MySQL database server are provisioned in each Azure region. Using [workers](/docs/infrastructure/workers), it isn't necessary to configure the database server as a deployment target, and it's considered best practice not to do so. For the Azure App Service, see [creating an Azure Web App deployment target](/docs/infrastructure/deployment-targets/azure/web-app-targets#creating-web-app-targets) for how to create your deployment targets.

After you've added each deployment target, ensure the target is associated with its respective tenant and tags by updating the **Tenanted Deployments** and **Associated Tenants** sections.
For example, to configure the target which will deploy the Alpha version of the application to Des Moines, use the following associations:

:::figure
![](/docs/img/tenants/guides/multi-tenant-region/images/tenant-demoines-tenanted-alpha-tag.png)
:::

### Example automation script

Car Rental plans to expand in the future. Rather than having to run through the above steps to configure a tenanted target, they've automated the creation of region infrastructure using the [Octopus REST API](/docs/octopus-rest-api). This script automates the above procedure of configuring the target as tenanted and assigning it to the appropriate tenant.

:::div{.success}
The entire runbook process can be found on our [Octopus samples instance](https://samples.octopus.app/app#/Spaces-682/projects/car-rental/operations/runbooks/Runbooks-1361/overview)
:::

```powershell
# Define parameters
$baseUrl = $OctopusParameters['Global.Base.Url']
$apiKey = $OctopusParameters['Global.Api.Key']
$spaceId = $OctopusParameters['Octopus.Space.Id']
$spaceName = $OctopusParameters['Octopus.Space.Name']
$environmentName = $OctopusParameters['Octopus.Environment.Name']
$environmentId = $OctopusParameters['Octopus.Environment.Id']
$azureAccount = $OctopusParameters['Azure.Account']
$name = "#{Octopus.Deployment.Tenant.Name | Replace " "}-#{Octopus.Environment.Name}-AppService"
$resourceGroupName = "OctopusSamples-$($OctopusParameters["Octopus.Space.Name"].Replace(' ', ''))-$($OctopusParameters["Octopus.Deployment.Tenant.Name"].Replace(' ', ''))-$($OctopusParameters["Octopus.Environment.Name"])-rg"

# Get default machine policy
$machinePolicy = (Invoke-RestMethod -Method Get -Uri "$baseUrl/api/$spaceId/machinepolicies/all" -Headers @{"X-Octopus-ApiKey"="$apiKey"}) | Where-Object {$_.Name -eq "Default Machine Policy"}

# Build JSON payload
$jsonPayload = @{
Id = $null
MachinePolicyId = $machinePolicy.Id
Name = $name
IsDisabled = $false
HealthStatus = "Unknown"
HasLatestCalamari = $true
StatusSummary = $null
IsInProcess = $true EndPoint = @{ Id = $null CommunicationStyle = "AzureWebApp" Links = $null AccountId = $azureAccount ResourceGroupName = $resourceGroupName WebAppName = $name } Links = $null TenantedDeploymentParticipation = "Tenanted" Roles = @( "CarRental-Web" ) EnvironmentIds = @( $environmentId ) TenantIds = @("$($OctopusParameters['Octopus.Deployment.Tenant.Id'])") TenantTags = @() } ($jsonPayload | ConvertTo-Json -Depth 10) # Register the target to Octopus Deploy Invoke-RestMethod -Method Post -Uri "$baseUrl/api/$spaceId/machines" -Headers @{"X-Octopus-ApiKey"="$apiKey"} -Body ($jsonPayload | ConvertTo-Json -Depth 10) ``` ## Region-specific workers The SecOps team at Car Rental have a policy that when a deployment occurs, the infrastructure used must reside within the same region datacenter. Database deployments for Car Rental are handled by [workers](/docs/infrastructure/workers), so the deployment process needs to automatically select the correct worker during a deployment. ### Region worker pools To accommodate the policy, you can create worker pools for each Azure region and create a worker in each one. :::figure ![Region worker pools](/docs/img/tenants/guides/multi-tenant-region/images/region-worker-pools.png) ::: ### Worker pool variable Region-specific worker pools are only half of the equation; the deployment still needs to be configured to select the correct pool based on the tenant being deployed to. To solve this issue, you can use a [worker pool variable](/docs/projects/variables/worker-pool-variables). Just like other variables, these variables can be scoped to tenant tags. 
:::figure
![Worker pool variables](/docs/img/tenants/guides/multi-tenant-region/images/worker-pool-variables.png)
:::

### Configure steps to use a worker pool variable

The *MySQL - Create Database If Not Exists* step of the Car Rental deployment process is configured to run on a worker and use the `Project.Worker.Pool` variable:

:::figure
![](/docs/img/tenants/guides/multi-tenant-region/images/car-rental-mysql-step.png)
:::

Because the tenants for the Car Rental application have been assigned their appropriate Azure Region tag, Octopus Deploy will automatically select the correct worker when performing a deployment to the tenant.

## Deploying to a release ring

The developers for Car Rental have finished some work on a new feature and are ready to test it out in stores. Let's follow the release that was created as it's deployed to tenants in the `Beta` release ring.

Deploying a multi-tenanted application follows the same process as any other application. The one difference is that when choosing where to deploy the release, you'll also need to choose the tenants or tags to deploy the release to. To deploy to all tenants that participate in the Beta release ring, you'll want to select the **Beta** tag from the **Release Ring** tag set.

If you deploy to the Development environment, you'll notice that only Des Moines will be deployed to, as this is the only tenant available in the environment with the **Beta** tag. Promoting the same release to Test with the **Beta** tag selected again will result in Des Moines and Norfolk being chosen for deployment.

:::figure
![Beta release ring test deployment](/docs/img/tenants/guides/multi-tenant-region/images/beta-release-ring-test-deployment.png)
:::

Because we assigned the infrastructure to their respective tenants, Octopus Deploy already knows what targets to deploy to.
Deploying to Staging and Production would yield the same results as Test, as `Des Moines` and `Norfolk` are the only two locations participating in the `Beta` release ring.

# Multi-tenant SaaS applications in Octopus

Source: https://octopus.com/docs/tenants/guides/multi-tenant-saas-application.md

:::div{.info}
You can find the example project in this guide on our [samples instance](https://samples.octopus.app/app#/Spaces-682/projects/vet-clinic).
:::

This guide will introduce you to Software as a Service (SaaS) multi-tenant deployments in Octopus. In this guide, we will be deploying an application called **Vet Clinic**.

When a customer signs up to Vet Clinic, they get their own [Azure Web App](/docs/infrastructure/deployment-targets/azure/web-app-targets) and database for staging and production, choosing which region the application and data are hosted in. Testing is completed internally in development and test, then customers have their instance of the application deployed *optionally* to staging and finally to production. In addition, customers can choose to take advantage of custom features, such as custom branding, on their instance of the Vet Clinic application.

The following resources have been pre-configured in Octopus:

* Four environments: Development, Test, Staging and Production.
* The guide deploys to Azure Web Apps. These have already been pre-configured in Azure. To create some Azure resources, you can follow [this runbook guide](/docs/runbooks/runbook-examples/azure/provision-app-service/) to set up Azure Web App Services for each of the environments.

## Creating a lifecycle

The first step in this guide is to [create a new lifecycle](/docs/releases/lifecycles#create-a-new-lifecycle) for our project. Give the lifecycle a name, an optional description, and four phases. The lifecycle should ensure all releases are deployed to Development, Test, *optionally* to Staging, then lastly into Production.
## Creating the project

From the Projects page, click **Add Project** and create a project with the name **Vet Clinic**.

In the **Project Settings** of your newly created project, ensure tenanted deployments are enabled by setting the **Multi-tenant Deployments** option to either *Allow deployments with or without a tenant*, or *Require a tenant for all deployments*.

## Creating tenant tags

Customers who use Vet Clinic can choose to apply custom branding to the application. To designate which tenant (customer) has custom branding applied, we define [tenant tag sets](/docs/tenants/tenant-tags). For this scenario, we need a single tenant tag set for the custom branding.

To create a tenant tag set, navigate to **Deploy ➜ Tenant Tag Sets ➜ Add Tag Set**. Give the tag set the name **Custom Features**, and add a tag called **Branding**.

## Creating tenants

Vet Clinic has four customers: one internal customer used for development and testing, and three external customers:

- VetClinic Internal
- Capital City Pet Hospital
- Your Companion Vets
- Valley Veterinary Clinic

Each customer is modeled as a tenant and has two environments they deploy to. The internal tenant is used to deploy new releases to development and test before they are promoted to the other tenants, who deploy to staging and production. See [tenant creation](/docs/tenants/tenant-creation) for how to create your tenants.

Once you've created the tenants, you'll need to [connect them to a project and environment(s)](/docs/tenants/tenant-creation/connecting-projects). You may find it easier to connect tenants from the Vet Clinic project, which [allows you to connect multiple tenants at once](/docs/projects/tenants/bulk-connection).

For the internal tenant, we'll only need to be able to deploy Vet Clinic to the development and test environments. The customer tenants will need staging and production, but not development and test.

Each customer has the option of applying custom branding.
To ensure the deployment process only runs this step for specific tenants, we must associate each tenant with the correct tag. In the tenant overview, click on **Manage Tags** and select the branding tag for each tenant. Repeat this process for each of the tenants. ## Creating tenant variables Each customer has their own database for every environment with a unique name. To manage this, we can create a [project template variable](/docs/projects/variables/tenant-variables#project-templates) for the database name. Add a project template with the following properties: - **Name:** Tenant.Database.Name - **Label:** Database Name - **Help text:** Name of tenant database for Vet Clinic - **Control type:** Single-line text box Once the template variable is added you'll be able to provide variable values for each tenant and environment combination. ## Creating infrastructure All external customers have a production environment, with some having the optional staging environment. We only need to add a deployment target for each environment our tenant (customer) deploys to. Since our application is hosted on Azure Web Apps, we need to [add an Azure Web App deployment target](/docs/infrastructure/deployment-targets/azure/web-app-targets#creating-web-app-targets). After you've added each deployment target, ensure the target is associated with a tenant by updating the **Tenanted Deployments** and **Associated Tenants** sections. ## Creating the deployment process You can reference the [deployment process of the Vet Clinic project on our samples instance](https://samples.octopus.app/app#/Spaces-682/projects/vet-clinic/deployments/process). Note that the **Apply Custom Branding** step will only run for tenants that have the **Branding** tag associated with them. ## Creating and deploying a release Deploying a multi-tenanted application follows the same process as any other application. 
The one difference is that when choosing where to deploy the release, you'll also need to choose the tenants or tags to deploy the release to.

:::figure
![](/docs/img/tenants/guides/multi-tenant-saas-application/images/multi-tenanted-dashboard.png)
:::

Tenants will only be eligible for deployment to the environments they've been connected to. Above is an example of what a multi-tenanted application's dashboard may look like.

One of our tenants, **Capital City Pet Hospital**, has the **Branding** tenant tag associated with it. As a result, the **Apply Custom Branding** step was applicable to them, and the logs show that the step ran when we deployed to production for **Capital City Pet Hospital**.

:::figure
![](/docs/img/tenants/guides/multi-tenant-saas-application/images/deploying-release-production.png)
:::

# Multi-tenant teams

Source: https://octopus.com/docs/tenants/guides/multi-tenant-teams.md

This guide demonstrates using the tenant feature to support multiple teams developing the same application.

The Octo Pet Shop application supports multiple development teams. Each team has dedicated infrastructure so the application can be individually tested before submitting to QA for verification, deployed to staging, and finally production. In this scenario, we have a total of three teams, each configured as a tenant:

- Team Avengers
- Team Radical
- QA

The development teams can create and deploy releases to their specific tenant in the Development environment only. QA is able to deploy to the QA tenant in Test, and Operations are able to deploy to Staging and Production. Operations, in this case, is not a tenant, so the Staging and Production environments are untenanted.
## Guide contents

The following sections make up the guide:

- [Creating new tenants](/docs/tenants/guides/multi-tenant-teams/creating-new-tenants)
- [Assigning a team to a tenant](/docs/tenants/guides/multi-tenant-teams/assign-team-userrole-to-tenant)
- [Deploying to a team tenant](/docs/tenants/guides/multi-tenant-teams/deploying-team-tenant)

# Creating new tenants

Source: https://octopus.com/docs/tenants/guides/multi-tenant-teams/creating-new-tenants.md

The first step in this guide is to create the three teams needed for this scenario as tenants:

- Team Avengers
- Team Radical
- QA

To create your tenants, follow these steps:

1. Select **Tenants** from the main navigation and click the **Add tenant** button:

:::figure
![](/docs/img/shared-content/tenants/images/add-new-tenant.png)
:::

2. Select if you want to **Add blank tenant** or **Clone an existing tenant**:

:::figure
![](/docs/img/shared-content/tenants/images/blank-or-clone-tenant.png)
:::

3. Enter the name you want to use for the tenant and click the **Save** button:

:::figure
![](/docs/img/shared-content/tenants/images/creating-new-tenant.png)
:::

Repeat this process for each of the tenants, and then move on to the next section in the guide.

# Tenants sharing machine targets

Source: https://octopus.com/docs/tenants/guides/tenants-sharing-machine-targets.md

This guide introduces a pattern for deploying the same application per tenant to the same machine target, either a Tentacle or SSH connection.

A common issue with this pattern is that the [deployment mutex](https://octopus.com/docs/administration/managing-infrastructure/run-multiple-processes-on-a-target-simultaneously) can cause deployment tasks to spend a lot of time checking and waiting for the mutex to be released. This can lead to an inefficient use of the task queue, especially as the number of tenants sharing the target grows.
In this guide, we'll use a [tenant tag set](https://octopus.com/docs/tenants/tenant-tags) to represent the hosting groups and connect the tenants to the shared infrastructure. The tag set will also be used to set the `Octopus.Task.ConcurrencyTag` system variable to limit the number of tasks that can be processed concurrently per hosting group. We're essentially building a rolling deployment over our tenants.

## Guide contents

The following sections make up the guide:

- [Creating the tenant tag set](/docs/tenants/guides/tenants-sharing-machine-targets/creating-the-tenant-tag-set)
- [Assign tags to tenants](/docs/tenants/guides/tenants-sharing-machine-targets/assign-tags-to-tenants)
- [Assign tags to targets](/docs/tenants/guides/tenants-sharing-machine-targets/assign-tags-to-targets)
- [Deploying before setting the concurrency tag](/docs/tenants/guides/tenants-sharing-machine-targets/deploying-before-concurrency-tag)
- [Setting the concurrency tag](/docs/tenants/guides/tenants-sharing-machine-targets/setting-the-concurrency-tag)
- [Deploying after setting the concurrency tag](/docs/tenants/guides/tenants-sharing-machine-targets/deploying-after-concurrency-tag)
- [Summary](/docs/tenants/guides/tenants-sharing-machine-targets/summary)

# Tenanted deployments

Source: https://octopus.com/docs/tenants/tenant-creation/tenanted-deployments.md

Each project can control its interaction with tenants. By default, the multi-tenant deployment features are disabled. You can allow deployments with/without a tenant, which is a hybrid mode that is useful when you are transitioning to a fully multi-tenant project. There is also a mode where you can require a tenant for all deployments, which disables untenanted deployments for that project.
You can change the setting for tenanted deployments for a project by navigating to the project's settings and changing the selected option under **Multi-tenant Deployments**: :::figure ![](/docs/img/tenants/tenant-creation/images/multi-tenant-project-settings.png) ::: Tenanted deployments will be enabled when [connecting tenants to a project](/docs/projects/tenants/bulk-connection). ## Tenanted and untenanted deployments {#tenanted-and-untenanted-deployments} On the deployment screen, if you choose **Tenanted** from the **Tenants** option, you are performing a [**tenanted deployment**](https://octopus.com/use-case/tenanted-deployments) - deploying a release of a project to an environment for one or more tenants. :::figure ![](/docs/img/tenants/tenant-creation/images/multi-tenant-deploy-to-tenants.png) ::: When you perform a tenanted deployment, the selected tenant can impact the entire process, including which steps are run, which variable values are used, and which deployment targets are included, all depending on your deployment design. Also, note that Octopus will create a deployment per-tenant. If you select 20 tenants, Octopus will create 20 separate deployments, one for each tenant. Each of those deployments will execute in its own task. When you choose **one or more environments** to deploy to, you are performing an **untenanted deployment**. This is the same kind of deployment Octopus always performs, where you deploy a release to an environment where there is no tenant for the deployment. There will be no tenant influence on the deployment process. :::figure ![](/docs/img/tenants/tenant-creation/images/multi-tenant-deploy-multiple-environments.png) ::: When you first enable multi-tenant deployments, you won't have any tenants, and we don't want that to stop you from deploying your existing projects. 
Perhaps you are using an environment-per-tenant model and will migrate to tenants over time, so some deployments will start to have a tenant while others do not. # Tenant types Source: https://octopus.com/docs/tenants/tenant-types.md Tenants in Octopus can represent multiple use cases: - Software as a Service (SaaS) - Geographical regions or data centers - Developers, testers, or teams - Feature branches This section covers some of the common tenancy types you can model with Octopus. ## Software as a Service (SaaS) {#saas} Software as a Service (SaaS) is perhaps the most common implementation of multi-tenancy with Octopus Deploy. The SaaS model is where the same software is deployed to multiple customers (tenants). The deployment process can include items such as custom branding per tenant or scoping specific steps to tenant tags so that modules can be deployed to tenants that have purchased them. :::figure ![](/docs/img/tenants/images/saas-tenants.png) ::: Tenants using the SaaS model typically fall into three distinct categories: - Code-based - Database-based - Isolated ### Code-based multi-tenancy {#saas-code-based} Code-based tenancy is where all tenants share the same infrastructure and it's up to the code to determine what a tenant sees and has access to. Code-based multi-tenancy can be the easiest option to deploy and maintain, but there are tradeoffs. There is a risk of cross-tenant contamination. A missed "where" clause makes for a terrible day. You also cannot deploy different versions of the application to different tenants. ### Database multi-tenancy {#saas-database} Database-based tenancy is similar to code-based tenancy; however, each tenant has its own database. Deploying the application is easy, but deploying the database changes can be harder than the first approach. Database changes need to be deployed to all databases prior to an application update. Having a large number of databases will create a bottleneck in the deployment.
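The "missed where clause" risk called out for code-based tenancy above can be made concrete with a toy example (the data shape and function names are hypothetical):

```python
# In code-based tenancy every tenant's rows live in the same store, so each
# query must be scoped to the requesting tenant.
orders = [
    {"tenant": "tenant-a", "order_id": 1},
    {"tenant": "tenant-b", "order_id": 2},
]

def orders_for(tenant):
    return [o for o in orders if o["tenant"] == tenant]  # correctly scoped

def orders_for_buggy(tenant):
    return list(orders)  # the missed "where" clause: leaks every tenant's data

print(len(orders_for("tenant-a")))        # 1
print(len(orders_for_buggy("tenant-a")))  # 2 - cross-tenant contamination
```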
### Isolated multi-tenancy {#saas-isolated} Isolated is where each tenant has its own, dedicated infrastructure. This removes the risk of cross-tenant data contamination, but it complicates deployments. The deployment process itself may not change, but the application now has to be deployed per tenant. Each deployment has to configure the application instance for that tenant. This added deployment complexity buys flexibility and scalability for the application and its deployments. Tenants can now be upgraded independently and be hosted on hardware that fits their needs. Learn more about how to configure multi-tenancy for a SaaS application in Octopus with our [multi-tenant SaaS guide](/docs/tenants/guides/multi-tenant-saas-application). ## Geographical regions or data centers {#regions} Another pattern for multi-tenancy is to treat geographic regions of the same organization as tenants. Using this model, something like an e-commerce application can test out new or beta features in a specific region before releasing them to the rest of the organization. Scheduling deployments during a maintenance window is another way this pattern can be used, as each region may have different hours when they are least busy. :::figure ![](/docs/img/tenants/images/region-tenants.png) ::: Learn more about how to configure multi-tenancy for regions in Octopus with our [multi-tenant regions guide](/docs/tenants/guides/multi-tenant-region). ## Teams {#teams} Concurrent application development is another multi-tenant use case. Using tenants for teams allows an Octopus administrator to re-use the same deployment process as well as environments, giving each team the autonomy to deploy. You can assign tenant tags or dedicate specific tenants to deployment targets so each team can deploy without affecting the other.
:::figure ![](/docs/img/tenants/images/team-tenants.png) ::: Learn more about how to configure multi-tenancy for teams in Octopus with our [multi-tenant teams guide](/docs/tenants/guides/multi-tenant-teams). # Secret variables Source: https://octopus.com/docs/best-practices/platform-engineering/secret-variables.md Octoterra interacts with Octopus via the Octopus API. One of the security features built into the Octopus API is that it does not return sensitive values. This means Octoterra cannot export the values of any secrets, such as the values assigned to secret variables. There are two ways to import projects that contain secret variables: 1. Define all the values as Terraform variables when calling `terraform apply` 2. Apply the Terraform module with Octopus and use variable substitution to inject secret values during deployment ## Supplying space level resource secret values Space level resources such as accounts, certificates, feeds, Git credentials, and targets may include secret values. These values must be passed to the `terraform apply` command or set to dummy values to allow the resources to be created without knowing the secret values beforehand. Unlike project sensitive variables, it is not possible to have Octopus automatically inject these values during deployment. Teams can choose to enable the `Default Secrets to Dummy Values` option in the `Octopus - Serialize Space to Terraform` step to export a Terraform module with all secret values set to a placeholder string.
For example, this is an account exported with the `Default Secrets to Dummy Values` option enabled: ```hcl resource "octopusdeploy_aws_account" "account_aws_account" { name = "AWS Account" description = "" environments = [] tenant_tags = ["TenantType/InternalCustomer"] tenants = [] tenanted_deployment_participation = "TenantedOrUntenanted" access_key = "ABCDEFGHIJKLMNOPQRS" secret_key = "${var.account_aws_account}" depends_on = [octopusdeploy_tag_set.tagset_tenanttype,octopusdeploy_tag.tag_internalcustomer] lifecycle { ignore_changes = [secret_key] } } variable "account_aws_account" { type = string nullable = false sensitive = true description = "The AWS secret key associated with the account AWS Account" default = "Change Me!" } ``` Note the default value of the Terraform variable `account_aws_account` is `Change Me!`. All other secret values have similar placeholders. Also note that the `lifecycle` meta-argument on the account resource is set to `ignore_changes = [secret_key]`. This indicates that Terraform will not reapply the placeholder secret value if the target resource was manually updated after it was created. This means that the space level resources exported by the `Octopus - Serialize Space to Terraform` step with the `Default Secrets to Dummy Values` option enabled can be applied without having to supply all the secret values. It is then expected that the newly created Octopus resources are updated manually with the correct values. If the `Default Secrets to Dummy Values` option is disabled, no default value will be defined for the terraform variables, and you must pass values for these variables to `terraform apply`. For example: ```bash -var=account_aws_account=TheAwsAccountSecretKey ``` :::div{.hint} You may wish to define all space level secrets in a variable set in the upstream space, exclude the variable set from being exported, and pass the variable set values to the `terraform apply` argument when deploying space level resources. 
::: ## Supplying project secret variable values Octoterra exposes every secret value it exports as a Terraform variable. These variables can then be defined when running `terraform apply`. All project secret variables are defined in files with the prefix `project_variable_sensitive_`. These files then define a pair of Terraform blocks. The first is a Terraform variable: ```hcl variable "eks_octopub_frontend_secret_value_1" { type = string nullable = false sensitive = true description = "The secret variable value associated with the variable Secret.Value" default = "#{Secret.Value}" } ``` The second is the Octopus variable: ```hcl resource "octopusdeploy_variable" "eks_octopub_frontend_secret_value_1" { owner_id = "${octopusdeploy_project.project_eks_octopub_frontend.id}" name = "Secret.Value" type = "Sensitive" sensitive_value = "${var.eks_octopub_frontend_secret_value_1}" is_sensitive = true lifecycle { ignore_changes = all } } ``` The value of the secret variable is then defined by passing an argument like `-var=eks_octopub_frontend_secret_value_1=SecretValueGoesHere` to `terraform apply`. ## Injecting secret values during deployment Octoterra formats the Terraform sensitive variable default values to allow them to be replaced by Octopus. If you look at the example sensitive variable resource listed in the previous section, you'll see the default value is set to `#{Secret.Value}`. This Octostache template can be replaced by Octopus when the Terraform module is deployed with the `Apply a Terraform template` step. Note that the `Octopus - Populate Octoterra Space` step templates are based on the `Apply a Terraform template` step, and are configured to replace Octostache template syntax in files matching the pattern `**/project_variable_sensitive*.tf`. There are, however, some special considerations that must be taken into account to ensure a project can inject all secret variables when deployed downstream: 1. 
A dedicated environment must be used for deploying downstream projects (this documentation and step templates assume an environment called `Sync`) 2. All sensitive variables must have a single value 3. All sensitive variables must be available to the `Sync` environment Following these rules ensures the Octostache templates defining the default value of a sensitive variable have a single, unambiguous value injected into them when they are deployed. ### The Sync environment Dedicating an environment to the process of serializing and deploying downstream projects allows the upstream environment to scope sensitive variables such that: * They are made available when deploying downstream projects * They do not leak into any regular deployment environments This documentation and the step templates assume this environment is called `Sync`. The `Sync` environment must not appear in the lifecycle of regular deployments, which ensures any variables scoped to the `Sync` environment do not leak into regular deployments. Octoterra excludes the `Sync` environment from the variable scopes in exported projects. This ensures the downstream projects do not rely on the `Sync` environment. ### Sensitive variables with single values Any sensitive values in the upstream project must have one value assigned to them.
For example, if you had a sensitive variable for a database password, and the value was unique per environment, it must be captured as three variables: * `Dev.Database.Password` scoped to the `Dev` and `Sync` environments * `Test.Database.Password` scoped to the `Test` and `Sync` environments * `Production.Database.Password` scoped to the `Production` and `Sync` environments These three variables can then be referenced by a non-sensitive variable scoped to all three environments: * `Database.Password` set to `#{Dev.Database.Password}` and scoped to the `Dev` environment * `Database.Password` set to `#{Test.Database.Password}` and scoped to the `Test` environment * `Database.Password` set to `#{Production.Database.Password}` and scoped to the `Production` environment The deployment process can then reference `#{Database.Password}` to receive the environment-scoped sensitive variable during deployment. ### Sensitive variables made available to the Sync environment All sensitive variables must be available to the `Sync` environment. This means either: * Sensitive variables have no scope, or * Sensitive variables scoped to any environments must also be scoped to the `Sync` environment This ensures the steps deploying downstream projects have access to all sensitive variables, and can replace the Octostache template syntax in files matching the pattern `project_variable_sensitive*.tf` with the correct value. # Isolated Octopus Servers Source: https://octopus.com/docs/installation/isolated-octopus-deploy-servers.md Octopus was designed to be a single, central point of truth for application deployments. In an ideal world, you would only need one Octopus Server, and then many Tentacles.
Octopus uses a [secure communication channel](/docs/security/octopus-tentacle-communication/) when communicating with remote endpoints, and can work in both [listening and polling mode](/docs/infrastructure/deployment-targets/tentacle/windows), giving you multiple options to work around firewall issues. Of course, the real world and the ideal world don't always overlap, and you might need to have separate Octopus Servers. Common examples are: - Solution providers with an internal Octopus Server for pre-production deployments while developing a solution, and then Octopus Servers managed by the client for production deployments, on different networks - When company policies require production and pre-production environments to be on completely isolated networks, like PCI-compliant environments. Learn about [PCI Compliance and Octopus Deploy](/docs/security/pci-compliance-and-octopus-deploy). On this page, we discuss three different scenarios, and the features and options that exist for dealing with them. ## Tentacle can't be installed (offline deployments) {#IsolatedOctopusDeployservers-Tentaclecan'tbeinstalled(offlinedeployments)} > Chris's Consulting are developing an application for a government client. They're using Octopus internally to manage pre-production deployments (dev, UAT, and so on). However, the client have advised that they won't allow the consultancy to install the Tentacle agent on their production servers, nor the Octopus Server. They'd prefer the consultancy to provide them with something they can run from a USB stick. You can configure an [offline package drop deployment target](/docs/infrastructure/deployment-targets/offline-package-drop). This allows you to "deploy" to a location on the filesystem and take that deployment offline to be used elsewhere. The dropped package contains everything you need to deploy to a location offsite.
## Tentacle can be installed (Isolated Octopus Servers) {#IsolatedOctopusDeployservers-Tentaclecanbeinstalled(isolatedOctopusservers)} > A credit card processing gateway have decided to use Octopus to manage deployments. For PCI-compliance reasons, the production environment is required to be on a different network to the pre-production environments, and very little is shared. Since they own the servers, they can install the Octopus Servers and Tentacles on each environment, but they just can't share an Octopus Server between environments. In this scenario, the customer would install different instances of Octopus in both environments. To keep settings in sync and to automate between environments, refer to the [documentation on keeping instances in sync](/docs/administration/sync-instances). :::div{.success} **Friendly multi-instance licensing model** Your Octopus Deploy license includes the ability to install and configure up to three (3) separate instances of Octopus Server to support scenarios like this one. ::: ## Tentacle can be installed but communication must go via a proxy {#IsolatedOctopusDeployservers-Tentaclecanbeinstalledbutcommunicationmustgoviaaproxy} > An agency manages lots of small applications on behalf of their customers, and wants to use Octopus to manage deployments. Quite often the production environment is managed by the customer and even after being convinced to allow the Tentacle agent to be installed on their servers, they want communication to be controlled by a proxy server. In this scenario you would install Tentacle onto the customer's servers, but configure all communication to go via the customer's proxy server. Learn about [proxy support](/docs/infrastructure/deployment-targets/proxy-support) in Octopus Deploy. 
# octopus account create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-create.md Create an account in Octopus Deploy ```text Usage: octopus account create [flags] Aliases: create, new Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Project dashboard Source: https://octopus.com/docs/projects/project-dashboard.md The project dashboard gives you an at-a-glance view of your project. You’ll see a cherry-picked **selection of your releases**. You’ll also see where and when they were deployed in your [environments](/docs/infrastructure/environments), [tenants](/docs/tenants/), and [channels](/docs/releases/channels). You’ll see one of the following **dashboard views**: - [Default view](#default-view) - [Channels view](#channels-view) - [Tenants view](#tenants-view) The view you see depends on whether you’ve **added channels** to your project and whether you **allow tenanted deployments** in your project. All views allow you to filter and group your deployments to suit your needs. ## Default view This view is shown if: - You **haven’t** added channels to your project - Your project settings **don’t** allow tenanted deployments You’ll see a selection of releases and how they are deployed into environments in your **project’s lifecycle**.
From this view you can deploy releases, create releases, and filter by environments. :::figure ![Project dashboard default view](/docs/img/projects/dashboard/project-dashboard-default.jpeg) ::: ### Which releases are shown in the default view? - For each **environment**: - Your most recently deployed release (whether or not the deployment was successful) - Your next most recently deployed and successful release - In **addition** to the above: - Up to three of your most recent undeployed releases (but only if they were created after you last deployed a new release) ## Channels view This view is shown if: - You’ve **added a channel** to your project - Your project settings **don’t** allow tenanted deployments You’ll see a selection of releases for each channel and how they are deployed into environments in each **channel’s lifecycle**. From this view you can deploy releases in your channels, create releases, and filter by environments. :::figure ![Project dashboard channels view](/docs/img/projects/dashboard/project-dashboard-channels.jpeg) ::: ### Which releases are shown in the channels view? - For each **environment**: - Your most recently deployed release, whether or not the deployment was successful - Your next most recently deployed and successful release - For each **channel**: - Up to three of your most recent releases (this count includes any already shown releases) ## Tenants view This view is shown if: - Your project settings **allow** tenanted deployments You’ll see **all** of your tenants in your project and a selection of releases deployed to them. From this view you can filter by release and then deploy the selected release into each tenant’s environments. :::figure ![Project dashboard tenants view](/docs/img/projects/dashboard/project-dashboard-tenants.jpeg) ::: ### Which releases are shown in the tenants view? - For each **tenant and environment combination**: - The most recently deployed release is shown. 
- If untenanted deployments are allowed: - For each **environment**: - The most recently deployed, untenanted release is shown. ### Alternative tenants views - Group by dropdown: - When **no grouping** is selected: - The environments shown are from your **project’s lifecycle**. - When **grouping by channel** is selected: - Each channel’s environments are from that **channel’s lifecycle**. - Project settings for tenanted and untenanted deployments: - **Both tenanted and untenanted** deployments are allowed: - All untenanted deployments are summarized in the first row, labeled ‘Untenanted’. - **Only tenanted** deployments are allowed: - No untenanted deployments are shown on the dashboard. # octopus account delete Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-delete.md Delete an account in Octopus Deploy ```text Usage: octopus account delete {<name> | <id>} [flags] Aliases: delete, del, rm, remove Flags: -y, --confirm Don't ask for confirmation before deleting the account. Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus account delete octopus account rm ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # External feed triggers in Octopus Source: https://octopus.com/docs/projects/project-triggers/external-feed-triggers.md By configuring your Octopus project with container dependencies, you can now create triggers that watch those repositories for new packages, container images and Helm charts pushed by your build tool. This feature enables pull-driven deployments for Kubernetes steps. Based on tags and version rules, triggers detect if an image appears that is later than the image used in your previous release. Octopus then automatically creates a new release with all the latest container images or Helm Chart dependencies. Your existing [lifecycles](/docs/releases/lifecycles/) will then promote that release through your [environments](/docs/infrastructure/environments) or [tenants](/docs/tenants), just like it does currently. If your lifecycle uses automatic release progression, then you've just set up a Continuous Delivery pipeline without explicitly letting Octopus know about your application changes! The details of these container images and Helm Charts are already known in Octopus. This means we can use the registry locations, image names, chart names, and credentials to do this monitoring, without adding or maintaining this information anywhere else. ## Common use cases - [Automated deployments with Helm charts](/docs/deployments/kubernetes/helm-update#setting-up-referenced-images-with-helm-chart-deployments) Create releases when any referenced images used in your Helm charts are updated. - [Tracking third party Helm charts](/docs/kubernetes/tutorials/automatically-track-third-party-helm-charts) Create releases whenever a third party releases a new Helm chart. 
- [Deployments with YAML manifests](/docs/deployments/kubernetes/deploy-raw-yaml#referencing-packages) Create releases for a deployment referencing any number of images. ## Getting started {#ExternalFeedTriggers-GettingStarted} Navigate to your project and click **Triggers**. Click **Add Trigger** on the right-hand side of the page, and select **External feed**. Enter a name and description for your trigger. The name should be short, memorable, and unique. Example: Nginx Docker Update. ## Channels and lifecycles If your project contains multiple [channels](/docs/releases/channels), you have the option of selecting which channel this trigger will apply to. Any pushed packages must satisfy the selected channel's [versioning rules](/docs/releases/channels#version-rules) to trigger release creation. The releases created by the trigger will use this channel. The versions used for those releases are guided by [release versioning](/docs/releases/release-versioning) under **Settings**. They will use the rules defined there. Unlike the existing [built-in package repository triggers](/docs/projects/project-triggers/built-in-package-repository-triggers) (formerly Automatic Release Creation), you can create multiple external feed triggers per project. This can enable you to automatically create releases for multiple channels. :::figure ![Channel selection](/docs/img/projects/project-triggers/images/external-trigger-channel.png) ::: A preview of the [lifecycle](/docs/releases/lifecycles) used by the selected channel is displayed. You can modify the [lifecycle's phases](/docs/releases/lifecycles/#Lifecycles-LifecyclePhases) to have a release created and deployed to selected environments whenever a new package is pushed. ## Trigger sources Any container images or Helm Charts referenced in your project's deployment process can be selected to trigger release creation.
Please note that for [configuration as code](/docs/projects/version-control/config-as-code-reference) projects, only container images and Helm Charts in the deployment process from the **default branch** are able to be referenced. Any changes to the deployment process in other branches will not be available for use in external feed triggers. :::figure ![Package selection](/docs/img/projects/project-triggers/images/external-feed-trigger-packages.png) ::: ## History The history section contains information about the last time the trigger was evaluated and the last release that was created by the trigger. By default, triggers are evaluated every three minutes and results will be reported here. - Outcome: Tells you if there was any action taken, or if there was an error during processing. - Reason: Additional information about the outcome. - Last executed at: The time the task was run. - Discovered packages: A full list of watched packages, container images or Helm charts and the versions that were found in this execution. If the trigger has created a release, a link to the created release will be shown alongside the date it was created. :::figure ![Trigger history](/docs/img/projects/project-triggers/images/external-feed-trigger-history.png) ::: If required, more detailed information can be found in the system task logs. ## Advanced use cases Feeds or packages referenced using variable substitution can be used with external feed triggers. They will only be used by the trigger, however, if they are evaluated as either container or Helm Chart repositories, and also do not use variables that are unavailable at release creation time. For example, using an environment name variable will not work, because that value is only available at the time of deployment. If you have a chain of dependencies with your external feed packages, make sure your trigger uses the package which will be pushed to its repository last.
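The version comparison described on this page (a release is only created when a pushed tag parses as a version later than the one used in the previous release, and tags like `latest` never trigger) can be sketched as follows. The logic is illustrative only; Octopus's real evaluator supports full SemVer, including pre-release components:

```python
import re

_SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")  # simplified: no pre-release/build parts

def should_trigger(pushed_tag, current_version):
    """Return True when pushed_tag is a SemVer-style tag newer than current_version."""
    pushed = _SEMVER.match(pushed_tag)
    if pushed is None:
        return False  # non-SemVer tags such as "latest" never trigger
    current = _SEMVER.match(current_version)
    return tuple(map(int, pushed.groups())) > tuple(map(int, current.groups()))

print(should_trigger("1.10.0", "1.9.3"))   # True
print(should_trigger("latest", "1.9.3"))   # False
print(should_trigger("1.9.3", "1.10.0"))   # False
```

Note that the numeric tuple comparison correctly treats `1.10.0` as newer than `1.9.3`, which a plain string comparison would get wrong.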
## Troubleshooting {#ExternalFeedTriggers-Troubleshooting} When you are using external feed triggers there are a few reasons why a release may not be created successfully. Take some time to consider the following troubleshooting steps: 1. **Inspect the task list** for errors in the **Task** menu - Octopus will log the reason why external feed triggers failed as errors or warnings. Note that external feed triggers are system tasks, and do not display in the list by default. Use the **Show advanced filters** option and select **Include system tasks** to show them. 2. Ensure you are pushing the package to a **supported external feed type**. While capability has been verified against most major Docker providers, compatibility is not guaranteed - please contact Octopus Deploy support if you encounter any problems. 3. Ensure that packages in the external feed match the [channel rules](/docs/releases/channels#version-rules) if defined for the trigger's channel (or the default channel if your project doesn't have multiple channels). **Triggers will only create a new release if the packages match channel rules.** 4. Ensure you are pushing a **new version** of the package - Octopus will not create a release where the package has already been used for creating a release. 5. Ensure you are pushing a package that Octopus will consider as the **latest available package**. The trigger's version evaluator uses SemVer, and will not trigger off image tags such as 'latest'. 6. Make sure that the feed and package references only use variables which are **able to be evaluated at release creation time.** For example, the environment name variable is not available, because it is only known at the time of deployment. 7. If you have a **chain of package dependencies** with your external feed packages, make sure your trigger uses the package which will be **pushed to its repository last**. Otherwise, some of the packages required for the release may be missing. 8. 
As [mentioned above](/docs/projects/project-triggers/external-feed-triggers#trigger-sources), for [configuration as code](/docs/projects/version-control/config-as-code-reference) projects, only container images and Helm Charts in the deployment process from the **default branch** are able to be referenced. Any changes to the deployment process in other branches will not be available for use in external feed triggers. ## Learn more Take a look at the [Octopus Guides](https://octopus.com/docs/guides), which cover building and packaging your application, creating releases and deploying to your environments for your CI/CD pipeline. # Move the Octopus home folder and the Tentacle home and application folders Source: https://octopus.com/docs/administration/managing-infrastructure/moving-your-octopus/move-the-octopus-home-folder-and-the-tentacle-home-and-application-folders.md ## Move the Octopus home folder {#MovetheOctopusHomefolderandtheTentacleHomeandApplicationfolders-MovetheOctopusHomefolder} :::div{.problem} Make sure you have a **current backup** of your Octopus data before proceeding. You will also need your **Master Key** if you need to use the backup, so please copy that also! ::: Occasionally it may be necessary to change the location at which Octopus stores its data (called the "Octopus Home" folder) as well as the Registry Key which defines the Octopus Server instance. This can be done using the command-line on the Octopus Server. A PowerShell script showing the steps is set out below. You need to change the variables to match your Octopus installation, and you may wish to run each step separately to deal with any issues like locked files. :::div{.hint} **Administrator Rights Required** The following commands will need to be run as Administrator as they require access to the Registry. N.B. The delete-instance command will not actually delete any files, just the Registry key referring to the configuration file.
This is a safe operation which will not delete your Octopus Server data. ::: ```powershell $oldHome = "C:\Octopus" $newHome = "C:\YourNewHomeDir" $octopus = "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" $newConfig = $newHome + "\OctopusServer.config" & "$octopus" service --stop mv $oldHome $newHome & "$octopus" delete-instance --instance=OctopusServer & "$octopus" create-instance --instance=OctopusServer --config=$newConfig & "$octopus" configure --home="$newHome" & "$octopus" service --start ``` ## Move the Tentacle home and application folders {#MovetheOctopusHomefolderandtheTentacleHomeandApplicationfolders-MovetheTentacleHomeandApplicationfolders} Occasionally it may be necessary to change the location at which a Tentacle stores its data (called the "Tentacle Home" and "Tentacle Applications" folder) as well as the Registry Key which defines the Tentacle instance. This can be done using the command-line on the machine where the Tentacle is installed. A PowerShell script showing the steps is set out below. You need to change the variables to match your Tentacle installation, and you may wish to run each step separately to deal with any issues like locked files. Default Tentacle instances are named *Tentacle*. You can find your instance names by running the [Tentacle.exe list-instances](/docs/octopus-rest-api/tentacle.exe-command-line/list-instances) command. :::div{.hint} **Administrator rights required** The following commands will need to be run as Administrator as they require access to the Registry. N.B. The delete-instance command will not actually delete any files, just the Registry key referring to the configuration file. This is a safe operation which will not delete your Tentacle data. ::: ```powershell ##Config## $instance = "InstanceName" #Name of the instance. $oldHome = "C:\Octopus\$instance" #Current home of the instance. $newHome = "C:\NewHome\$instance" #New home path for the instance. 
$appFolder = "Applications" #Name of the folder being used for applications. ##Process## $oldConfig = Get-Item "$oldHome\*.config" $newConfig = "$newHome\$($oldConfig.name)" $tentacleExe = "C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" # Stop the current Tentacle service & "$tentacleExe" service --instance $instance --stop #Copy Tentacle configuration and application files from OldHome to NewHome new-item $newHome -type directory -Force $source = $oldHome + "\*" copy-item -Recurse $source $newHome # Delete the current Tentacle instance & "$tentacleExe" delete-instance --instance $instance # Create the new Tentacle instance with its new configuration file & "$tentacleExe" create-instance --instance $instance --config $newConfig # Configure the Tentacle's Home folder & "$tentacleExe" configure --home $newHome --instance $instance # Configure Tentacle's Application folder. Next line assumes app folder is a child of home folder $appFolder = "$newHome\$appFolder" & "$tentacleExe" configure --app $appFolder --instance $instance # Start the new Tentacle service & "$tentacleExe" service --instance $instance --start write-host "The source folder $oldHome was not removed. You need to do that manually after testing." -ForegroundColor yellow ``` # Debugging PowerShell scripts on remote machines with Octopus Source: https://octopus.com/docs/deployments/custom-scripts/debugging-powershell-scripts/debugging-powershell-scripts-on-remote-machines.md This guide provides details on how to debug PowerShell scripts while they are being deployed by Octopus Deploy to remote machines. This guide demonstrates connecting via IP address to an untrusted machine on a public network. Some steps may be omitted when connecting to machines on the same subnet or domain. ## Configuring PowerShell remoting PowerShell remoting must be enabled on the remote machine and configured for SSL and the trust established between the remote machine and the debugging machine. 
To enable PowerShell remoting on the remote machine: ```powershell Enable-PSRemoting -SkipNetworkProfileCheck -Force ``` To establish trust between the debugging machine and the remote machine, let's configure remoting over SSL. The remote machine requires a certificate, an HTTPS listener, and a firewall rule to allow incoming requests on port 5986: ```powershell $dnsName = "55.555.55.555" # The IP address you are using to connect to the machine $certificate = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "$dnsName" New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $certificate.Thumbprint -Force New-NetFirewallRule -DisplayName "Windows Remote Management (HTTPS-In)" -Name "Windows Remote Management (HTTPS-In)" -Profile Any -LocalPort 5986 -Protocol TCP ``` We also need to export the certificate so that it can be trusted by the debugging machine: ```powershell Export-Certificate -Cert $certificate -FilePath "C:\remoting-certificate.cer" ``` In order to connect to the remote machine, the debugging machine must add the certificate to its Trusted Root Certification Authorities. Copy the exported certificate (`remoting-certificate.cer`) from the remote machine to the machine that will be doing the debugging. Import the certificate into Trusted Root Certification Authorities: ```powershell Import-Certificate -Filepath "C:\remoting-certificate.cer" -CertStoreLocation "Cert:\LocalMachine\Root" ``` ## Setting up Octopus for PowerShell debugging Create a project with a "Run a Script" step that contains some PowerShell. For example: ```powershell $sampleDebugValue = 45 Write-Host "$sampleDebugValue" ``` PowerShell debugging is enabled by adding the project variable `Octopus.Action.PowerShell.DebugMode` and setting the value to `true`. Now, create a release and deploy it. The deployment will pause while waiting for a PowerShell debugger to attach. 
## Starting the PowerShell debug session The deployment in Octopus outputs the information required to start debugging the PowerShell script. If we have name resolution configured we could connect to the machine using the name indicated by Octopus, but in this instance we will use the machine's IP address. First we must start a session with the remote computer. Open PowerShell ISE and run the following: ```powershell $ipAddress = "55.555.55.555" # The IP address of the remote machine $credentials = Get-Credential Enter-PSSession -ComputerName $ipAddress -UseSSL -Credential $credentials ``` Once the session is established we can connect to the PowerShell process and start debugging. The information provided in the Octopus deployment log can be used here: ```powershell Enter-PSHostProcess -Id 3720 Debug-Runspace -Id 2 ``` PowerShell ISE will open a window showing the script currently executing on the remote machine. You can step through the script using `F10` to step over and `F11` to step in. :::figure ![Debugging remote PowerShell scripts](/docs/img/deployments/custom-scripts/debugging-powershell-scripts/debugging-powershell-scripts-debug.png) ::: When you are finished debugging, run to the end of the script and the deployment will be complete. 
## Learn more - [PowerShell blog posts](https://octopus.com/blog/tag/powershell/1) # octopus account gcp Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-gcp.md Manage Google Cloud accounts in Octopus Deploy ```text Usage: octopus account gcp [command] Available Commands: create Create a Google Cloud account help Help about any command list List Google Cloud accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus account gcp [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account gcp list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Capture a crash dump Source: https://octopus.com/docs/support/capture-a-crash-dump.md When something goes wrong in Octopus we may ask you to provide a Crash Dump to help us diagnose the problem. Most Octopus Servers or agents will be in a production environment, so you may not want to install any software. Windows comes with the Windows Error Reporting (WER) service, which you can configure to automatically record dumps of certain processes. (If you don't mind installing software, you can also use [DebugDiag](http://blogs.msdn.com/b/chaun/archive/2013/11/12/steps-to-catch-a-simple-crash-dump-of-a-crashing-process.aspx) [download: [Microsoft Debug Diagnostic Tool](https://www.microsoft.com/en-us/download/details.aspx?id=49924)], but this article focuses on WER.) 
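WER local dumps are controlled by registry values under the `LocalDumps` key. If you prefer scripting the change over importing a .reg file, the same settings described in this article can be applied from an elevated PowerShell prompt. This is a sketch, with `C:\Dumps` used as an example dump location:

```powershell
# Create the LocalDumps key for the Octopus Server process (run from an elevated prompt)
$key = "HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\Octopus.Server.exe"
New-Item -Path $key -Force | Out-Null

# DumpFolder: where dumps are written; DumpCount: keep at most 2 dumps; DumpType 2: full dump
New-ItemProperty -Path $key -Name "DumpFolder" -Value "C:\Dumps" -PropertyType ExpandString -Force | Out-Null
New-ItemProperty -Path $key -Name "DumpCount" -Value 2 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $key -Name "DumpType" -Value 2 -PropertyType DWord -Force | Out-Null
```

These values match the .reg file shown in this article; remember to create the dump folder if it doesn't already exist.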
To enable crash dumps for Octopus, you'll need to add a registry key for the Octopus process. The following code can be saved to a .reg file to automatically update the necessary registry keys. **RecordOctopusDump.reg** ``` Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\Octopus.Server.exe] "DumpFolder"=hex(2):43,00,3a,00,5c,00,44,00,75,00,6d,00,70,00,73,00,00,00 ; C:\Dumps "DumpCount"=dword:00000002 ; 2 (At most 2 dumps will be saved to disk) "DumpType"=dword:00000002 ; 2 (Full Dump) ``` If you'd like to check the other options for these settings, refer to the [Microsoft documentation](http://msdn.microsoft.com/en-us/library/windows/desktop/bb787181(v=vs.85).aspx). After you run the .reg file, if you want to check the entries in regedit, it should look like this: :::figure ![](/docs/img/support/images/3278137.png) ::: When a crash occurs, dumps will be written to `C:\Dumps`, named something similar to `Octopus.Server.exe.6127.dmp`. Zip the dump and upload it to the link that we'll have provided you. # octopus account gcp create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-gcp-create.md Create a Google Cloud account in Octopus Deploy ```text Usage: octopus account gcp create [flags] Aliases: create, new Flags: -d, --description string A summary explaining the use of the account to other users. -D, --description-file file Read the description from file -e, --environment stringArray The environments that are allowed to use this account -K, --key-file string The json key file to use when authenticating against Google Cloud. -n, --name string A short, memorable, unique name for this account. 
Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account gcp create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Process dumps Source: https://octopus.com/docs/support/process-dumps.md For some problems, such as unresponsive servers/Tentacles and hung tasks, providing a dump or dump analysis of the Octopus Server and/or Tentacle process may speed up diagnosis and resolution. ## Create a process dump A process dump consists of all the memory the process is currently using. This includes deployment variables, credentials, and certificates. Creating a process dump will pause the process for anywhere from a few seconds to a few minutes, depending on the amount of memory in use and the disk speed. :::div{.hint} Due to the nature of data contained in a process dump, we take great care in handling these files and will provide a secure upload facility. We will also delete them as soon as they have been analyzed. ::: ## Creating process dumps on Windows If you are capturing a process dump on your Octopus Server, follow the instructions below: 1. Right-click on the task bar and select **Task Manager**. 1. Select the **Details** tab. 1. Find the relevant process, in this case **Octopus.Server.exe**. 1. Right-click on it and select **Create dump file**. 1. Note where the file is saved (generally in your temp folder). 
:::div{.warning} When capturing a process dump for **Tentacle.exe**, please also capture any child **Calamari.exe** processes. To do this, follow the process below. ::: We recommend using [Process Explorer](https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer) to capture child processes associated with Tentacle.exe, such as the Calamari.exe process. To capture child processes for Tentacle: 1. On the Tentacle that is having the issues, download and install [Process Explorer](https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer) from Microsoft. 1. Once installed, **run the program as an administrator** by right-clicking on the relevant procexp.exe file and selecting `Run as Administrator`. 1. Once opened, make sure the process tree is shown by clicking on the `View` menu on the top navigation bar and selecting `Show Process Tree`. 1. You will notice the program looks similar to Task Manager. Navigate to the **tentacle.exe** process in the list of tasks. 1. Run the process that is causing the issue/fault in Octopus (i.e. run the deployment or task that is failing). 1. Once that task is running in Octopus, go back to Process Explorer on the Tentacle; you will see that the tentacle.exe process can be expanded to show the Calamari process. This can be expanded further to see the powershell.exe processes associated with both tentacle.exe and calamari.exe. 1. To capture the dump file for calamari.exe, make sure your tentacle.exe process is expanded in Process Explorer and find calamari.exe. 1. Right-click on it and select `Create Dump` and then `Create Full Dump`. 1. Note where the file is saved (generally in your temp folder). :::div{.hint} Sometimes the deployment in Octopus can complete or error out too quickly, which means you do not get a chance to capture the failing process. 
If this is happening [get in touch with us](https://octopus.com/support) and we can suggest some workarounds to make the process last longer so you can capture the dump correctly. ::: :::figure ![Process explorer capturing child processes from Tentacle](/docs/img/support/images/processexplorer.png) ::: ## Creating process dumps on Linux To aid in debugging, process dumps on Linux should be captured using `createdump` or the `dotnet-dump` tool. Debuggers such as `gcore` and `gdb` produce dumps that are not portable, leading to difficulties diagnosing issues. `createdump` is included with installations of the dotnet runtime. You will first need to install the dotnet runtime by [following the instructions on the dotnet download page](https://dotnet.microsoft.com/en-us/download). Here is an example of a manual installation: ```bash wget https://builds.dotnet.microsoft.com/dotnet/Runtime/8.0.15/dotnet-runtime-8.0.15-linux-x64.tar.gz DOTNET_FILE=dotnet-runtime-8.0.15-linux-x64.tar.gz export DOTNET_ROOT=~/.dotnet mkdir -p "$DOTNET_ROOT" && tar zxf "$DOTNET_FILE" -C "$DOTNET_ROOT" export PATH=$PATH:$DOTNET_ROOT ``` Once the dotnet runtime has been installed, locate the installation directory by running `dotnet --list-runtimes`. `createdump` is located in the runtime directory, but requires the id of the process you intend to create a dump for. Use `ps ax` to list the running processes on the machine, taking note of the process PID. To find Calamari, for example, use `ps ax | grep Calamari`. Then, capture a dump of the process using `createdump`: ```bash > dotnet --list-runtimes Microsoft.NETCore.App 8.0.15 [/home/ec2-user/.dotnet/shared/Microsoft.NETCore.App] > ps ax | grep Calamari 14220 ? 
Ssl 0:00 /home/ec2-user/.octopus/ec2-3-25-60-213-ap-southeast-2-compute-amazonaws-com/Tools/Calamari.linux-x64/27.3.5-hotfix0001/Calamari run-script 14266 pts/0 S+ 0:00 grep --color=auto Calamari > /home/ec2-user/.dotnet/shared/Microsoft.NETCore.App/8.0.15/createdump 14220 [createdump] Gathering state for process 14220 Calamari [createdump] Writing minidump with heap to file /tmp/coredump.14220 [createdump] Written 157319168 bytes (38408 pages) to core file [createdump] Target process is alive [createdump] Dump successfully written in 6296ms ``` ## Dump file analysis This process creates an analysis file from a process dump file. This analysis file contains a limited set of information outlining the current state of the application. This file can contain connection strings, Tentacle thumbprints, project, step and machine names. It should not contain sensitive variables or certificates. For our purposes, it contains information about which threads are running and where they are in the code. :::div{.hint} This process can be performed on a different computer to the one the dump file was captured on. ::: 1. Download and install the [Debug Diagnostics tools](https://www.microsoft.com/en-us/download/details.aspx?id=49924) from Microsoft. 1. Run `DebugDiag Analysis` from the start menu. 1. Check `CrashHangAnalysis`. 1. Click `Add Data Files` and select the dump file. 1. Click `Start Analysis`. 1. Wait. The result will open in Internet Explorer. Note the location of the file, which is usually in `\DebugDiag\Reports`. # Report on deployments using SQL Source: https://octopus.com/docs/administration/reporting/report-on-deployments-with-sql.md If your reporting tool of choice can't consume the XML feed, you can query the SQL table directly. Octopus maintains a **DeploymentHistory** table, with the exact same information that the XML feed exposes. This may work better for tools like **SQL Server Reporting Services**. 
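For example, a simple report of deployment activity per project and environment might look like the query below. The column names used here (`ProjectName`, `EnvironmentName`, `TaskState`, `Created`) are shown as an illustration only; inspect the **DeploymentHistory** table in your own instance before building reports on it:

```sql
-- Deployment and failure counts per project/environment for the last 30 days.
-- Column names are assumptions; verify them against your DeploymentHistory table.
SELECT
    ProjectName,
    EnvironmentName,
    COUNT(*) AS TotalDeployments,
    SUM(CASE WHEN TaskState = 'Failed' THEN 1 ELSE 0 END) AS FailedDeployments
FROM dbo.DeploymentHistory
WHERE Created >= DATEADD(DAY, -30, GETUTCDATE())
GROUP BY ProjectName, EnvironmentName
ORDER BY ProjectName, EnvironmentName;
```

Wrapping a query like this in a view of your own gives your reports an abstraction layer if columns are added later.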
The main benefit of this approach is that it supports Spaces in reporting by default. :::div{.warning} This approach is only supported for self-hosted Octopus. For Octopus Cloud you'll need to use the API. ::: :::figure ![](/docs/img/administration/reporting/images/sql.png) ::: A few notes about accessing the table directly: - We may add additional columns in the future. - We'll try not to change existing columns, but just in case, you may wish to set up your own View in SQL Server to provide an abstraction layer. - Since you're accessing the data directly, be aware that Octopus team permissions won't apply. - Don't join with any other tables - these are much more likely to change in future, so you're on your own if you do! The table is completely denormalized, and should have any information that you might need to report on. ## How often is the data updated? The data in the table (and exposed by the feed) updates every time any data related to deployments changes. This includes changes such as changing the name of a project or environment, or changing the version number of a release. The data should always be up-to-date; however, if Octopus is performing many operations, the data could be stale for up to several minutes. Also note that the data: - Isn't deleted by retention policies, so you can report on historical deployments even if retention policies clean them up. - Isn't deleted when a project/environment is deleted. ## What about information on concurrent users, web front-end performance, etc.? You may want to look at [enabling HTTP logging](/docs/administration/managing-infrastructure/performance/enable-web-request-logging). ## Learn more - [Reporting blog posts](https://octopus.com/blog/tag/reporting/1) # Certificate chains in Octopus Source: https://octopus.com/docs/deployments/certificates/certificate-chains.md Uploaded PFX or PEM files may contain a certificate chain, i.e. a certificate with a private key, plus one or more authority certificates. 
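To check locally whether a PFX file you are about to upload contains a chain, you can list the certificates bundled inside it. A minimal PowerShell sketch, assuming a file at `C:\certs\example.pfx` with the password `my-pfx-password` (both placeholders):

```powershell
# Load every certificate in the PFX file into a collection
$collection = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$collection.Import("C:\certs\example.pfx", "my-pfx-password", "DefaultKeySet")

# More than one entry means the file contains a chain
$collection | Format-Table Subject, Issuer, NotAfter
```

If the output lists a single certificate whose Subject equals its Issuer, the file holds a lone self-signed certificate rather than a chain.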
Certificates which contain a chain are indicated by a chain icon on the certificate card, as shown below: :::figure ![](/docs/img/deployments/certificates/images/certificate-chain-card.png) ::: The details page will show the details of all certificates in the chain: :::figure ![](/docs/img/deployments/certificates/images/certificate-chain-details.png) ::: ## Importing certificate chains When a certificate-chain is imported to one of the Windows Certificate Stores (either via the [Import Certificate Step](/docs/deployments/certificates/import-certificate-step) or by using the certificate in an IIS HTTPS Binding) the authority certificates will be automatically imported into the CA or Root stores (Root if the authority certificate is self-signed, CA otherwise as it is an intermediate authority). _Note:_ Authority certificates will always be imported to the LocalMachine location, even if the subject certificate is imported to a user-specific location. This is because importing to the Root store for a specific user results in a security-prompt being displayed, which obviously doesn't work with automated deployments. ## Downloading certificate chains When downloading a certificate containing a chain, the behavior depends on the format being downloaded. - `Original`: The downloaded file will be exactly what was originally uploaded. - `PFX`: The entire chain will be included in the exported file. - `DER`: Only the subject certificate will be included. DER files never contain chains. - `PEM`: Download-dialog provides options to include: - Primary Certificate - Primary and Chain Certificates - Chain Certificates Only ![Download Chain in PEM format dialog](/docs/img/deployments/certificates/images/download-pem-chain.png) # Automatic approvals Source: https://octopus.com/docs/deployments/databases/common-patterns/automatic-approvals.md We recommend including DBAs in the automated database deployment process. 
If something goes wrong in `Production` at 1 AM, they are the ones who are paged. The [manual approvals documentation](/docs/deployments/databases/common-patterns/manual-approvals) walks through how to include DBAs in the deployment process in Octopus Deploy. The concern with manual approvals is scalability. It is common for us to see a 15-to-1 or 20-to-1 ratio of developers to DBAs. The number of approvals a DBA is involved in will grow exponentially as more teams and projects automate their database deployments. Schema change commands are the biggest concern. Thankfully, the SQL language defines those commands. Most database deployment tools (Flyway, DBUp, RoundhousE, Redgate, or DacPac) generate *what-if* or *dry-run* reports. It is possible to write a script that looks for specific commands and, when one is found, triggers a manual intervention. The format of the *what-if* report depends on the tool. The general auto-approval process looks something like this: 1. Generate the *what-if* report using the database deployment tooling. Save the report to a shared location for easier access. 2. Run a script to: 1. Open up the *what-if* report. 2. Loop through a list of schema change commands, such as `Drop Table`, `Create Table`, `Drop Column`, `Alter Table`, `Drop User`. 3. If a schema change command is found, set a DBA Approval Required [output variable](/docs/projects/variables/output-variables) to `True`. 4. If no schema change command is found, set the same DBA Approval Required [output variable](/docs/projects/variables/output-variables) to `False`. 3. Notify the approvers when that DBA Approval Required [output variable](/docs/projects/variables/output-variables) is `True` using [run conditions](/docs/projects/steps/conditions/#run-condition). 4. 
Pause for a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals/) when that DBA Approval Required [output variable](/docs/projects/variables/output-variables) is `True` using [run conditions](/docs/projects/steps/conditions/#run-condition). 5. Deploy database changes. 6. Send notifications on the status of deployments. :::figure ![Image showing an example auto approve deployment process](/docs/img/deployments/databases/common-patterns/images/auto_approve_deployment_process.png) ::: ## Output variables and run conditions We recommend creating a variable in the project to reference the output variable from the auto-approval step. A variable referencing the output variable makes it easier to change if the auto-approval step name changes. For instance, `#{Octopus.Action[Auto-Approve Delta Report].Output.DBAApprovalRequired}`. :::figure ![Image showing the auto approve output variable](/docs/img/deployments/databases/common-patterns/images/auto_approve_output_variable_variable.png) ::: Creating a variable also makes it much easier to use in a [run condition](/docs/projects/steps/conditions/#run-condition): :::figure ![](/docs/img/deployments/databases/common-patterns/images/auto_approve_run_conditions.png) ::: :::div{.hint} We recommend setting the output variable to `True` or `False` because that is what the [run conditions](/docs/projects/steps/conditions/#run-condition) look for. If you need an if/then statement, include it in the auto-approval script. ::: ## Logging We recommend the auto-approval step write logs using `Write-Host` for PowerShell or `echo` for Bash scripts. That output is captured by Octopus Deploy and can be viewed in the `Task Log` tab on the deployment screen. When debugging scripts, the more logging, the better. For important logs, such as when a command is found, leverage the [write highlight](/docs/deployments/custom-scripts/logging-messages-in-scripts) command. 
That is a custom command Octopus Deploy injects into the deployment process. Using that command will show the message on the task summary screen. :::figure ![](/docs/img/deployments/databases/common-patterns/images/auto_approve_write_highlight.png) ::: ## Example View a working example on our [samples instance](https://samples.octopus.app/app#/Spaces-106/projects/dbup-sql-server-worker-pool-variable-type/deployments/process). # Recommended database permissions Source: https://octopus.com/docs/deployments/databases/configuration/permissions.md When deciding on the permissions required to automate your database deployments, you'll need to find the balance between functionality and security. Below are some considerations around permissions and a couple of recommendations. ## Application account permissions Applications should run using a unique service account with the least amount of rights. Each environment for each application should have a unique user account. Having separate service accounts for each environment can make automated database deployments very tricky, especially when the accounts require a username and password. None of the user accounts should be stored in source control; instead, assign permissions to roles, and attach the correct service account for the environment to that role. ## Deployment permission considerations The service account used to make schema changes requires elevated permissions. Because of that, create a special user account to handle database deployments. Do not use the same service account used by the application. If an application's service account can modify the schema and it was ever compromised, it could do quite a bit of damage. The level of elevated permissions is up to you. More restrictions placed on the deployment service account mean more manual steps. Deployments will fail due to missing or restricted permissions. 
Octopus will provide the error message to fix the issue, but it will need manual intervention to resolve. It is up to you to decide which approach is best. ## Recommendations Following DevOps principles, everything that can be automated should be automated. That includes creating databases, user management, schema changes, and data changes. Octopus Deploy plus the third-party tool of your choice can handle that. The deployment service account being used for database deployments should have ownership of the database. Not ownership of the server, just the database. That level of permission does open the door to concerns about giving the process too much power. Please read this post on [how to add manual interventions and auto-approvals](https://octopus.com/blog/autoapprove-database-deployments) to your database deployment process. # SQL Server permissions Source: https://octopus.com/docs/deployments/databases/sql-server/permissions.md When deciding on the permissions required to automate your database deployments, you need to find the balance between functionality and security. Below are some considerations for permissions and a couple of recommendations. ## Application account permissions Applications should run under their own service accounts with the least amount of rights. Each environment for each application should have its own service account. Having separate service accounts for each environment can make automated database deployments tricky. None of the service accounts should be stored in source control; instead, assign permissions to roles, and attach the correct user for the environment to that role. ## Deployment permission considerations The account used to make schema changes requires elevated permissions. Because of that, create a special service account to handle database deployments. Do not use the same account used by an application. 
If the application service account has permissions to modify the schema, and it is compromised, it could cause a lot of damage. The level of elevated permissions is up to you; more restrictions placed on the deployment account mean more manual steps. Deployments will fail due to missing or restricted permissions, but Octopus will provide the error message to help you fix the issue; however, it will need manual intervention to resolve. You need to decide which approach suits your scenario. First, decide what the deployment account should have permission to do at the server level. From there, research which server roles are applicable. Microsoft has provided a chart of the server roles and their specific permissions. :::figure ![](https://docs.microsoft.com/en-us/sql/relational-databases/security/authentication-access/media/permissions-of-server-roles.png?view=sql-server-ver15) ::: Next, decide what permissions the deployment account can have at the database level. Again, Microsoft has provided a chart of the database roles and their specific permissions. :::figure ![](https://docs.microsoft.com/en-us/sql/relational-databases/security/authentication-access/media/permissions-of-database-roles.png?view=sql-server-ver15) ::: With those two charts in mind, below are some recommended permission sets. ## Fully automated database deployments permission recommendation Following DevOps principles, everything that can be automated should be automated. This includes creating databases, user management, schema changes, and data changes. Octopus Deploy plus the third-party tool of your choice can handle that. The deployment account should have these roles assigned: - Server permissions: - `dbcreator`: Permission to create new databases. - `securityadmin`: Permission to create new users and grant them permissions (you will need a check in place to ensure it doesn't grant random people sysadmin roles). 
- Database Permissions: - `db_ddladmin`: Permission to run any Data Definition Language (DDL) command in a database. - `db_datareader`: Permission to read all the data from all user tables. - `db_datawriter`: Permission to add, delete, or change data from all user tables. - `db_backupoperator`: Permission to backup the database. - `db_securityadmin`: Permission to modify role membership and manage permissions. - `db_accessadmin`: Permission to add or remove access to the database for logins. - Grant `View any definition`. Be sure to assign the deployment account those database roles in the model database. That is the system database used by SQL Server as a base when a new database is created. This means the deployment account will be assigned those roles in every database created going forward. ## Automated database deployments without security admin permission recommendation {#SQLServerdatabases-ManualUsers} Security admins should be treated the same as system admins, as they can grant permissions at the server level. For security purposes, it is common to see that role restricted. In that case, below are the recommended permissions. This account can do everything except create a new SQL Login. - Server permissions: - `dbcreator`: Permission to create new databases. - Database Permissions: - `db_ddladmin`: Permission to run any Data Definition Language (DDL) command in a database. - `db_datareader`: Permission to read all the data from all user tables. - `db_datawriter`: Permission to add, delete, or change data from all user tables. - `db_backupoperator`: Permission to backup the database. - `db_securityadmin`: Permission to modify role membership and manage permissions. - `db_accessadmin`: Permission to add or remove access to the database for logins. - Grant `View any definition`. ## No database creation or user creation, everything else automated permission recommendation If granting that level of access is not workable or allowed, we recommend the following. 
It requires SQL users to be manually created and the database to already exist. The process can add existing users to databases as well as deploy everything.

- Database permissions:
  - `db_ddladmin`: Permission to run any Data Definition Language (DDL) command in a database.
  - `db_datareader`: Permission to read all the data from all user tables.
  - `db_datawriter`: Permission to add, delete, or change data in all user tables.
  - `db_backupoperator`: Permission to back up the database.
  - `db_securityadmin`: Permission to modify role membership and manage permissions.
  - `db_accessadmin`: Permission to add or remove access to the database for logins.
  - Grant `View any definition`.

## Manual user creation both server and database permission recommendation

Here are the most restrictive permissions for automating database deployments. No new database users can be created. No new schemas can be created. Users cannot be added to roles. Table and stored procedure changes can be made.

- Database permissions:
  - `db_ddladmin`: Permission to run any Data Definition Language (DDL) command in a database.
  - `db_datareader`: Permission to read all the data from all user tables.
  - `db_datawriter`: Permission to add, delete, or change data in all user tables.
  - `db_backupoperator`: Permission to back up the database.
  - Grant `View any definition`.

# Rolling back an NGINX deployment

Source: https://octopus.com/docs/deployments/patterns/rollbacks/nginx.md

[NGINX](https://nginx.org) is a popular web server for Linux deployments. This guide will cover how to roll back a [Node.js](https://nodejs.org/en/) application accessed through NGINX. In this example, NGINX is a reverse proxy to a Node.js service running as a [systemd service](https://wiki.debian.org/systemd/Services). The application has two components:

- Database
- Website

Rolling back the database is out of scope for this guide.
This [article](https://octopus.com/blog/database-rollbacks-pitfalls) describes reasons and scenarios in which rolling back a database could result in data loss or incorrect data. This guide assumes that there are no database changes or the changes are backward compatible. Because the database changes are out of scope for rollbacks, the database package will be *skipped* during the rollback process.

:::div{.hint}
While this guide is for Node.js, the same process can be used for any framework, language, or platform NGINX supports.
:::

## Existing deployment process

The existing deployment process is:

1. Deploy to MongoDB.
1. Deploy to NGINX.
1. Verify Application.
1. Notify Stakeholders.

:::figure
![Original deployment process for Node.js application](/docs/img/deployments/patterns/rollbacks/nginx/images/rollback-nginx-original-process.png)
:::

:::div{.success}
View the deployment process on our [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/01-octofx-original/deployments/process). Please log in as a guest.
:::

## Zero-configuration rollback

The easiest way to roll back to a previous version is to:

1. Find the release you want to roll back to.
2. Click the **REDEPLOY** button next to the environment you want to roll back.

That redeployment will work because a snapshot is taken when you create a release. The snapshot includes:

- Deployment Process
- Project Variables
- Referenced Variable Sets
- Package Versions

Re-deploying the previous release will re-run the deployment process as it existed when that release was created. By default, the deploy package steps (such as deploy to IIS or deploy a Windows Service) will extract to a new folder each time a deployment is run, perform the [configuration transforms](/docs/projects/steps/configuration-features/structured-configuration-variables-feature/), and [run any scripts embedded in the package](/docs/deployments/custom-scripts/scripts-in-packages).
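The REDEPLOY button can also be automated: it corresponds to creating a new deployment for an existing release through the Octopus REST API. Below is a minimal Python sketch that only builds the HTTP request, so you can see the shape of the call. The server URL, API key, and IDs are placeholders, and you should verify the `/api/{space}/deployments` route against the REST API reference for your Octopus version:

```python
import json
from urllib import request

# Hypothetical values - replace with your own instance, space, and IDs.
OCTOPUS_URL = "https://octopus.example.com"
API_KEY = "API-XXXXXXXX"  # placeholder - never hard-code real keys


def build_redeploy_request(space_id: str, release_id: str, environment_id: str) -> request.Request:
    """Build the HTTP request that re-deploys an existing release.

    Creating a deployment for a release that was already deployed to the
    environment is what the REDEPLOY button does in the portal.
    """
    payload = {"ReleaseId": release_id, "EnvironmentId": environment_id}
    return request.Request(
        f"{OCTOPUS_URL}/api/{space_id}/deployments",
        data=json.dumps(payload).encode(),
        headers={"X-Octopus-ApiKey": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )


req = build_redeploy_request("Spaces-1", "Releases-123", "Environments-42")
print(req.full_url)      # the endpoint the deployment is created against
print(req.get_method())  # POST
```

To actually submit the request you would pass it to `urllib.request.urlopen(req)` (or use a client library); the sketch stops short of that so it can be run safely anywhere.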
:::div{.hint}
Zero-configuration rollbacks should work for most of our customers. However, your deployment process might need a bit more fine-tuning. The rest of this guide is focused on disabling specific steps during a rollback process.
:::

## Rollback process

For most rollbacks, the typical strategy is to skip the database step while re-deploying the Node.js application website. In addition, a rollback indicates something is wrong with a release, so we'd want to [prevent that release from progressing](/docs/releases/prevent-release-progression). The updated deployment process will be:

1. Calculate Deployment Mode
1. Deploy to MongoDB (skip during rollback)
1. Deploy to NGINX
1. Block Release Progression
1. Verify the Application
1. Notify stakeholders

:::figure
![Simple rollback process for the NGINX deployment](/docs/img/deployments/patterns/rollbacks/nginx/images/rollback-nginx-simple-rollback.png)
:::

:::div{.success}
View the deployment process on our [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/bestbags-rollback/deployments/process). Please log in as a guest.
:::

### Calculate Deployment Mode

Calculate Deployment Mode is a [community step template](https://library.octopus.com/step-templates/d166457a-1421-4731-b143-dd6766fb95d5/actiontemplate-calculate-deployment-mode) created by Octopus Deploy. It compares the release number being deployed with the current release number for the environment. When the release number being deployed is greater than the current release number, it is a deployment. When it is less, it is a rollback. The step template sets a number of [output variables](/docs/projects/variables/output-variables), including ones you can use in variable run conditions.

### Skip database deployment step

The database deployment step should be skipped during a rollback. Unlike code, databases cannot easily be rolled back without risking data loss. For most rollbacks, you won't have database changes.
However, a rollback could accidentally be triggered with a database change. For example, rolling back a change in **Test** to unblock the QA team. Skipping these steps during the rollback reduces the chance of accidental data loss. To skip these steps during a rollback, set the variable run condition to be:

```
#{Octopus.Action[Calculate Deployment Mode].Output.RunOnDeploy}
```

We also recommend adding or updating the notes field to indicate it will only run on deployments.

:::figure
![Updating the notes field](/docs/img/deployments/patterns/rollbacks/nginx/images/rollback-nginx-notes-field.png)
:::

### Block release progression

Blocking Release Progression is an optional step to add to your rollback process. [The Block Release Progression](https://library.octopus.com/step-templates/78a182b3-5369-4e13-9292-b7f991295ad1/actiontemplate-block-release-progression) step template uses the API to [prevent the rolled back release from progressing](/docs/releases/prevent-release-progression). This step includes the following parameters:

- Octopus Url: `#{Octopus.Web.BaseUrl}` (default value)
- Octopus API Key: API Key with permissions to block releases
- Release Id to Block: `#{Octopus.Release.CurrentForEnvironment.Id}` (default value)
- Reason: This can be pulled from a manual intervention step or set to `Rolling back to #{Octopus.Release.Number}`

This step will only run on a rollback; set the run condition for this step to:

```
#{Octopus.Action[Calculate Deployment Mode].Output.RunOnRollback}
```

To unblock that release, go to the release page and click the **UNBLOCK** button.
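Under the hood, blocking release progression works by raising a "defect" against the release through the REST API; resolving the defect (or clicking **UNBLOCK**) lifts the block. The following Python sketch builds that request so you can see the moving parts. The server URL, API key, and IDs are placeholders, and the defects route should be confirmed against the REST API reference for your Octopus version:

```python
import json
from urllib import request

# Hypothetical values - replace with your own instance and release ID.
OCTOPUS_URL = "https://octopus.example.com"
API_KEY = "API-XXXXXXXX"  # placeholder - never hard-code real keys


def build_block_request(space_id: str, release_id: str, reason: str) -> request.Request:
    """Build the request that raises a defect against a release.

    An unresolved defect is what prevents the release from progressing
    through its lifecycle.
    """
    payload = {"Description": reason}
    return request.Request(
        f"{OCTOPUS_URL}/api/{space_id}/releases/{release_id}/defects",
        data=json.dumps(payload).encode(),
        headers={"X-Octopus-ApiKey": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )


req = build_block_request("Spaces-1", "Releases-123", "Rolling back to 1.2.3")
print(req.full_url)
```

As with any API automation, the request is only submitted when handed to an HTTP client such as `urllib.request.urlopen(req)`; the sketch deliberately stops before that point.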
# SSH target requirements

Source: https://octopus.com/docs/infrastructure/deployment-targets/linux/ssh-requirements.md

Before you can configure your [SSH deployment targets](/docs/infrastructure/deployment-targets/linux/ssh-target), they must meet the [requirements](/docs/infrastructure/deployment-targets/linux/#requirements) for a Linux server and the following additional requirements:

- It must be accessible through SSH and SFTP (see [creating an SSH Key Pair](/docs/infrastructure/accounts/ssh-key-pair/#Creating-a-SSH-Key-Pair)).

## Bash startup files

When connecting to a target over SSH, the Octopus Server connects and then executes the script via the `/bin/bash` command to ensure it is running with a bash shell (and not the default terminal shell for that user). Any login scripts that you wish to run should therefore be put into the `.bashrc` script file, since this is invoked for non-login shells.

For example, with targets on a Mac, the default `$PATH` variable may be missing `/usr/sbin`. This can be added in the `.bashrc` script with the line:

```
PATH=$PATH:/usr/sbin
```

If the `.bashrc` file doesn't already exist, create it in the home folder of the user that is connecting to the macOS instance. If the remote user is called `octopus`, then this file will be located at `/Users/octopus/.bashrc`. See the Bash Reference Manual, section [6.2 Bash Startup Files](http://www.gnu.org/software/bash/manual/bashref.html#Bash-Startup-Files) for more information about startup scripts.

## .NET {#dotnet}

[Calamari](/docs/octopus-rest-api/calamari) is the command-line tool that is invoked to perform the deployment steps on the deployment target. It runs on .NET and is built as a [.NET Core self-contained distributable](https://docs.microsoft.com/en-us/dotnet/core/deploying/#self-contained-deployments-scd). Since it is self-contained, .NET Core does not need to be installed on the target server.
However, there are still some [pre-requisite dependencies](https://learn.microsoft.com/en-us/dotnet/core/install/linux-scripted-manual#dependencies) required for .NET Core itself that must be installed.

## Python

Octopus can execute Python scripts on SSH targets provided the following criteria are met:

- Python is version 3.4+
- Python3 is on the path for the SSH user executing the deployment
- pip is installed, or the pycryptodome python package is installed

## Learn more

- Configure your [SSH deployment targets](/docs/infrastructure/deployment-targets/linux/ssh-target)
- [Linux blog posts](https://octopus.com/blog/tag/linux/1)

# Use IIS as a reverse proxy for Octopus Deploy

Source: https://octopus.com/docs/installation/load-balancers/use-iis-as-reverse-proxy.md

There are scenarios in which you may be required to run Octopus Deploy behind a reverse proxy, such as compliance with specific organization standards, or a need to add custom HTTP headers. This document outlines how to use Microsoft's Internet Information Services (IIS) as that reverse proxy, using [URL Rewrite](https://www.iis.net/downloads/microsoft/url-rewrite) and [Application Request Routing](https://www.iis.net/downloads/microsoft/application-request-routing) (ARR).

This example assumes:

- IIS will terminate your SSL connections.
- [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) are not required.

Our starting configuration:

- Octopus Deploy installed and running. For guidance on this topic, see [Installing Octopus](/docs/installation).
- Valid SSL certificate installed in the Local Certificate store. For guidance on this topic, please follow [Importing your SSL certificate](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https/#import-ssl-certificate).
- IIS Management Console installed.
For guidance on this topic, please follow [this Microsoft Docs article](https://docs.microsoft.com/en-us/iis/install/installing-iis-85/installing-iis-85-on-windows-server-2012-r2).

At the end of this walk-through, you should be able to:

- Communicate with Octopus Deploy over a secure connection.
- Set and verify a custom HTTP header with IIS.

:::figure
![](/docs/img/administration/high-availability/design/images/create-server-farm.png)
:::

## Install URLRewrite and ARR

URLRewrite and Application Request Routing are provided by the [Microsoft Web Platform Installer](https://www.microsoft.com/web/downloads/platform.aspx). After installing the Web Platform Installer, search for "URL Rewrite" and "Application Request Routing", and install. Alternatively, use the following PowerShell snippet:

```powershell
$downloadUrl = "https://download.microsoft.com/download/8/4/9/849DBCF2-DFD9-49F5-9A19-9AEE5B29341A/WebPlatformInstaller_x64_en-US.msi"
$downloadTarget = ([uri]$downloadUrl).segments | select -last 1
Invoke-WebRequest $downloadUrl -OutFile $env:tmp\$downloadTarget
Start-Process $env:tmp\$downloadTarget '/qn' -PassThru | Wait-Process
Set-Location ($env:ProgramFiles + "\Microsoft\Web Platform Installer")
.\WebPICmd.exe /Install /Products:'UrlRewrite2,ARRv3_0' /AcceptEULA /Log:$env:tmp\WebPICmd.log
```

## Configure SSL on default web site

1. Open the IIS Management Console (`inetmgr.exe`).
1. Navigate to the Default Web Site.
1. In the action pane, click on **Bindings**.
1. Click **Add**.
1. Select **https**.
1. A drop-down box will appear with your installed certificates displayed.
1. Select your installed certificate. If you don't see your certificate listed, refer back to [this Microsoft article](https://learn.microsoft.com/en-us/dotnet/framework/wcf/samples/iis-server-certificate-installation-instructions).
1. Optional: Fill in your correct IP address and/or hostname, and click **OK**.
1.
Optional: Remove the HTTP (non-SSL) binding - this is a recommended security practice.

## Verify SSL is correctly configured

In a web browser, navigate to `https://servername/` (note the 's'). You should see the IIS default page displayed in your browser.

:::figure
![IIS Default Page](/docs/img/security/exposing-octopus/images/default-page.png)
:::

## Configure URLRewrite

:::div{.success}
After installing URLRewrite and ARR, you may need to restart IIS and/or the IIS Management Console to ensure that the URLRewrite icon appears correctly.
:::

Open the IIS Management Console (`inetmgr.exe`). Navigate to the Default Web Site. Click on the URLRewrite icon to bring up the URLRewrite interface. In the action pane, click on "Add Rule(s)". Under "Select a Rule Template", choose "Reverse Proxy".

![Adding a Reverse Proxy Rule in URL Rewrite](/docs/img/security/exposing-octopus/images/addrules.png)

If you have never enabled reverse proxy functionality before, you'll be prompted to enable it. In the "Add Reverse Proxy Rules" dialog, specify the URL of your backend Octopus Server in "Inbound Rules". In our example, this is `server_name:8080`. Select "Enable SSL offloading". Click OK.

:::figure
![Configuring a Reverse Proxy Rule](/docs/img/security/exposing-octopus/images/rprules.png)
:::

:::div{.success}
There is no need to specify outbound rules, as the Octopus Portal always uses relative links.
:::

Click OK and close down all dialogs. You should now be able to navigate to https://servername/ in your browser and log in to Octopus Deploy.

:::div{.warning}
**Polling Tentacles are not supported with this scenario**
Polling Tentacles communicate with the Octopus Server over an end-to-end encrypted channel. This solution does not currently support Polling Tentacles.
:::

## Example: Add a custom HTTP header in IIS

Open the IIS Management Console (`inetmgr.exe`). Navigate to the Default Web Site. In the main window, navigate to "HTTP Response Headers". In the action pane, click "Add".
In the dialog, enter the following:

- Name: `x-octopus-servedby`
- Value: `IIS`

Click OK.

## Verify the custom HTTP header

Open a PowerShell prompt. Type the following command (replacing 'server_name' as appropriate):

```powershell
Invoke-WebRequest https://server_name | select -expand Headers
```

You should see your `x-octopus-servedby` header listed in the returned headers.

# octopus account gcp list

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-gcp-list.md

List Google Cloud accounts in Octopus Deploy

```text
Usage:
  octopus account gcp list [flags]

Aliases:
  list, ls

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**
Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus account gcp list
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Built-in package repository triggers

Source: https://octopus.com/docs/projects/project-triggers/built-in-package-repository-triggers.md

Formerly known as ***Automatic Release Creation***. Functionality remains the same.

## Getting started {#BuiltInPackageRepositoryTriggers-GettingStarted}

If you use the [built-in Octopus package repository](/docs/packaging-applications/package-repositories), you can select a package that, when uploaded, will automatically create a release.

:::div{.warning}
**Built-in repository only**
This trigger only supports the [built-in package repository](/docs/packaging-applications/package-repositories).
There is some support for external feeds using the [external feed triggers](/docs/projects/project-triggers/external-feed-triggers).
:::

From the project's trigger tab, click **Add Trigger**, and then select **Built-in package repository**. As a project can contain multiple packages, make sure you select the package that is always uploaded *last* in your build-and-push CI process.

:::figure
![Built-in package repository package selection](/docs/img/projects/project-triggers/images/built-in-package-repository-package-selection.png)
:::

When a release is created this way, the audit log will show that the release was created by the trigger.

:::figure
![Built-in package repository release history](/docs/img/projects/project-triggers/images/built-in-package-repository-release-history.png)
:::

If you combine uploading a package with the automatic deployment feature within [lifecycle phases](/docs/releases/lifecycles/#Lifecycles-LifecyclePhases), you can push a package to the internal repository, create a release, and have it automatically deploy.

:::div{.hint}
The release number that is created is guided by the Release Versioning setting under **Settings**.
:::

## Channels {#BuiltInPackageRepositoryTriggers-Channels}

You must select the [channels](/docs/releases/channels) that will be used for any automatically created releases.
This means that **only one channel for each project can have a built-in package repository trigger enabled at any one time.** This can be limiting; here are some points to consider:

- Use one of the [build-server extensions](/docs/packaging-applications/build-servers/), or the [Octopus CLI](/docs/octopus-rest-api/octopus-cli/create-release), to create releases instead of using triggers - this will automatically determine the best channel based on the release being created.
- Choose the channel that will be used most commonly for automatically creating releases, and create releases manually for the other channels.
- Try creating some releases manually for the selected channel to make sure it works as expected.

## Automatically creating pre-releases {#BuiltInPackageRepositoryTriggers-AutomaticallyCreatingPreReleases}

When you push a package to your trigger step, Octopus will look for the latest available package for all other steps, **excluding pre-release packages by default** - see [this thread](https://help.octopus.com/t/arc-not-working-with-pre-release-builds/3646) for background. One way to work around this behavior is to create a channel with the appropriate version rules so that the "latest available package" will be the pre-release packages you expected. The best way to test this is to practice creating releases manually for that channel - the "latest available package" will work the same way for manual and automatically created releases.

## Troubleshooting {#BuiltInPackageRepositoryTriggers-Troubleshooting}

When you are using built-in package repository triggers there are many reasons why a release may not be created successfully. Take some time to consider the following troubleshooting steps:

1. **Inspect the server logs** for warnings in **Configuration ➜ Diagnostics** - Octopus will log the reason why the automatic release creation failed as errors or warnings.
2.
Ensure you are pushing the package to the **built-in package repository** - use [external feed triggers](/docs/projects/project-triggers/external-feed-triggers) if you are pushing packages to other feeds.
3. Ensure you have **configured the built-in package repository trigger** for the project based on the **correct package**.
4. When using channels, ensure you have **configured the built-in package repository trigger for the desired channel**.
5. Ensure you are pushing a **new version** of the package - Octopus will not create a release where the package has already been used for creating a release.
6. Ensure you are pushing a package that Octopus will consider as the **latest available package** - see the section above on [automatically creating pre-releases](#BuiltInPackageRepositoryTriggers-AutomaticallyCreatingPreReleases).
7. Ensure the release creation package step **does not** use variables for the PackageId - Octopus will only create a release where the package is constant.
8. When a release has **multiple packages**, ensure you configure the built-in package repository trigger to use the **last package that is pushed to the built-in repository** - otherwise some of the packages required for the release will be missing.
9. When using channels, the package **must satisfy the version rules** for the channel being used for the built-in package repository trigger - try creating some releases manually.
10. Are you pushing **pre-release** packages? See the section above on [automatically creating pre-releases](#BuiltInPackageRepositoryTriggers-AutomaticallyCreatingPreReleases).
11. Ensure the account pushing the package has the required permissions for **each** of the **projects** and **environments** that will be involved in creating (and potentially deploying) the release.
Consider which of the following permissions may be required depending on your circumstances:

- `BuiltInFeedPush`
- `DeploymentCreate`
- `EnvironmentView`
- `FeedView`
- `LibraryVariableSetView`
- `LifecycleView`
- `MachineView`
- `ProcessView`
- `ReleaseCreate`
- `VariableView`

:::div{.hint}
**Consider using a build server extension**
We have [extensions/plugins](/docs/packaging-applications/build-servers/) available for the most popular build servers. These extensions will help you [create packages](/docs/packaging-applications), [push those packages to the built-in repository](/docs/packaging-applications/package-repositories/built-in-repository/#pushing-packages-to-the-built-in-repository), create releases, and deploy them to your environments:

- [AppVeyor](/docs/packaging-applications/build-servers/appveyor)
- [Azure DevOps & Team Foundation Server](/docs/packaging-applications/build-servers/tfs-azure-devops)
- [Bamboo](/docs/packaging-applications/build-servers/bamboo)
- [BitBucket Pipelines](/docs/packaging-applications/build-servers/bitbucket-pipelines)
- [Continua CI](/docs/packaging-applications/build-servers/continua-ci)
- [Jenkins](/docs/packaging-applications/build-servers/jenkins)
- [TeamCity](/docs/packaging-applications/build-servers/teamcity)
:::

## Learn more

Take a look at the [Octopus Guides](https://octopus.com/docs/guides), which cover building and packaging your application, creating releases, and deploying to your environments for your CI/CD pipeline.

# Channels

Source: https://octopus.com/docs/releases/channels.md

As you deploy your projects, you can assign [releases](/docs/releases) of projects to specific channels. This is useful when you want releases of a project to be treated differently depending on the criteria you've set. Without channels, you could find yourself duplicating projects in order to implement multiple release strategies. This would, of course, leave you trying to manage multiple duplicated projects.
Channels let you use one project with multiple release strategies.

Channels can be useful in the following scenarios:

- Feature branches (or experimental branches) are deployed to test environments but not production.
- Early access versions of the software are released to members of your early access program.
- Hot-fixes are deployed straight to production and then deployed through the rest of your infrastructure after the fix has been released.

When you are implementing a deployment process that uses channels, you can scope the following to specific channels:

- [Lifecycles](#control-deployment-lifecycle)
- [Steps](#modify-deployment-process)
- [Variables](#variables)
- [Tenants](#deploy-to-tenants)

You can also define rules per channel to ensure that only package versions and Git resources which meet specific criteria are deployed to specific channels.

## Managing channels

Every [project](/docs/projects) has a default channel. Channels are managed from the Projects page by selecting the specific project you are working with and clicking **Channels**. As you add more channels, you'll notice that they are arranged in alphabetical order on the channels page.

### Channel Types

There are two types of channel:

- Lifecycle channels: Releases in this channel will progress through the lifecycle defined for the channel.
- Ephemeral Environment channels: Releases in this channel will be deployed to ephemeral environments. The environment will be provisioned automatically when it is first deployed to.

A project can only have one ephemeral environment channel.

## Create a new lifecycle channel

1. From the Channels page, click on the **Add Channel** button.
2. Select **Lifecycle** for the channel type.
3. Give the channel a name and add a description. The channel name must be unique within the project.
4. Select the [lifecycle](/docs/releases/lifecycles/) the channel will use, or allow the channel to inherit the default lifecycle for the project.
See the [lifecycle docs](/docs/releases/lifecycles) for information about creating new lifecycles.
5. If you want to make this the default channel for the project, click the **Default channel** check-box.
6. Configure the [channel rules](#channel-rules).
   - [Package version](#version-rules) rules will be used to enforce which versions of your packages are deployed to this channel.
   - [Git protection rules](#git-protection-rules) will be used to control the use of files from Git repositories during deployments.
7. Configure any [custom fields](#custom-fields) you want to require when creating releases in this channel.

## Create a new ephemeral environment channel

Ephemeral Environment channels are designed to work with [ephemeral environments](/docs/projects/ephemeral-environments). When a release is deployed to an ephemeral environment channel, the environment will be provisioned automatically if it does not already exist.

1. From the Channels page, click on the **Add Channel** button.
2. Select **Ephemeral Environment** for the channel type.
3. Give the channel a name and add a description. The channel name must be unique within the project.
4. If you want to make this the default channel for the project, click the **Default channel** check-box.
5. Select the [parent environment](/docs/projects/ephemeral-environments#parent-environment).
6. Select whether you want to [automatically deploy](/docs/projects/ephemeral-environments#auto-deploy) to the environment when a release is created.
7. Provide a [name template](/docs/projects/ephemeral-environments#naming) for the ephemeral environment.
8. Configure the [channel rules](#channel-rules).
   - [Package version](#version-rules) rules will be used to enforce which versions of your packages are deployed to this channel.
   - [Git protection rules](#git-protection-rules) will be used to control the use of files from Git repositories during deployments.
9.
Configure any [custom fields](#custom-fields) you want to require when creating releases in this channel.

### Channel rules

Channels allow you to configure rules to ensure that only package versions and Git resources that meet specific criteria can be deployed using the channel. When creating a release for a channel with rules, an option can be configured on the project to allow the channel rules to be ignored. This option is disabled by default on new projects, but can be enabled in project settings.

### Package version rules {#version-rules}

Package version rules assist in selecting the correct versions of packages for the channel. They are only used when creating a release, either manually or via [project triggers](/docs/projects/project-triggers).

:::div{.hint}
Version rules will work best when you follow [Semantic Versioning (SemVer 2.0.0)](http://semver.org) for your versioning strategy.
:::

1. When viewing a channel, click **Add rule** in the Package Version Rules section.
2. Select the package step(s) (and as such the packages) the version rule will be applied to.
3. Enter the version range in the **Version range** field. You can use either [NuGet](https://oc.to/NuGetVersioning) or [Maven](https://oc.to/MavenVersioning) versioning syntax to specify the range of versions to include. You can use the full semantic version as part of your version range specification. For example: `[2.0.0-alpha.1,2.0.0)` will match all 2.0.0 pre-releases (where the pre-release component is `>= alpha.1`), and will exclude the 2.0.0 release.
4. Enter any pre-release tags you want to include. Following the standard 2.0.0 [SemVer syntax](http://semver.org/), a pre-release tag is the alphanumeric text that can appear after the standard *major.minor.patch* pattern, immediately following a hyphen. Providing a regex pattern for this field allows the channel to filter packages based on their tag in a very flexible manner.
The [SemVer build metadata](https://semver.org/#spec-item-10) will also be evaluated by the regex pattern. Some examples:

| **Pattern** | **Description** | **Example use-case** |
| --- | --- | --- |
| \^[\^\\+].* | matches any pre-release | Enforce inability to push to production by specifying a lifecycle that stops at staging |
| ^(\|\\+.*)$ | matches any non pre-release, but allows build metadata | Ensure a script step only runs for non pre-release packages |
| ^$ | matches versions with no pre-release or metadata components | Official releases are filtered to have nothing other than core version components (e.g. 1.0.0) |
| ^beta | matches pre-releases like `beta` and `beta0003` | Deploy pre-releases using a lifecycle that goes directly to a pre-release environment |
| beta | matches pre-releases with beta anywhere in the tag, like `beta` and `my-beta` | Deploy pre-releases using a lifecycle that goes directly to a pre-release environment |
| ^(?!beta).+ | matches pre-releases that don't start with beta | Consider anything other than 'beta' to be a feature branch package so you can provision short-term infrastructure and deploy to it |
| ^bugfix- | matches any with `bugfix-` prefix (e.g. `bugfix-sys-crash`) | Bypass Dev & UAT environments when urgent bug fixes are made to the mainline branch and need to be released straight from Staging to Production |
| ^beta | matches pre-releases which begin with `beta` but *not* metadata containing `beta` | Prevent SemVer metadata from inadvertently matching the rule |

:::div{.hint}
If adding a pre-release tag to channels, you will also need to add the tag `^$` to your `default` channel.
:::

5. Click **Design rule**. The **Design Version Rule** window will show a list of the packages that will be deployed as part of the deploy package step selected earlier.
The versions of the packages that will be deployed in this channel under the version rules you've designed will be highlighted in green, and the versions that will not be deployed will be shown in red. You can continue to edit the version rules in this window.

:::figure
![Design version rule](/docs/img/releases/channels/images/channel-design-version-rule.png)
:::

6. Click **Save**.

### Git protection rules {#git-protection-rules}

Git protection rules allow you to control the use of files from Git repositories during deployments, ensuring that important environments such as Production are protected. They are used when creating a release, either manually or via [project triggers](/docs/projects/project-triggers).

#### External repository rules

You can use external repository rules to restrict which branches and tags can be used for steps that source files from an external Git repository.

1. When viewing a channel, click **Add rule** in the Git Protection Rules section.
2. Select the step(s) that use external Git repositories the rule will be applied to.
3. Enter patterns (separated by commas) to restrict which branches and/or tags can be selected when creating releases. Wildcard characters can be used; see [Glob patterns in Git rules](#git-rules-glob-patterns) for more information.
4. Click **Save**.

:::figure
![External repository rules example](/docs/img/releases/channels/images/external-repository-rules.png)
:::

#### Project repository (version-controlled projects)

For [version-controlled](/docs/projects/version-control) projects, you can use rules to restrict which branches and tags can be used as the source of the deployment process and variables when creating a release.

1. When viewing a channel, expand the **Project Repository** section.
2. Enter patterns (separated by commas) to restrict which branches and/or tags can be selected when creating releases.
Wildcard characters can be used; see [Glob patterns in Git rules](#git-rules-glob-patterns) for more information. 3. Click **Save**. When patterns are entered, a sample of the matching branches/tags from the Git repository used by the project will be shown to help in configuring the rules. :::figure ![Project repository example](/docs/img/releases/channels/images/project-repository.png) ::: #### Glob patterns in Git protection rules {#git-rules-glob-patterns} Branch and tag patterns used in Git protection rules support glob patterns and can include the following wildcard characters: | **Character** | **Description** | **Example** | | --- | --- | --- | | `*` | Matches multiple characters except `/` | Branch pattern of `release/*` will match branch `release/1.0.0` but not `release/1.0.0/hotfix1` | | `**` | Matches multiple characters including `/` | Branch pattern of `release/**` will match branch `release/1.0.0` and `release/1.0.0/hotfix1` | | `?` | Matches a single character | Tag pattern of `v?` will match a tag of `v1` but not `v1.0.0` | | `[0-9]` | Matches a single character in the range | Tag pattern of `v[0-9].[0-9].[0-9]` will match a tag `v1.0.0` | | `[abc]` | Matches a single character from the set | Branch pattern of `release/[abc]*` will match branch `release/a-new-branch` but not `release/my-new-branch` | #### Advanced patterns Some Git providers support Git references outside of branches and tags. For example, when a pull request is created in a GitHub repository, a merge branch will be created with a Git reference of `refs/pull/{id}/merge`, containing the merged code between the source and target branches of the pull request. To target these references in Git protection rules, you can click the **Advanced** button for project repository and external repository rules and enter advanced patterns to match on. These patterns must be fully-qualified; any existing branches or tags that were entered will be fully-qualified for you. 
If the patterns entered in the advanced section only contain branches or tags, then you can click the **Basic** button to return to entering branches and tags without needing to fully-qualify these. Some examples: | **Type** | **Basic pattern** | **Fully-qualified pattern** | | --- | --- | --- | | Branch | `main` | `refs/heads/main` | | Tag | `v[0-9]` | `refs/tags/v[0-9]` | | GitHub pull request | N/A | `refs/pull/*/merge` | :::figure ![Advanced patterns example](/docs/img/releases/channels/images/project-repository.png) ::: ## Custom fields {#custom-fields} Channels allow you to define which custom fields are required when creating a release within the channel, ensuring you can use them within scripts and steps in the deployment process. A maximum of 10 custom fields can be defined on a channel. :::div{.hint} Support for custom fields in releases is rolling out to Octopus Cloud in Early Access Preview. ::: 1. When viewing a channel, click **Add Custom Field** in the Custom Fields section. 2. Enter a name and description for the field. 3. Click **Save**. :::figure ![Screenshot of editing custom fields for a channel showing a custom field for a Pull Request Number](/docs/img/releases/channels/images/channel-custom-fields.png) ::: ## Using channels {#using-channels} Once a project has more than one channel, there are a number of places where they may be used. ### Controlling deployment lifecycle {#control-deployment-lifecycle} Each channel defines which [lifecycle](/docs/releases/lifecycles) to use when promoting releases between environments. You can choose a lifecycle for each channel, or use the default lifecycle defined by the project. For instance, when you ship pre-release software to your early access users, you can use an early access (or beta) channel which uses a lifecycle that deploys the software to an environment your early access users have access to. 
:::figure ![Channel lifecycle](/docs/img/releases/channels/images/channel-lifecycle.png) ::: ### Modifying deployment process {#modify-deployment-process} Deployment steps can be restricted to only run on specific channels. For instance, you might decide you'd like to notify your early access users by email when an updated version of the software is available. This can be achieved by adding an email step to your deployment process and scoping the step to the early access channel. That way the step will only run when a release is deployed to the early access channel and your early access users will only receive emails about relevant releases. :::figure ![Step channel condition](/docs/img/releases/channels/images/step-channel-condition.png) ::: ### Variables {#variables} As you release software to different channels, it's likely that some of the variables in those channels will need to be different. [Variables](/docs/projects/variables) can be scoped to specific channels. :::figure ![Variable channel scope](/docs/img/releases/channels/images/variable-channel-scope.png) ::: ### Deploying to tenants {#deploy-to-tenants} You can control which releases will be deployed to certain tenants using channels. You can configure this under the **Tenants** section of a channel. In this example, releases in this channel will only be deployed to tenants tagged with `Early access program/2.x Beta`. :::figure ![Channel tenants](/docs/img/releases/channels/images/channel-tenants.png) ::: ## Creating releases Every release in Octopus Deploy must be placed into a channel. Wherever possible, Octopus will choose the best channel for your release, or you can manually select a channel for your release. ### Manually creating releases {#manually-create-release} When you are creating a release, you can select a channel. 
:::figure ![Channel release](/docs/img/releases/channels/images/channel-release.png) ::: Selecting the channel will cause the release to use the lifecycle associated with the channel (or the project default, if the channel does not have a lifecycle). It will also cause the deployment process and variables to be modified as specified above. The package list allows you to select the version of each package involved in the deployment. The *latest* column displays the latest packages that match the version rules defined for the channel (see [version rules](#version-rules) for more information). ### Using build server extensions or the Octopus CLI When using one of the [build server extensions](/docs/octopus-rest-api/) or the [Octopus CLI](/docs/octopus-rest-api/octopus-cli/create-release) to create releases, you can either let Octopus automatically choose the correct channel for your release (this is the default behavior), or choose a specific channel yourself. ### Built-in package repository triggers When adding a [built-in package repository trigger](/docs/projects/project-triggers/built-in-package-repository-triggers) to your project, you are required to select a channel (if the project has more than one). Any releases created automatically will use the configured channel. Additionally, any version rules configured for the channel will be used to decide whether a release is automatically created. In the following example, if version 3.1.0 of OctoFX is pushed to the built-in repository, no release will be created as the package version does not meet the version rule of the channel. :::figure ![Channel package version rule](/docs/img/releases/channels/images/channel-package-version-rule.png) ::: ## Discrete channel releases {#discrete-channel-releases} The scenarios channels are used to model can be split into two categories. 
In the first, the channel controls the way releases are deployed (different lifecycles, deployment steps, etc.), but the deployed releases should not be treated differently. An example of this would be a *Hotfix* channel, used to select a lifecycle designed to get releases to production quickly. In the second mode of use, releases deployed via different channels are different, and should be treated as such. As an example of this, imagine a company that makes a deployment tool available as both a downloadable self-hosted product and a cloud-hosted software-as-a-service product. In this example, the `self-hosted` and `cloud` channels not only select different lifecycles and deployment steps, but it is also desirable to view them as individual versions on the dashboard. In **Project Settings** there's an option named *Discrete Channel Releases*, designed to model this scenario. :::figure ![Discrete channel releases project setting](/docs/img/releases/channels/images/discrete-channel-release.png) ::: Setting this to `Treat independently from other channels` will cause: - Versions for each channel to be displayed on the dashboard - Each channel to be treated independently when applying release [retention policies](/docs/administration/retention-policies) The image below shows an example dashboard with discrete channel release enabled: ![Discrete channel releases on dashboard](/docs/img/releases/channels/images/discrete-channels-dashboard.png) ## Removing channels For projects using Config as Code, it's up to you to take care to avoid deleting any channels required by your deployments. See our [core design decisions](/docs/projects/version-control/unsupported-config-as-code-scenarios#core-design-decision) for more information. # Guided failures Source: https://octopus.com/docs/releases/guided-failures.md When deployments encounter errors, they will typically fail. 
However, the **guided failure** mode provides an option to prompt a user to intervene when a deployment encounters an error so that the deployment can continue. With guided failure mode enabled, the user can fail the process, and retry or ignore any steps that failed the first time. ## Enable guided failure mode for an environment Guided failure mode can be enabled per environment. When enabled for an environment, if a deployment encounters an error, Octopus will prompt a user to intervene. 1. Navigate to **Infrastructure ➜ Environments**. 1. Click the ... overflow menu for the specific environment you want to enable guided failure on and select *Edit*. 1. Expand the **Default Guided Failure Mode** section and tick the check-box to enable the feature. 1. Click **SAVE**. Note: you can still override this setting for individual deployments. ## Enabling guided failure mode for a project {#Guidedfailures-Enablingguidedfailuremode} By default, projects inherit their guided failure mode settings from the environments they are deploying to. This allows you to use guided failure mode for some environments but not others within the same project. For instance, if the test environment has guided failure mode disabled, but the production environment has guided failure mode enabled, errors encountered during deployment to the test environment will result in a failed deployment, whereas errors encountered during deployment to the production environment will prompt a user for instructions before failing. To override the guided failure settings of the environments being deployed to and set a project-level guided failure mode: 1. Navigate to the project's overview page, and select **Settings**. 1. Expand the **Default failure mode** section. 1. Select the mode you want to use, then click **SAVE**. 
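Putting the pieces above together, the effective mode resolves in order: a per-deployment choice (if made) wins, then the project's default failure mode, then the environment's Default Guided Failure Mode. The sketch below illustrates that precedence in Python; it is an illustration of the documented behavior, not Octopus Server code.

```python
# Illustrative sketch of guided-failure mode precedence - not Octopus code.
# A per-deployment override wins, then the project's default failure mode,
# then the environment's Default Guided Failure Mode setting.
def effective_guided_failure(deployment_override, project_setting, environment_default):
    for setting in (deployment_override, project_setting):
        if setting is not None:  # None means "use the inherited value"
            return setting
    return environment_default

# Test environment (guided failure off) vs. production (on), project inherits:
assert effective_guided_failure(None, None, environment_default=False) is False
assert effective_guided_failure(None, None, environment_default=True) is True
# A project-level setting overrides the environment default:
assert effective_guided_failure(None, True, environment_default=False) is True
```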
## Responding to a guided failure {#Guidedfailures-Whathappens} If something goes wrong during the deployment, Octopus will interrupt the deployment, and request guidance for how to handle the failure. 1. When a deployment encounters an error, Octopus will interrupt the deployment and wait for manual intervention. 1. A user with the correct [permissions](/docs/security/users-and-teams/user-roles) can claim the manual intervention by clicking **ASSIGN TO ME**. 1. Next, the user can choose between the following options: - **FAIL**: mark the deployment as failed, don't try anything else. - **RETRY**: retry the step where the error occurred. - **IGNORE**: skip the operation, but keep going with the deployment. - **EXCLUDE MACHINE FROM DEPLOYMENTS**: exclude the deployment target from the rest of the deployment and proceed. :::div{.success} Guided failure mode uses the same [user experience that is used for manual steps](/docs/projects/built-in-step-templates/manual-intervention-and-approvals/) (internally, requests for failure guidance, and manual steps, use the same implementation: we call them Interruptions in the [REST API](/docs/octopus-rest-api)). ::: Note: If a process step is set to [required](/docs/projects/steps/conditions/#required), then you will not see the manual intervention options like "IGNORE" and "EXCLUDE MACHINE FROM DEPLOYMENTS". # Running a runbook Source: https://octopus.com/docs/runbooks/running-a-runbook.md ## How to run a runbook in Octopus Deploy 1. Navigate to your project and select **Runbooks**. 1. Select the runbook you want to run. 1. Click **RUN...**. :::figure ![run runbook basic options](/docs/img/getting-started/first-runbook-run/images/run-runbook-basic-options.png) ::: 1. Select one or more environments for the execution. 1. Click **RUN** to run now, or select **Show advanced** to display advanced configuration options. ### Schedule a runbook run 1. 
After expanding the advanced options on your runbook run, expand the **WHEN** section and select **later**. 1. Specify the time and date you would like the runbook to run. 1. Click **RUN**. ### Exclude steps from runbook runs 1. After expanding the advanced options on your runbook run, expand the **Excluded steps** section and use the check-box to select steps to exclude from the runbook run. 1. Click **RUN**. ### Modify the guided failure mode Guided failure mode asks users to intervene when a runbook encounters an error. Learn more about [guided failures](/docs/releases/guided-failures). 1. After expanding the advanced options on your runbook run, expand the **Failure mode** section, and select the mode you want to use. 1. Click **RUN**. ### Run on a specific subset of deployment targets You can run a runbook on a specific subset of deployment targets. 1. After expanding the advanced options on your runbook run, expand the **Preview and customize** section. 1. Expand the **Deployment Targets** section. 1. Select your target selection method: - **Include all applicable deployment targets** (default) - **Include specific deployment targets**: Choose individual targets to include - **Exclude specific deployment targets**: Choose individual targets to exclude - **Include specific target tags**: Include targets with selected tags - **Exclude specific target tags**: Exclude targets with selected tags 1. Click **RUN**. 
## Further reading - [Runbooks vs Deployments](/docs/runbooks/runbooks-vs-deployments) - Understand the key differences - [Runbook Variables](/docs/runbooks/runbook-variables) - Learn about variable management - [Runbook Permissions](/docs/runbooks/runbook-permissions) - Configure access control - [Tag Sets](/docs/tenants/tag-sets) - Learn about creating and managing tags # Google Workspace authentication Source: https://octopus.com/docs/security/authentication/googleapps-authentication.md To use Google Workspace authentication with Octopus Server, Google Workspace must be configured to trust Octopus by setting it up as an app. This section covers the details of configuring the app. ## Configure Google Workspace ### Configure an app To configure an app within Google Workspace, you must have a Developer account at [https://developers.google.com](https://developers.google.com). This account will own the app configuration, so we recommend you create an account for company use, rather than using an individual account. Once you have an account, log in to [https://console.developers.google.com](https://console.developers.google.com) and perform the following actions: 1. Create a project for Octopus (this might take a minute or so), then within that project: 2. Under **APIs and services**, select **Credentials**. 3. Click the **Configure consent screen** button. 4. Select the User Type **Internal** and click **Create**. 5. Fill out the **App information**, including a descriptive **App name** such as Octopus Server or Octopus Cloud, and select an appropriate **User support email**. 6. Fill out the **App logo** details and upload a logo to make it easy to identify the application. You can download the Octopus logo [here](https://octopus.com/images/company/Logo-Blue_140px_rgb.png). 7. 
Fill out the **App domain** information, providing `https://octopus.com` as the **Application home page**, `https://octopus.com/privacy` as the **Application privacy policy link**, and `https://octopus.com/legal/customer-agreement` as the **Application Terms of Service link**. Add the Top Level Domain of your Octopus instance to the **Authorized domains** list. If you are setting Google Workspace up for **Octopus Cloud**, this will be `octopus.app` and `octopus.com`. 8. Fill out the **Developer contact information**. 9. Click **Save and continue**. 10. On the **Scopes** screen, click **Save and continue**. 11. Click **Back to dashboard**. 12. Select the **Credentials** tab and click **Create credentials**, selecting **Create OAuth client ID**. 13. Under **Application type**, select `Web application`. In the **Name** field, enter `Octopus`. Click **Add URI**, add `https://octopus.example.com/api/users/authenticatedToken/GoogleApps` (replacing `https://octopus.example.com` with the URL of your Octopus Server) to the **Authorized redirect URIs**, and click **Create**. 14. Enter a **Name** for identification, e.g. Octopus. This is the name that will appear when the user is asked to allow access to their details. 15. Take note of the **Client ID** and **Client secret** from the `OAuth client created` modal. :::div{.hint} **Tips:** - **Reply URLs are case-sensitive** - Be aware that the path in this URL after the domain name was **case-sensitive** during our testing. - **Not using SSL?** We highly recommend using SSL, but we know it's not always possible. If you do not have SSL enabled on your Octopus Server, you can use `http`. Please beware of the security implications of accepting a security token over an insecure channel. Octopus integrates with [Let's Encrypt](/docs/security/exposing-octopus/lets-encrypt-integration), making it easier to set up SSL on your Octopus Server. 
::: ## Configure Octopus Server You can configure the Google Workspace settings from the command line. You will need the **Client ID** and **Client secret** from the Credentials tab and your **hosted domain name**. :::div{.hint} Support for OAuth code flow with PKCE was introduced in **Octopus 2022.2.4498**. If you are using a version older than this, the **Client secret** setting is not required. ::: Once you have those values, run the following from a command prompt in the folder where you installed Octopus Server: ```powershell Octopus.Server.exe configure --googleAppsIsEnabled=true --googleAppsClientId=ClientID --googleAppsClientSecret=ClientSecret --googleAppsHostedDomain=your-domain.com ``` Alternatively, these settings can be defined through the user interface by selecting **Configuration ➜ Settings ➜ GoogleApps** and populating the fields `Is Enabled`, `Hosted Domain`, `Client ID` and `Client Secret`. :::figure ![Settings](/docs/img/security/authentication/images/google.png) ::: ### Octopus user accounts are still required Even if you are using an external identity provider, Octopus still requires a [user account](/docs/security/users-and-teams/) so you can assign those people to Octopus teams and subsequently grant permissions to Octopus resources. Octopus will automatically create a [user account](/docs/security/users-and-teams) based on the profile information returned in the security token, which includes an **Identifier**, **Name**, and **Email Address**. **How Octopus matches external identities to user accounts** When the security token is returned from the external identity provider, Octopus looks for a user account with a **matching Identifier**. If there is no match, Octopus looks for a user account with a **matching Email Address**. If a user account is found, the External Identifier will be added to the user account for next time. If a user account is not found, Octopus will create one using the profile information in the security token. 
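The matching steps described above can be expressed as a short sketch. This is illustrative Python under assumed data structures (`User` and `find_or_create_user` are made-up names), not how Octopus Server implements it.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the identity-matching behavior described above.
@dataclass
class User:
    name: str
    email: str
    external_ids: list = field(default_factory=list)

def find_or_create_user(users, identifier, name, email):
    # 1. Look for a user account with a matching external identifier.
    for user in users:
        if identifier in user.external_ids:
            return user
    # 2. Fall back to a matching email address, and record the external
    #    identifier on the account for next time.
    for user in users:
        if user.email == email:
            user.external_ids.append(identifier)
            return user
    # 3. Otherwise create a new account from the token's profile claims.
    user = User(name=name, email=email, external_ids=[identifier])
    users.append(user)
    return user
```

On a subsequent sign-in, the identifier recorded in step 2 lets step 1 match directly, so the email address is only consulted the first time.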
:::div{.success} **Already have Octopus user accounts?** If you already have Octopus user accounts and you want to enable external authentication, simply make sure the Email Address matches in both Octopus and the external identity provider. This means your existing users will be able to sign in using an external identity provider and still belong to the same teams in Octopus. ::: ### Getting permissions If you are installing a clean instance of Octopus Deploy, you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command: ```powershell Octopus.Server.exe admin --username USERNAME --email EMAIL ``` The most important part of this command is the email, as usernames are not necessarily included in the claims from the external providers. When the user logs in, the matching logic must be able to align their user record based on the email from the external provider, or they will not be granted permissions. ## Troubleshooting We do our best to log warnings to your Octopus Server log whenever possible. If you are having difficulty configuring Octopus to authenticate with Google Workspace, be sure to check your [server logs](/docs/support/log-files) for warnings. ### Double and triple-check your configuration Unfortunately, security-related configuration is sensitive to everything. Make sure: - You don't have any typos or copy-paste errors. - Remember things are case-sensitive. - Remember to remove or add slash characters as we've instructed - they matter too! ### Check OpenID Connect metadata is working You can see the OpenID Connect metadata by going to [https://accounts.google.com/.well-known/openid-configuration](https://accounts.google.com/.well-known/openid-configuration). 
### Inspect the contents of the security token Perhaps the contents of the security token sent back by Google Workspace aren't exactly the way Octopus expected, especially certain claims that may be missing or named differently. This will usually result in the Google Workspace user incorrectly mapping to a different Octopus User than expected. The best way to diagnose this is to inspect the JSON Web Token (JWT) that is sent from Google Workspace to Octopus via your browser. To inspect the contents of your security token: 1. Open your browser's Developer Tools and enable Network logging, making sure the network logging is preserved across requests. 2. In Chrome Dev Tools, this is called "Preserve Log": :::figure ![Preserve Log Checkbox](/docs/img/security/authentication/images/5866122.png) ::: 3. Attempt to sign into Octopus using Google Workspace and find the HTTP POST coming back to your Octopus instance from Google Workspace on a route like `/api/users/authenticatedToken/GoogleApps`. You should see an `id_token` field in the HTTP POST body. :::figure ![ID Token](/docs/img/security/authentication/images/5866125.png) ::: 4. Grab the contents of the `id_token` field and paste that into [https://jwt.io/](https://jwt.io/), which will decode the token for you. :::figure ![jwt.io](/docs/img/security/authentication/images/5866123.png) ::: 5. Don't worry if jwt.io complains about the token signature; it doesn't support RS256, which is used by Google Workspace. 6. Octopus uses most of the data to validate the token, but it primarily uses the `sub`, `email`, and `name` claims. If these claims are not present, you will likely see unexpected behavior. 7. If you are not able to figure out what is going wrong, please send a copy of the decoded payload to our [support team](https://octopus.com/support) and let them know what behavior you are experiencing. 
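If you prefer not to paste a production token into a third-party site, note that a JWT's payload is just base64url-encoded JSON and can be decoded locally without verifying the signature. The sketch below uses Python and a fabricated token; the claim values are invented for illustration.

```python
import base64
import json

def decode_jwt_payload(id_token):
    """Decode the (unverified) payload of a JWT: the middle of the three
    dot-separated base64url segments. This does NOT validate the
    signature - it is only for inspecting claims."""
    payload_b64 = id_token.split(".")[1]
    # Restore the padding that base64url encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake header.payload.signature token just to demonstrate.
claims = {"sub": "110169484474386276334", "email": "jane@example.com", "name": "Jane"}
encode = lambda d: base64.urlsafe_b64encode(json.dumps(d).encode()).rstrip(b"=").decode()
fake_token = f"{encode({'alg': 'RS256'})}.{encode(claims)}.signature"

decoded = decode_jwt_payload(fake_token)
print(decoded["sub"], decoded["email"], decoded["name"])
# → 110169484474386276334 jane@example.com Jane
```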
# OpenTelemetry trace files Source: https://octopus.com/docs/support/opentelemetry-trace-files.md Octopus Server records [OpenTelemetry](https://opentelemetry.io/) (OTEL) traces that capture internal operations like HTTP requests, task execution, and more. These traces are saved as `.tar` files to disk and can be sent to Octopus support to help diagnose issues. :::div{.hint} **OpenTelemetry trace files are only available for self-hosted instances of Octopus Server.** This feature is available from 2026.1. It is disabled by default and can be enabled at **Configuration ➜ Diagnostics**. ::: ## Permissions Viewing and changing the OpenTelemetry trace files configuration, and downloading or deleting trace files, requires the [`AdministerSystem`](/docs/security/users-and-teams/default-permissions) permission. ## Enabling and configuring 1. Navigate to **Configuration ➜ Diagnostics**. 2. Under **Server Traces**, click `Configure`. 3. Toggle **Enabled**. 4. Optionally configure **Max storage size** and **Retention days**. 5. Click `Save`. :::figure ![OpenTelemetry trace files configuration page](/docs/img/support/images/trace-file-export-configuration.png) ::: :::div{.hint} Configuration changes take effect within about 1 minute as the server syncs settings in the background. ::: ## What traces contain Traces are made up of *spans*, each representing a unit of work performed by the server. The spans captured include: - **HTTP requests** - inbound and outbound HTTP requests with timing and status information. - **Task execution** - deployments, runbook runs, and other server tasks. - **Internal operations** - other server-side work that provides context when diagnosing problems. Each span includes attributes such as timing, status codes, and contextual metadata that help Octopus support engineers understand what the server was doing and where time was spent. 
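To give a feel for that span data, the sketch below walks a fabricated trace record in Python. The nesting and field names (`resourceSpans` → `scopeSpans` → `spans`, with nanosecond timestamps as strings) follow the standard OTLP JSON encoding, but the record itself is invented for illustration.

```python
import json

# A fabricated OTLP JSON trace record, shaped like the spans described above.
sample = json.loads("""
{
  "resourceSpans": [{
    "scopeSpans": [{
      "spans": [{
        "name": "HTTP GET /api/spaces",
        "startTimeUnixNano": "1700000000000000000",
        "endTimeUnixNano": "1700000000150000000",
        "attributes": [{"key": "http.status_code", "value": {"intValue": "200"}}]
      }]
    }]
  }]
}
""")

# Walk resourceSpans -> scopeSpans -> spans and report each span's duration.
for resource in sample["resourceSpans"]:
    for scope in resource["scopeSpans"]:
        for span in scope["spans"]:
            duration_ms = (int(span["endTimeUnixNano"]) - int(span["startTimeUnixNano"])) / 1e6
            print(f"{span['name']}: {duration_ms:.0f} ms")
            # → HTTP GET /api/spaces: 150 ms
```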
Trace data is stored in OTLP JSON format, which is compatible with the open source [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) after decompression. ## File location Trace files are stored under the `Telemetry/OpenTelemetry/traces/` subdirectory of the server's **ClusterSharedDirectory** as `.tar` files. ClusterSharedDirectory is the shared storage location Octopus uses for logs and diagnostics data - its path can be found at [**Configuration ➜ Settings ➜ Server Folders**](/docs/administration/managing-infrastructure/server-configuration-and-file-storage/#server-folders) (or configured via the [`path` command](/docs/octopus-rest-api/octopus.server.exe-command-line/path)), and defaults to the server's Home Directory if not explicitly set. In a [High Availability (HA)](/docs/administration/high-availability) cluster, all nodes write to this shared path, so trace files from every node are available in one location. ## Retention and disk usage Traces are written to disk in 50 MB chunks. The configurable size limit controls how many of these chunk files will be written. When the total size approaches the configured size limit, the oldest trace files are deleted first to make room. The minimum configurable storage size is **250 MB**. Retention by age is **unlimited by default**, but you can configure a maximum number of days to keep trace files. Size-based and time-based retention work together - files are removed when they exceed either limit. If a node crashes or restarts mid-write, any `.tar.inprogress` files left behind are automatically recovered after 10 minutes of inactivity and treated as completed trace files. ## Downloading trace files Trace files can be downloaded directly from the Diagnostics page: 1. Navigate to **Configuration ➜ Diagnostics**. 2. Under **Server Traces**, click `Download` to save all current trace data as a single Zstandard compressed JSONL file. 
:::figure ![Download button for OpenTelemetry trace files on the Diagnostics page](/docs/img/support/images/download-trace-files.png) ::: :::div{.hint} Downloading is available even while trace export is actively running - the download reads safely alongside the active writer. ::: ## Deleting trace files To free disk space without waiting for retention limits to apply: 1. Navigate to **Configuration ➜ Diagnostics**. 2. Under **Server Traces**, click `Configure`. 3. Disable the feature and click `Save`. 4. Click the `Clear trace files` button. ## Sending trace files to support If Octopus support needs your trace files to help diagnose an issue, they will guide you through how to provide them. # octopus account generic-oidc Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-generic-oidc.md Manage Generic OpenID Connect accounts in Octopus Deploy ```text Usage: octopus account generic-oidc [command] Available Commands: create Create an Generic OpenID Connect account help Help about any command list List Generic OpenID Connect accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus account generic-oidc [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. 
::: ```bash octopus account generic-oidc list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Git repository triggers Source: https://octopus.com/docs/projects/project-triggers/git-triggers.md Git repository triggers allow you to automatically create a new release when a new commit is pushed to a Git repository. ## Getting started Navigate to your project and click **Triggers**. Click **Add Trigger** on the right-hand side of the page, and select **Git repository**. Enter a name and description for your trigger. ## Channels and lifecycles :::div{.hint} Git repository triggers create releases based on the default branch in version-controlled projects ::: If your project contains multiple [channels](/docs/releases/channels), you have the option of selecting which channel this trigger will apply to. The releases created by the trigger will use this channel. The versions used for those releases are guided by [release versioning](/docs/releases/release-versioning) under **Settings**, and will use the rules defined there. A preview of the [lifecycle](/docs/releases/lifecycles) used by the selected channel is displayed. By clicking the link, you can modify the [lifecycle's phases](/docs/releases/lifecycles/#Lifecycles-LifecyclePhases) to have a release created and deployed to selected environments whenever a new commit is pushed to the monitored repository. ## Trigger sources Git repositories referenced in your project's deployment process can be selected to be monitored by the trigger to create releases. Please note that for [configuration as code](/docs/projects/version-control/config-as-code-reference) projects, only steps that reference Git repositories in the deployment process on the **default branch** can be referenced. Any changes to the deployment process in other branches will not be available for use in Git triggers. 
:::figure ![Repository selection](/docs/img/projects/project-triggers/images/git-triggers/git-triggers-repository-selection.png) ::: ### File path filters When selecting a repository to monitor, you will be provided with the option to add file path filters. These filters allow you to specify file paths to include or exclude from the monitoring of new commits. :::figure ![File path filters](/docs/img/projects/project-triggers/images/git-triggers/git-triggers-file-path-filters.png) ::: - If no file path filters are specified, all commits to the monitored repository will trigger the creation of a new release. - If file paths are set to be included, only changes to those file paths will be monitored; all other file paths will be excluded. - If file paths are set to be excluded, changes to those file paths will not be monitored; all other file paths will be included. The file path filters support glob patterns and can include the following wildcard characters: | **Character** | **Description** | **Example** | | --- | --- | --- | | `*` | Matches multiple characters except `/` | File path pattern of `source/*` will match the file path `source/data` but not `source/data/pages` | | `**` | Matches multiple characters including `/` | File path pattern of `source/**` will match the file paths `source/data` and `source/data/pages` | | `?` | Matches a single character | File path pattern of `api/v?` will match a file path of `api/v1` but not `api/v1.1` | | `[0-9]` | Matches a single character in the range | File path pattern of `source/docs/version/[0-9]` will match the file path `source/docs/version/1` but not `source/docs/version/10` | | `[abc]` | Matches a single character from the set | File path pattern of `docs/[abc]*` will match the file path `docs/credits` but not `docs/references` | ## History The 
history section contains information about the last time the trigger was evaluated and the last release that was created by the trigger. Triggers are evaluated every three minutes and the results will be reported here.

- Outcome: Whether any action was taken, or whether an error occurred during processing.
- Reason: Additional information about the outcome.
- Last executed at: The time the task was run.
- Discovered commits: The branch and commit hash that were found in this execution.

If the trigger has created a release, a link to the created release will be shown alongside the date it was created.

:::figure
![History](/docs/img/projects/project-triggers/images/git-triggers/git-triggers-history.png)
:::

# Send a secret to Octopus

Source: https://octopus.com/docs/support/send-a-secret-to-octopus.md

Sometimes you may need to send sensitive information to Octopus support, such as a Master Key. To do so, place your secret in the following PowerShell script and it will encrypt it for Octopus eyes only.

```powershell
$yourSecret = "Hello Octopus!"
# Place your secret here

$octopusPublicKey = "MIIDnzCCAwigAwIBAgIJAK5yFHmnxrYxMA0GCSqGSIb3DQEBBQUAMIGSMQswCQYDVQQGEwJBVTEMMAoGA1UECBMDUUxEMREwDwYDVQQHEwhCcmlzYmFuZTEhMB8GA1UEChMYT2N0b3B1cyBEZXBsb3kgUHR5LiBMdGQuMRcwFQYDVQQDEw5PY3RvcHVzIERlcGxveTEmMCQGCSqGSIb3DQEJARYXaGVsbG9Ab2N0b3B1c2RlcGxveS5jb20wHhcNMTQwNzI1MTE0NzI2WhcNMzIxMDA4MTE0NzI2WjCBkjELMAkGA1UEBhMCQVUxDDAKBgNVBAgTA1FMRDERMA8GA1UEBxMIQnJpc2JhbmUxITAfBgNVBAoTGE9jdG9wdXMgRGVwbG95IFB0eS4gTHRkLjEXMBUGA1UEAxMOT2N0b3B1cyBEZXBsb3kxJjAkBgkqhkiG9w0BCQEWF2hlbGxvQG9jdG9wdXNkZXBsb3kuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDD532q7wcbDAE65sZn5kdWQEv+yFHTUn9wPXEfPztv1cc/xjLts6zuKcfcRVITyB+n02Rg/VAGpNdZeAIWTtptKLkcdttwf+xoySPF13jc7DSnYabGamRR/hqzn9QcLq87WHIQF8olecpokoTsdBfE6e3idR8hLKKIlJgb5g5dcwIDAQABo4H6MIH3MB0GA1UdDgQWBBRYd4/ytF84FZVaSVHfhPb0Z/EYZzCBxwYDVR0jBIG/MIG8gBRYd4/ytF84FZVaSVHfhPb0Z/EYZ6GBmKSBlTCBkjELMAkGA1UEBhMCQVUxDDAKBgNVBAgTA1FMRDERMA8GA1UEBxMIQnJpc2JhbmUxITAfBgNVBAoTGE9jdG9wdXMgRGVwbG95IFB0eS4gTHRkLjEXMBUGA1UEAxMOT2N0b3B1cyBEZXBsb3kxJjAkBgkqhkiG9w0BCQEWF2hlbGxvQG9jdG9wdXNkZXBsb3kuY29tggkArnIUeafGtjEwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQAcEMAykQaazLd2ZewE7d+0PeIWv/YlZMIDeg5LF1/UtKMMCaaspN7rNA1lUPfjK/ofWh43s4R0JtjlbuEtZr+HKmOGzr+wbMCRIggbu2j3GEcC5i7zeoa85olokubwO1QDVZVaELWyXnDZl1UoJ9VyGsV5pEAE571XS9oTUyUssQ=="

function Encrypt-ForOctopusEyesOnly($secretMessage) {
    $certBytes = [System.Convert]::FromBase64String($octopusPublicKey)
    $x = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList @(,$certBytes)
    $publicKey = $x.PublicKey.Key
    $plainBytes = [System.Text.Encoding]::UTF8.GetBytes($secretMessage)
    $encryptedBytes = $publicKey.Encrypt($plainBytes, $false)
    $encryptedText = [System.Convert]::ToBase64String($encryptedBytes)
    return $encryptedText
}

$encryptedSecret = Encrypt-ForOctopusEyesOnly $yourSecret
Write-Host $encryptedSecret
```

# octopus account generic-oidc create

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-generic-oidc-create.md
Create a Generic OpenID Connect account in Octopus Deploy

```text
Usage:
  octopus account generic-oidc create [flags]

Aliases:
  create, new

Flags:
  -d, --description string                   A summary explaining the use of the account to other users.
      --audience string                      The audience claim for the federated credentials. Defaults to api://default
  -D, --description-file file                Read the description from file
  -e, --environment stringArray              The environments that are allowed to use this account
  -E, --execution-subject-keys stringArray   The subject keys used for a deployment or runbook
  -n, --name string                          A short, memorable, unique name for this account.

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**
Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus account generic-oidc create
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Prevent release progression

Source: https://octopus.com/docs/releases/prevent-release-progression.md

Sometimes you may need to block your project from deploying a specific release. This is done by preventing the progression of the release. Preventing progression can be useful if you need to temporarily block the release, or if you need to fix some variables before proceeding. This also allows you to keep releases without deleting them, so they are still available for auditing purposes.

These basic rules are applied when a release is prevented from progressing:

- If a phase has *no successful* deployments then *no* deployments to that phase can take place.
- If a phase has *only failed* deployments, then *no* deployments to that phase can take place.
- If a phase has *a successful* deployment, then deployments to any environment in that phase *can* take place.
- The first phase can always be deployed to, even if the release is blocked before any deployment has taken place.
- Optional phases are treated like any other phase, so the above rules apply even if the previous phase is complete.
- The above rules apply to each Tenant individually, with respect to the relevant phase they have reached.

Essentially, blocking a release is about blocking progression to yet-to-be-deployed phases, not about deploying to phases you have already started deploying to. This allows you to, for example, block deployments to the production phase due to a problem uncovered in UAT-1, while still deploying to UAT-2 for further analysis.

## Block deployment

You can block a release of a project from being used in any future deployments, no matter which phase the release is currently on. This can be done from the release page of the project you wish to block:

:::figure
![](/docs/img/releases/images/5865856.png)
:::

Select the option to **Prevent Progression**:

:::figure
![](/docs/img/releases/images/5865857.png)
:::

Provide a reason, so your team is aware and on the same page, and hit **Prevent Progression**:

:::figure
![](/docs/img/releases/images/5865858.png)
:::

## Resolve and unblock

When you're happy for the deployment process to continue, go back to the release page of the project, and select **Unblock**:

:::figure
![](/docs/img/releases/images/5865859.png)
:::

## Permissions

Two permissions are required to prevent progression and to unblock your deployments; assign them to the user performing the task:

- **DefectReport**: Allows a user to block a release from progressing to the next lifecycle phase.
- **DefectResolve**: Allows a user to unblock a release so it can progress to the next phase.
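The phase-progression rules above can be modeled as a small predicate. The sketch below is illustrative only (not Octopus's implementation); the data shapes are assumptions made for the example:

```python
def can_deploy(phase_index, phase_outcomes, blocked):
    """Decide whether a release may be deployed to a phase.

    phase_outcomes: one list of deployment outcomes per phase, e.g.
    [["success"], ["failed"], []] for a three-phase lifecycle.
    """
    if not blocked:
        return True
    # The first phase can always be deployed to, even before any deployment.
    if phase_index == 0:
        return True
    # A phase with at least one successful deployment stays deployable;
    # phases with no deployments, or only failed ones, are blocked.
    return "success" in phase_outcomes[phase_index]

# Production (index 2) is blocked, but UAT (index 1) already succeeded,
# so further UAT deployments can continue for analysis.
outcomes = [["success"], ["success", "failed"], []]
print(can_deploy(1, outcomes, blocked=True))   # True
print(can_deploy(2, outcomes, blocked=True))   # False
```

This mirrors the intent described above: blocking stops progression into phases you have not started, while phases already reached successfully remain deployable.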
:::div{.hint}
**What is a defect?**
When you block a release from being deployed, we actually use the Octopus API to create a "Defect" for that release, with the reason you provided for blocking future deployments from using that release.
:::

## Learn more

- [Managing roles and permissions](/docs/security/users-and-teams/user-roles)

# Capture and export a HAR file

Source: https://octopus.com/docs/support/how-to-capture-and-export-har-file.md

When something goes wrong in Octopus, we may ask you to provide a HAR file to help us diagnose the problem. This file captures a web browser's interaction with a site and may provide insight into where and why a specific request is failing.

:::div{.warning}
HAR files may contain sensitive data, as they include any pages you visited and associated cookies while recording. Depending on what has been recorded, this may allow someone with your HAR file to impersonate your accounts or access any other personal information submitted during recording.
:::

The following instructions are provided for each browser, starting from the Octopus web portal:

- [Chrome](#chrome)
- [Firefox](#firefox)
- [Safari](#safari)
- [Edge](#edge)

## Chrome {#chrome}

1. Press F12 to open developer tools.
2. Click the **Network** tab in the panel after it loads.
3. Click the **Record** button (round gray button) in the upper left of the tab; it should turn red to indicate the session is being recorded.
4. Check the **Preserve log** box.
5. Click the **Clear** button (circle with a slash through it) to remove any existing network recordings in the current session.
6. Follow the same steps you did initially to reproduce the issue or unexpected behavior.
7. Once you have completed the reproduction steps, click **Download**.
8. Save the file to your computer, selecting **Save as HAR with Content**.

## Firefox {#firefox}

1. Press F12 to open developer tools.
2. Click the **Network** tab in the panel after it loads.
3.
Click the **Clear** button (trash bin in upper left) to remove any undesired existing network recording in the current session. Recording should start automatically. 4. Follow the same steps you did initially to reproduce the issue or unexpected behavior. 5. Right-click anywhere under the **File** column and select **Save all as HAR**, saving the file to your computer. ## Safari {#safari} 1. Click on the **Develop** menu and select **Show Web Inspector**. 2. Click the **Network** tab. Recording should start automatically. 3. Follow the same steps you did initially to reproduce the issue or unexpected behavior. 4. Click **Export** on the far right of the Network tab, saving the file to your computer. ## Edge {#edge} 1. Press F12 to open developer tools. 2. Click the **Network** tab in the panel after it loads. Recording should start automatically. 3. Follow the same steps you did initially to reproduce the issue or unexpected behavior. 4. Click **Export HAR** (down arrow in upper right), saving the file to your computer. Once you have exported the HAR file, we'll provide you with a secure link to send it to us. # Delta compression for package transfers Source: https://octopus.com/docs/deployments/packages/delta-compression-for-package-transfers.md Octopus supports delta compression for package transfer using our [delta compression library](https://github.com/OctopusDeploy/Octodiff). Delta compression will speed up the package acquisition phase of your deployments, especially when the limiting factor is transfer bandwidth, or if the delta achieves significant size reduction. :::div{.info} Delta compression is not available when a package is downloaded directly on your machine(s). Delta compression is used by default when a package is uploaded from the Built-in repository to the remote machine(s). Delta compression is used by default when a package is downloaded from an external feed to the Octopus Server's package cache and then uploaded to the remote machine(s). 
:::

A typical scenario in Octopus Deploy is frequent deployments of small changes to packages. For example, you might push some code updates while all of your libraries are unchanged. In the past, Octopus Deploy would upload the entire package to each machine, regardless of how little had changed. With delta compression, only the changes to your package are uploaded to each machine.

## A package deployment in Octopus Deploy now looks something like this

1. Identify all versions of the package available on the target machine by calling [Calamari](https://octopus.com/blog/calamari).
2. Calamari then attempts to match these packages with packages available on the Octopus Server. If the PackageId, Version, and file hash are identical, Calamari creates a signature file for the package.
3. Build the delta file between the previous package and the package being transferred from the Octopus Server.
4. If the delta file meets the size criteria (see note below), the server will upload the delta file to the Tentacle and call Calamari to apply the delta file to the transferred package. N.B. If any of [these issues](#delta-gone-wrong) are experienced during the creation or application of the delta, then the entire package will be uploaded to the Tentacle.
5. Once the delta is applied, the signature file from step 2 will be compared with the final applied package on the target to determine if the change was successful. If the signatures don't match, Calamari will request the server to re-upload the entire package.

:::div{.info}
**Delta file size**
If the final size of the delta file is within 80% of the size of the new package, we upload the full package instead of uploading and applying the delta file. This saves server resources, because applying large delta files can be quite resource-intensive.
:::

## What if something goes wrong? {#delta-gone-wrong}

If any of the below occurs, the full package will be uploaded:

1. The signature file fails to create.
2.
The delta file fails to create.
3. Applying the delta fails.
4. The package details (size and file hash) don't match after applying the delta.

## Running a deployment that generates a delta file

When running a deployment that creates and applies a delta file, you will see the following in the logs under the `Acquire packages` section:

:::figure
![Package Logs](/docs/img/deployments/packages/images/package-logs.png)
:::

:::div{.hint}
**Delta progress logging**
As can be seen in the screenshot above, the progress logging for applying the delta can look as though it never completed, reporting only 20% or even 0%. Don't worry: this is due to a problem with how our delta compression library (Octodiff) reports progress (we will be fixing this logging issue), and the delta is in fact applied in full.
:::

## Optimizing delta compression

The best way to guarantee a significant size reduction is to use the tools provided by Octopus Deploy when creating your packages. All of our packaging tools are automatically tested to ensure the package contents are bundled in a delta-compression-friendly format. If you want to use your own tools, that is fine! Just be sure to test the performance of delta compression to make sure you have configured everything correctly.

:::div{.hint}
The most common mistake causing delta compression to yield minimal size reduction is when artificial differences are injected into the package file. One example is when timestamps are changed each time the package is built. The tools provided by Octopus Deploy are designed to yield high size reductions based on the actual content of your packaged files.
:::

## Turning delta compression off

To turn this feature off, create a project [variable](/docs/projects/variables) named **Octopus.Acquire.DeltaCompressionEnabled** with a value of **False**.
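The delta-size criterion and the fallback conditions described above can be sketched as a small decision function. This is illustrative pseudologic only, not the actual Calamari implementation; the parameter names are assumptions made for the example:

```python
def choose_transfer(delta_size, package_size, signature_created=True,
                    delta_created=True, delta_applies=True):
    """Return 'delta' or 'full' for a package transfer.

    Falls back to a full upload when signature or delta creation (or
    application) fails, or when the delta isn't small enough to be worth
    the CPU cost of applying it.
    """
    if not (signature_created and delta_created and delta_applies):
        return "full"
    # If the delta is within 80% of the new package's size, the full
    # package is uploaded instead, saving server resources.
    if delta_size >= 0.8 * package_size:
        return "full"
    return "delta"

print(choose_transfer(delta_size=5, package_size=100))   # delta
print(choose_transfer(delta_size=85, package_size=100))  # full
```

The design point is the trade-off named in the note above: applying a large delta costs more CPU than it saves in bandwidth, so near-package-sized deltas are discarded.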
**Delta calculations can be CPU intensive** You should consider disabling delta compression if package transfer bandwidth is not a limiting factor (all the machines are in the same network segment), or if the CPU on the Octopus Server is pegged at 100% during your deployments. **Are you really benefiting from delta compression?** The deployment logs will tell you the % saving delta compression is achieving. If you are constantly transferring 50% or more of the original package, perhaps delta compression is actually becoming a bottleneck and should be disabled. # octopus account generic-oidc list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-generic-oidc-list.md List Generic OpenID Connect accounts in Octopus Deploy ```text Usage: octopus account generic-oidc list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use, reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account generic-oidc list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Variable filters Source: https://octopus.com/docs/projects/variables/variable-filters.md By default, bindings are inserted into the output as-is; no consideration is given as to whether the target variable or file is XML, HTML, JSON etc. That is, the target file type is always treated as plain text. Octopus variable substitutions support *filters* to correctly encode values for a variety of target file types. These are invoked using the `|` (pipe) operator. 
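Conceptually, each `|` step feeds the current value into the next filter, like left-to-right function chaining. A minimal sketch of that idea in Python (the filter implementations are illustrative approximations, not Octopus's Octostache code):

```python
import html

# A few of the provided filters, approximated with Python equivalents.
FILTERS = {
    "ToUpper": str.upper,
    "ToLower": str.lower,
    "Trim": str.strip,
    "HtmlEscape": html.escape,
}

def evaluate(value, *filter_names):
    """Apply a chain of filters left to right, as `#{Var | F1 | F2}` would."""
    for name in filter_names:
        value = FILTERS[name](value)
    return value

print(evaluate("You & I", "HtmlEscape"))                        # You &amp; I
print(evaluate("  Automated Deployment  ", "Trim", "ToLower"))  # automated deployment
```

Because each filter's output is the next filter's input, the order of the chain matters, just as it does in the real template syntax.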
Given the variable: | Name | Value | Scope | | ------------- | --------- | ----- | | `ProjectName` | `You & I` | | And the template: ```html

#{ProjectName | HtmlEscape}

``` The result will be: ```html

You &amp; I

```

That is, the ampersand has been encoded correctly for use in an HTML document.

:::div{.problem}
The filters provided by Octopus are for use with trusted input; don't rely on them to sanitize data from potentially malicious sources.
:::

## Provided filters

Octopus provides a number of different types of filters for variable values:

- [Core filters](#core-filters)
- [Comparison filters](#comparison-filters)
- [Conversion filters](#conversion-filters)
- [Date filters](#date-filters)
- [Escaping filters](#escaping-filters)
- [Extraction filters](#extraction-filters)

## Core filters {#core-filters}

These core filters perform common string operations.

| Name | Purpose | Example input | Example output |
|---------------------------|--------------------------------------------|------------------------|------------------------|
| [`Format`](#format) | Applies a format | `4.3` | `$4.30` |
| [`Replace`](#replace) | Replaces a pattern | `1;2;3` | `1, 2, 3` |
| `ToLower` | Forces values to lowercase | `Automated Deployment` | `automated deployment` |
| `ToUpper` | Forces values to uppercase | `Automated Deployment` | `AUTOMATED DEPLOYMENT` |
| [`Trim`](#trim) | Removes whitespace from the start/end | `···Bar···` | `Bar` |
| [`Truncate`](#truncate) | Limits the length of values | `Octopus Deploy` | `Octopus...` |
| [`Substring`](#substring) | Extracts a range of characters by position | `Octopus Deploy` | `Deploy` |

### Format

The *Format* filter converts the input based on an additional argument that is passed to the *`.ToString()`* method.
| MyVar Value | Filter Expression | Output |
| ------------------------------- | ------------------------------------- | ----------------- |
| `4.3` | `#{MyVar \| Format C}` | $4.30 |
| `2030/05/22 09:05:00` | `#{MyVar \| Format yyyy}` | 2030 |
| | `#{ \| NowDate \| Format Date MMM}` | Nov |
| `#{Octopus.Deployment.Created}` | `#{MyVar \| Format "MM/dd/yyyy"}` | `01/01/2020` |
| `#{Octopus.Deployment.Created}` | `#{MyVar \| Format "hh:mm:ss tt zz"}` | `11:09:38 AM +01` |

### Replace

The *Replace* filter performs a regular expression replace function on the variable. The regular expression should be provided in the [.NET format](https://docs.microsoft.com/en-us/dotnet/standard/base-types/regular-expression-language-quick-reference).

Double quotes need to be used around any expressions that contain whitespace or special characters. Expressions containing double quotes cannot be expressed inline, but can be done via nested variables. If both the search and replace expressions are variables, ensure there is no space between the expressions.

When using Replace on special characters, escape the first parameter (the regular expression); the second parameter can be left as a plain string - see the last example below.

| MyVar Value | Filter Expression | Output |
| ----------- | ----------------------------------------- | ------------------------------------------ |
| `abc` | `#{MyVar \| Replace b}` | `ac` |
| `abc` | `#{MyVar \| Replace b X}` | `aXc` |
| `a b c` | `#{MyVar \| Replace "a b" X}` | `X c` |
| `ab12c3` | `#{MyVar \| Replace "[0-9]+" X}` | `abXcX` |
| `abc` | `#{MyVar \| Replace "(.)b(.)" "$2X$1" }` | `cXa` |
| `abc` | `#{MyVar \| Replace #{match} #{replace}}` | `a_c` (when `match`=`b` and `replace`=`_`) |
| `abc` | `#{MyVar \| Replace #{match} _}` | `a_c` (when `match`=`b`) |
| `a\b` | `#{MyVar \| Replace "\\" "\\\\"}` | `a\\b` |

### Substring

The *Substring* filter extracts a range of characters from the input and outputs them.
If two arguments are supplied, they are interpreted as the start index and the length of the range to extract. If only one argument is supplied, it is interpreted as the length of a range starting at position 0.

| MyVar Value | Filter Expression | Output |
| ---------------- | --------------------------- | --------- |
| `Octopus Deploy` | `#{MyVar \| Substring 8 6}` | `Deploy` |
| `Octopus Deploy` | `#{MyVar \| Substring 7}` | `Octopus` |
| `Octopus Deploy` | `#{MyVar \| Substring 2 3}` | `top` |

### Trim

The *Trim* filter removes any whitespace from the ends of the input. Both ends are trimmed unless an optional argument of `start` or `end` is provided.

| MyVar Value | Filter Expression | Output |
| ----------- | ------------------------ | -------- |
| `···Bar···` | `#{MyVar \| Trim}` | `Bar` |
| `···Bar···` | `#{MyVar \| Trim start}` | `Bar···` |
| `···Bar···` | `#{MyVar \| Trim end}` | `···Bar` |

### Truncate

The *Truncate* filter limits the length of the input. If the input is longer than the length specified by the argument, the rest is replaced with an ellipsis.

| MyVar Value | Filter Expression | Output |
| ---------------- | ------------------------ | ------------ |
| `Octopus Deploy` | `#{MyVar \| Truncate 7}` | `Octopus...` |
| `abc` | `#{MyVar \| Truncate 7}` | `abc` |

## Comparison filters {#comparison-filters}

These filters return `true` or `false` depending on the result of a comparison. They are typically useful for specifying the condition in an `#{if}` block.
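For instance, a comparison filter can drive a conditional block directly. A small illustrative template fragment (the variable name `MyUrl` is hypothetical, and the exact condition syntax should be checked against your Octopus version):

```text
#{if MyUrl | StartsWith https}
The endpoint uses TLS.
#{/if}
```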
| Name | Purpose | Example input | Example output |
|---------------------------------------------------|-------------------------------------------------------------------------|------------------|----------------|
| [`Contains`](#startswith-endswith-and-contains) | Determines whether a string contains a given string | `Octopus Dep` | `true` |
| [`EndsWith`](#startswith-endswith-and-contains) | Determines whether the end of a string matches a given string | `Deploy` | `true` |
| [`Match`](#match) | Determines whether a string contains a given regular expression pattern | `"Octo.*Deploy"` | `true` |
| [`StartsWith`](#startswith-endswith-and-contains) | Determines whether the beginning of a string matches a given string | `Octo` | `true` |

### Match

The *Match* filter searches the input for an occurrence of a given regular expression pattern. It returns `true` if an occurrence is found, and `false` otherwise.

The regular expression should be provided in the [.NET format](https://docs.microsoft.com/en-us/dotnet/standard/base-types/regular-expression-language-quick-reference). Double quotes need to be used around any expressions that contain whitespace or special characters. Expressions containing double quotes cannot be expressed inline, but can be done via nested variables.

| MyVar Value | Filter Expression | Output |
| ----------- | ------------------------------ | ----------------------------- |
| `abc` | `#{MyVar \| Match abc}` | `true` |
| `abc` | `#{MyVar \| Match def}` | `false` |
| `a b c` | `#{MyVar \| Match "a b"}` | `true` |
| `ab12c3` | `#{MyVar \| Match "ab[0-9]+"}` | `true` |
| `abc` | `#{MyVar \| Match #{pattern}}` | `true` (when `pattern`=`abc`) |

### StartsWith, EndsWith and Contains

The *StartsWith*, *EndsWith* and *Contains* filters compare the input to a given string argument. They return `true` if the argument matches, and `false` otherwise. The comparison is case-sensitive.
Strings are compared as [Ordinals](https://docs.microsoft.com/en-us/dotnet/api/system.stringcomparison). Double quotes need to be used around any expressions that contain whitespace or special characters. Expressions containing double quotes cannot be expressed inline, but can be done via nested variables.

| MyVar Value | Filter Expression | Output |
| ----------- | ----------------------------- | ------------------------- |
| `abc` | `#{MyVar \| StartsWith ab}` | `true` |
| `abc` | `#{MyVar \| StartsWith bc}` | `false` |
| `abc` | `#{MyVar \| StartsWith Ab}` | `false` |
| `abc` | `#{MyVar \| EndsWith bc}` | `true` |
| `abc` | `#{MyVar \| EndsWith ab}` | `false` |
| `abc` | `#{MyVar \| EndsWith bC}` | `false` |
| `abc` | `#{MyVar \| Contains bc}` | `true` |
| `abc` | `#{MyVar \| Contains ab}` | `true` |
| `abc` | `#{MyVar \| Contains AbC}` | `false` |
| `a b(c` | `#{MyVar \| Contains " b("}` | `true` |
| `a"b"c` | `#{MyVar \| Contains #{str}}` | `true` (when `str`=`"b"`) |

## Conversion filters {#conversion-filters}

These filters provide a mechanism to convert a value from one form to another.

| Name | Purpose | Example input | Example output |
|------------------|----------------------------------------------------|------------------|------------------------------|
| `FromBase64` | Converts values from Base64 (using UTF-8 encoding) | `QmF6` | `Baz` |
| `ToBase64` | Converts values to Base64 (using UTF-8 encoding) | `Baz` | `QmF6` |
| `MarkdownToHTML` | Converts Markdown to HTML | `This \_rocks\_` | `<p>This <em>rocks</em></p>` |

## Date filters {#date-filters}

These filters are used to work with dates.

| Name | Purpose | Example input | Example output |
|-----------------------------------------|---------------------------------|---------------|--------------------------------|
| [`NowDate`](#nowdate-and-nowdateutc) | Outputs the current date | | `2016-11-03T08:53:11.0946448` |
| [`NowDateUtc`](#nowdate-and-nowdateutc) | Outputs the current date in UTC | | `2016-11-02T23:01:46.9441479Z` |

### NowDate and NowDateUtc

The *NowDate* and *NowDateUtc* filters take no variable input, but can take an additional optional right-hand side argument to define the string format (defaults to the ISO-8601 [round-trip format](https://msdn.microsoft.com/en-us/library/az4se3k1#Roundtrip)).

| MyFormat Variable | Filter Expression | Output |
| ----------------- | --------------------------------- | ------------------------------ |
| | `#{ \| NowDate }` | `2016-11-03T08:53:11.0946448` |
| | `#{ \| NowDateUtc}` | `2016-11-02T23:01:46.9441479Z` |
| | `#{ \| NowDate "HH dd-MMM-yyyy"}` | `09 03-Nov-2016` |
| | `#{ \| NowDateUtc zz}` | `+00` |
| dd-MM-yyyy | `#{ \| NowDate #{MyFormat}}` | `03-Nov-2016` |

## Escaping filters {#escaping-filters}

These filters apply format-specific escaping rules.
| Name | Purpose | Example input | Example output |
|------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|--------------------|-----------------------|
| `HtmlEscape` | Escapes entities for use in HTML content | `1 < 2` | `1 &lt; 2` |
| `JsonEscape` | Escapes data for use in JSON strings | `He said "Hello!"` | `He said \"Hello!\"` |
| `PropertiesKeyEscape` | Escapes data for use in .properties keys | `Hey: x=y` | `Hey\:\ x\=y` |
| `PropertiesValueEscape` | Escapes data for use in .properties values | `a\b=c` | `a\\b=c` |
| [`UriEscape`](https://docs.microsoft.com/en-us/dotnet/api/system.uri.escapeuristring?view=netframework-4.0) | Escapes a URI string | `A b:c+d/e` | `A%20b:c+d/e` |
| [`UriDataEscape`](https://docs.microsoft.com/en-us/dotnet/api/system.uri.escapedatastring?view=netframework-4.0) | Escapes a URI data string | `A b:c+d/e` | `A%20b%3Ac%2Bd%2Fe` |
| `XmlEscape` | Escapes entities for use in XML content | `1 < 2` | `1 &lt; 2` |
| `YamlDoubleQuoteEscape` | Escapes data for use in YAML double quoted strings | `"Hello"\Goodbye` | `\"Hello\"\\Goodbye` |
| `YamlSingleQuoteEscape` | Escapes data for use in YAML single quoted strings | `The bee's knees` | `The bee''s knees` |

## Extraction filters {#extraction-filters}

These filters extract part of a value.
| Name | Purpose | Example input | Example output | |-----------------------------------------------|----------------------------------------------------------------------|--------------------------------|----------------| | [`UriPart`](#uripart) | Extracts a specified part of a URI string | `https://octopus.com/docs` | `/docs` | | `VersionMajor` | Extracts the major version field from a version string | `1.2.3.4-my-branch.1.2+build10` | `1` | | `VersionMinor` | Extracts the minor version field from a version string | `1.2.3.4-my-branch.1.2+build10` | `2` | | `VersionPatch` | Extracts the patch version field from a version string | `1.2.3.4-my-branch.1.2+build10` | `3` | | `VersionRevision` | Extracts the revision version field from a version string | `1.2.3.4-my-branch.1.2+build10` | `4` | | `VersionPreRelease` | Extracts the prerelease field from a version string | `1.2.3.4-my-branch.1.2+build10` | `my-branch.1.2` | | `VersionPreReleasePrefix` | Extracts the prefix from the prerelease field from a version string | `1.2.3.4-my-branch.1.2+build10` | `my-branch` | | `VersionPreReleaseCounter` | Extracts the counter from the prerelease field from a version string | `1.2.3.4-my-branch.1.2+build10` | `1.2` | | `VersionMetadata` | Extracts the metadata field from a version string | `1.2.3.4-my-branch.1.2+build10` | `build10` | ### UriPart The *UriPart* filter parses the input as a URI and extracts a specified part of it. A helpful error will be written to the output if there is an error in the input or the filter expression. 
| MyVar Value | Filter Expression | Output | | --------------------------------------- | ------------------------------------- | -------------------------- | | `https://octopus.com/docs` | `#{MyVar \| UriPart AbsolutePath}` | `/docs` | | `https://octopus.com/docs` | `#{MyVar \| UriPart AbsoluteUri}` | `https://octopus.com/docs` | | `https://octopus.com/docs` | `#{MyVar \| UriPart Authority}` | `octopus.com` | | `https://octopus.com/docs` | `#{MyVar \| UriPart DnsSafeHost}` | `octopus.com` | | `https://octopus.com/docs#filters` | `#{MyVar \| UriPart Fragment}` | `#filters` | | `https://octopus.com/docs` | `#{MyVar \| UriPart Host}` | `octopus.com` | | `https://octopus.com/docs` | `#{MyVar \| UriPart HostAndPort}` | `octopus.com:443` | | `https://octopus.com/docs` | `#{MyVar \| UriPart HostNameType}` | `Dns` | | `https://octopus.com/docs` | `#{MyVar \| UriPart IsAbsoluteUri}` | `true` | | `https://octopus.com/docs` | `#{MyVar \| UriPart IsDefaultPort}` | `true` | | `https://octopus.com/docs` | `#{MyVar \| UriPart IsFile}` | `false` | | `https://octopus.com/docs` | `#{MyVar \| UriPart IsLoopback}` | `false` | | `https://octopus.com/docs` | `#{MyVar \| UriPart IsUnc}` | `false` | | `https://octopus.com/docs` | `#{MyVar \| UriPart Path}` | `/docs` | | `https://octopus.com/docs?filter=faq` | `#{MyVar \| UriPart PathAndQuery}` | `/docs?filter=faq` | | `https://octopus.com/docs` | `#{MyVar \| UriPart Port}` | `443` | | `https://octopus.com/docs?filter=faq` | `#{MyVar \| UriPart Query}` | `?filter=faq` | | `https://octopus.com/docs` | `#{MyVar \| UriPart Scheme}` | `https` | | `https://octopus.com/docs` | `#{MyVar \| UriPart SchemeAndServer}` | `https://octopus.com` | | `https://username:password@octopus.com` | `#{MyVar \| UriPart UserInfo}` | `username:password` | ## Differences from regular variable bindings {#differences-from-regular-bindings} Because of the flexibility provided by the extended syntax, variables that are not defined will result in the source text, e.g. 
`#{UndefinedVar}` being echoed rather than an empty string, so that evaluation problems are easier to spot and debug. The `if` construct can be used to selectively bind to a variable only when it is defined, e.g. to obtain identical "empty" variable functionality as shown in the first example: ```powershell Server=#{if DatabaseServer}#{DatabaseServer}#{/if}; ``` ## JSON parsing {#json-parsing} Octostache 2.x includes an update to support parsing JSON formatted variables natively, and using their contained properties for variable substitution. Given the variable: | Name | Value | Scope | | --------------------------- | ----------------------------------------------------------------------------------------------------------------------- | ----- | | `Custom.MyJson` | `{Name: "t-shirt", Description: "I am a shirt", Sizes: [{size: "small", price: 15.00}, {size: "large", price: 20.00}]}` | | | `Custom.MyJson.Description` | `Shirts are not shorts.` | | And the template: ```html

#{Custom.MyJson[Name]}

#{Custom.MyJson.Name} - #{Custom.MyJson.Description} From: #{Custom.MyJson.Sizes[0].price | Format C} Sizes: #{Custom.MyJson.Sizes} ``` The result will be: ```powershell

t-shirt

t-shirt - Shirts are not shorts From: $15.00 Sizes: [{size: "small", price: 15.00}, {size: "large", price: 20.00}] ``` There are a few things to note here: - The *Name* property is extracted from the JSON using either dot-notation or indexing. - Providing an explicit project variable overrides one obtained by walking through the JSON. - Arrays can be accessed using standard numerical index notation. - Variables can map to a sub-section of the JSON variable. ### Repetition over json {#repetition-over-json} Given the variables: | Name | Value | | --------- | ------------------------------------------------------------------------------------ | | MyNumbers | `[5,2,4]` | | MyObjects | `{Cat: {Price: 11.5, Description: "Meow"}, Dog: {Price: 17.5, Description: "Woof"}}` | And the template: ```yaml Numbers: #{each number in MyNumbers} - #{number} #{/each} Objects: #{each item in MyObjects} #{item.Key}: #{item.Value.Price} #{/each} ``` The resulting text will be: ```yaml Numbers: - 5 - 2 - 4 Objects: Cat: 11.5 Dog: 17.5 ``` ## Older versions * Comparison filters are available from Octopus Deploy **2021.2** onwards. * `VersionMajor`, `VersionMinor`, `VersionPatch`, `VersionRevision`, `VersionPreRelease`, `VersionPreReleasePrefix`, `VersionPreReleaseCounter` and `VersionMetadata` extraction filters are available from Octopus Deploy **2020.5** onwards. * `PropertiesKeyEscape`, `PropertiesValueEscape`, `YamlDoubleQuoteEscape` and `YamlSingleQuoteEscape` escape filters are available from Octopus Deploy **2020.4** onwards. ## Learn more - [Variable blog posts](https://octopus.com/blog/tag/variable/1) # Deleting releases Source: https://octopus.com/docs/releases/deleting-releases.md Sometimes you may want to delete releases of your project. Maybe they're defective and you don't want them possibly deployed, or you just want to clean up old releases. This page outlines the methods to permanently delete these releases in Octopus. 
Deleting individual releases can be done by entering the release page and selecting the `Delete` option in the ... overflow menu. :::figure ![Delete release from release page](/docs/img/releases/images/delete-release-from-release-page.png) ::: You can also delete a batch of releases by specifying a release version range in the Octopus CLI. An example can be found in our [Octopus CLI documentation](/docs/octopus-rest-api/octopus-cli/delete-releases). Consider automating data cleanup by configuring [retention policies](/docs/administration/retention-policies). # octopus account list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-list.md List accounts in Octopus Deploy ```text Usage: octopus account list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Changing the collation of the Octopus database Source: https://octopus.com/docs/administration/data/changing-octopus-database-collation.md By default, the Octopus database is created using `Latin1_General_CI_AS` collation. You can change the collation or [create a database](/docs/installation/sql-server-database) with a different collation. :::div{.warning} You must use a case-insensitive collation (with a name containing '\_CI\_'). ::: Changing the collation must be done with care.
Changing the collation of a SQL Server database does **not** change the collation of existing user-created objects within it. You must also change the collation of all objects in the Octopus database. Otherwise, errors can occur when modifying the database during Octopus version upgrades. New objects created will use the updated collation. When attempting to (for example) perform SQL joins between these and existing objects using the original collation, collation mismatch errors may occur. For this reason, when modifying the SQL Server database during Octopus upgrades, Octopus will verify that all columns use the same collation as the database itself. If they do not, an error will be logged, and the upgrade will be blocked. This is to ensure you can roll back or correct the issue and continue, without the database being left in an invalid state. ## Errors during Octopus Server upgrades {#errors-during-upgrades} *Database update prevented: One or more columns in the database are not using the default collation* If you have received the error above while upgrading your Octopus Server, it is likely that at some point the collation on your Octopus database was changed without changing the collation of the existing objects. :::div{.success} If you have received the error above, your database has not been modified, and you can safely revert by re-installing your previous version of Octopus Server.
::: You can run the following SQL against your Octopus database to identify any columns that don't use the database's default collation: **Identify columns with non-default collation** ```sql DECLARE @DatabaseCollation VARCHAR(100) SELECT @DatabaseCollation = collation_name FROM sys.databases WHERE database_id = DB_ID() SELECT @DatabaseCollation 'Default database collation' SELECT t.Name 'Table Name', c.name 'Col Name', ty.name 'Type Name', c.collation_name FROM sys.columns c INNER JOIN sys.tables t ON c.object_id = t.object_id INNER JOIN sys.types ty ON c.system_type_id = ty.system_type_id     WHERE t.is_ms_shipped = 0   AND c.collation_name <> @DatabaseCollation ``` Script taken from [Stack Overflow](http://stackoverflow.com/a/8488567/249431). To resolve the issue, either alter the columns reported by the script above to match the database's collation or alter the database's collation to match the existing columns (assuming all columns are listed). Some of the issues in changing the collation of an entire database are discussed in [this Server Fault question](http://serverfault.com/questions/19577/how-do-i-change-the-collation-of-a-sql-server-database). # Applying Operating System upgrades Source: https://octopus.com/docs/administration/managing-infrastructure/applying-operating-system-upgrades.md You should schedule regular maintenance of the Operating System hosting your Octopus Server to maintain the integrity, performance, and security of your deployments. :::div{.hint} **Recovering from failure** If anything goes wrong during your Operating System maintenance you should restore the Operating System to the state just prior to applying patches and then start Octopus Server again. You should not restore a backup of the Octopus SQL Database. ::: ## Single Octopus Server 1. Schedule a maintenance window with the teams using Octopus. 1. 
Go to **Configuration ➜ Maintenance** and enable [Maintenance Mode](/docs/administration/managing-infrastructure/maintenance-mode). 1. Wait for any remaining Octopus Tasks to complete by watching the **Configuration ➜ Nodes** page. 1. Stop the Octopus Server service. - At this point you are ready to perform your Operating System maintenance. You should take whatever precautions you deem necessary in order to recover the system in case of failure. That could be a VM snapshot, full disk image backup, or just the automatic Restore Point created by Windows. 1. Apply patches and reboot as required. 1. Start the Octopus Server service. 1. Exit [Maintenance Mode](/docs/administration/managing-infrastructure/maintenance-mode). ## Octopus High Availability If you are using an [Octopus High Availability](/docs/administration/high-availability) cluster you don't need to plan any downtime. Instead, you can just drain the tasks from each node and apply Operating System patches one at a time, while the other nodes continue to orchestrate your deployments. For each node in your cluster: 1. Go to **Configuration ➜ Nodes** and put the node into drain mode. 1. Wait for any remaining Octopus Tasks on that node to complete. 1. Stop the Octopus Server service on that node. - At this point you are ready to perform your Operating System maintenance. You should take whatever precautions you deem necessary in order to recover the system in case of failure. That could be a VM snapshot, full disk image backup, or just the automatic Restore Point created by Windows. 1. Apply patches and reboot as required. 1. Start the Octopus Server service on that node. 1. Exit drain mode for that node so it will start accepting Octopus Tasks again. 
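The per-node rollout above can be sketched as a simple loop. This is an illustrative Python sketch only, not the Octopus API or CLI; the `drain`, `stop`, `patch`, etc. actions are hypothetical stand-ins for the manual steps listed above:

```python
# Illustrative sketch of the rolling OS-upgrade order for an HA cluster.
# The action callables are hypothetical stand-ins for the manual steps
# (drain via the UI, stopping the service, patching the OS, and so on).
log = []

def record(step):
    # Stand-in action: just records which step ran against which node.
    def action(node):
        log.append((step, node))
    return action

STEPS = ["drain", "wait_for_tasks", "stop", "patch", "start", "undrain"]
actions = {step: record(step) for step in STEPS}

def rolling_os_upgrade(nodes, actions):
    """One node at a time: drain, wait for tasks, stop, patch, start, undrain."""
    for node in nodes:
        actions["drain"](node)           # put the node into drain mode
        actions["wait_for_tasks"](node)  # wait for remaining tasks to complete
        actions["stop"](node)            # stop the Octopus Server service
        actions["patch"](node)           # apply patches and reboot as required
        actions["start"](node)           # start the service again
        actions["undrain"](node)         # accept tasks again

rolling_os_upgrade(["node-1", "node-2"], actions)
print(log[0], log[-1])  # ('drain', 'node-1') ('undrain', 'node-2')
```

Because only one node is out of the rotation at any time, the remaining nodes keep orchestrating deployments throughout the upgrade.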
# Subscription webhook notifications Source: https://octopus.com/docs/administration/managing-infrastructure/subscriptions/webhook-slack.md You can configure Octopus to send messages to a [Slack](https://slack.com/) Workspace with the following process: - Configure an Octopus Deploy subscription to send a webhook. - Configure a Slack App. - Configure a tool to consume the webhook from Octopus and forward a message on to Slack. :::div{.hint} A number of technologies can be used to consume the webhook from Octopus. This document uses an [Azure Function App](https://docs.microsoft.com/en-us/azure/azure-functions/). An alternative is to use Firebase Cloud Functions, which is described in this [blog](https://octopus.com/blog/notifications-with-subscriptions-and-webhooks). ::: ## Configure an Octopus subscription to send a webhook Configure a subscription in Octopus to send any events that occur against the `User`, `User Role`, and `Scoped User Role` documents: :::figure ![Copy webhook URL](/docs/img/administration/managing-infrastructure/subscriptions/images/subscriptions-user-webhook-2.png) ::: As a starting point, the Payload URL is set to a value on [RequestBin](https://requestbin.com/), which provides access to the JSON being sent by Octopus before the function is built. ## Configure your Slack app A Slack app must be configured to enable a message to be sent through to Slack. 1. In Slack, go to [**Your Apps**](https://api.slack.com/apps) and click **Create New App**. 2. Enter a useful **App Name**, select the relevant Development Slack Workspace, and click **Create App**: ![Create a Slack App](/docs/img/administration/managing-infrastructure/subscriptions/images/slack-add-app-1.png) 3. Select **Incoming Webhooks** from the **Add features and functionality** section: ![Select Incoming Webhooks](/docs/img/administration/managing-infrastructure/subscriptions/images/slack-add-app-2.png) 4.
Click **Add New Webhook to Workspace**: ![Add New Webhook to Workspace](/docs/img/administration/managing-infrastructure/subscriptions/images/slack-add-app-3.png) 5. Select the channel to post the messages to: ![Select the channel](/docs/img/administration/managing-infrastructure/subscriptions/images/slack-add-app-4.png) 6. Copy the webhook URL; this is the value for the `SLACK_URI_APIKEY` environment variable on the Azure Function App: ![Copy webhook URL](/docs/img/administration/managing-infrastructure/subscriptions/images/slack-add-app-5.png) ## Create an Azure Function App ### Create the Function App in Azure The Function App can be created via the [Azure Portal](https://portal.azure.com), an [ARM Template](https://azure.microsoft.com/en-gb/resources/templates/), or the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest). To use the Azure CLI, create the Resource Group to contain the Function App: ```bash az group create -l westeurope -n OctopusFunctions ``` Create the storage account for the Function App to use: ```bash az storage account create -n octofuncstorage -g OctopusFunctions --sku Standard_LRS -l westeurope ``` Now create the Function App itself: ```bash az functionapp create -g OctopusFunctions -n SubscriptionHandler -s octofuncstorage --functions-version 3 --runtime dotnet --consumption-plan-location westeurope ``` ### Write the Function App code :::div{.hint} The code for this can be found in the [samples repo](https://oc.to/SamplesSubscriptionsRepo). ::: The Octopus subscription has been created and set up to send data to RequestBin.
Here's the resulting JSON payload following creation of a new Service Account user: ```json { "Timestamp": "2020-04-16T15:19:42.5410789+00:00", "EventType": "SubscriptionPayload", "Payload": { "ServerUri": "https://samples.octopus.app", "ServerAuditUri": "https://samples.octopus.app/#/configuration/audit?triggerGroups=Document&documentTypes=Users&documentTypes=UserRoles&documentTypes=ScopedUserRoles&from=2020-04-16T15%3a19%3a11.%2b00%3a00&to=2020-04-16T15%3a19%3a42.%2b00%3a00", "BatchProcessingDate": "2020-04-16T15:19:42.3149941+00:00", "Subscription": { "Id": "Subscriptions-21", "Name": "User and User Role Change Alert", "Type": "Event", "IsDisabled": false, "EventNotificationSubscription": { "Filter": { "Users": [], "Projects": [], "ProjectGroups": [], "Environments": [], "EventGroups": [ "Document" ], "EventCategories": [], "EventAgents": [], "Tenants": [], "Tags": [], "DocumentTypes": [ "Users", "UserRoles", "ScopedUserRoles" ] }, "EmailTeams": [], "EmailFrequencyPeriod": "01:00:00", "EmailPriority": "Normal", "EmailDigestLastProcessed": null, "EmailDigestLastProcessedEventAutoId": null, "EmailShowDatesInTimeZoneId": "UTC", "WebhookURI": "https://xxxxxxxx.x.pipedream.net", "WebhookTeams": [], "WebhookTimeout": "00:00:10", "WebhookHeaderKey": null, "WebhookHeaderValue": null, "WebhookLastProcessed": "2020-04-16T15:19:11.3637275+00:00", "WebhookLastProcessedEventAutoId": 53624 }, "SpaceId": "Spaces-142", "Links": { "Self": {} } }, "Event": { "Id": "Events-55136", "RelatedDocumentIds": [ "Users-322" ], "Category": "Created", "UserId": "Users-27", "Username": "xxxxxxxx@octopus.com", "IsService": false, "IdentityEstablishedWith": "Session cookie", "UserAgent": "OctopusClient-js/2020.2.2", "Occurred": "2020-04-16T15:19:14.9925786+00:00", "Message": "User SubTest Service Account has been created ", "MessageHtml": "User SubTest Service Account has been created ", "MessageReferences": [ { "ReferencedDocumentId": "Users-322", "StartIndex": 5, "Length": 23 } ], 
"Comments": null, "Details": null, "SpaceId": null, "Links": { "Self": {} } }, "BatchId": "b2bdb09f-872a-4bc1-8272-ab2334de3660", "TotalEventsInBatch": 2, "EventNumberInBatch": 1 } } ``` The items that will be used in the Slack message for this example are: - Payload.Event.Message - Payload.Event.SpaceId - Payload.Event.Username - Payload.ServerUri :::div{.hint} If you're using [VS Code](https://docs.microsoft.com/en-us/azure/azure-functions/functions-develop-vs-code?tabs=csharp) to write the code, you need to install the [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools) to enable debugging for your function. ::: To use these items easily, create a class `OctoMessage` to hold them: ```c# using System; using Newtonsoft.Json; namespace Octopus { [JsonConverter(typeof(JsonPathConverter))] public class OctoMessage { [JsonProperty("Payload.Event.Message")] public string Message {get;set;} [JsonProperty("Payload.Event.SpaceId")] public string SpaceId {get;set;} [JsonProperty("Payload.Event.Username")] public string Username {get;set;} [JsonProperty("Payload.ServerUri")] public string ServerUri{get;set;} public string GetSpaceUrl(){ return string.Format("{0}/app#/{1}",ServerUri,SpaceId); } } } ``` Add a class `SlackClient` to take the message data and send it through to Slack: ```c# public class SlackClient { private readonly Uri _uri; private readonly Encoding _encoding = new UTF8Encoding(); public SlackClient(string slackUrlWithAccessToken) { _uri = new Uri(slackUrlWithAccessToken); } public string PostMessage(string text) { Payload payload = new Payload() { Text = text }; return PostMessage(payload); } public string PostMessage(Payload payload) { string payloadJson = JsonConvert.SerializeObject(payload); using (WebClient client = new WebClient()) { var data = new NameValueCollection(); data["payload"] = payloadJson; var response = client.UploadValues(_uri, "POST", data); return _encoding.GetString(response); } } } ``` The main class of 
the function `OctopusSlackHttpTrigger` will receive the HTTP message, take the request message body, and deserialize it into an `OctoMessage` object. Next, it will build the message text and send it through to Slack using a `SlackClient` object: ```c# public static class OctopusSlackHttpTrigger { [FunctionName("OctopusSlackHttpTrigger")] public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req, ILogger log) { var client = new SlackClient(Environment.GetEnvironmentVariable("SLACK_URI_APIKEY")); var data = await new StreamReader(req.Body).ReadToEndAsync(); var octoMessage = JsonConvert.DeserializeObject<OctoMessage>(data); var slackMessage = string.Format( "{0} (by {1}) - <{2}|Go to Octopus>", octoMessage.Message, octoMessage.Username, octoMessage.GetSpaceUrl()); try { var responseText = client.PostMessage(text: slackMessage); return new OkObjectResult(responseText); } catch (System.Exception ex) { log.LogError(ex.Message); return new BadRequestObjectResult(ex.Message); } } } ``` ### Test the Azure App Function Before pushing this to Azure it can be tested locally. The `Run` method uses the environment variable `SLACK_URI_APIKEY`; this is the value copied when the Slack app was configured. In order to use this value when debugging locally, add the value to the `local.settings.json` file: ```json { "IsEncrypted": false, "Values": { "AzureWebJobsStorage": "", "FUNCTIONS_WORKER_RUNTIME": "dotnet", "SLACK_URI_APIKEY":"https://hooks.slack.com/services/XXXXXXXX/XXXXXX/XXXXXXXXXXXXXXX" } } ``` Hit F5 to compile and run the app; a URL will be output to the terminal to which a `POST` test request can be sent: :::figure ![Debug URL](/docs/img/administration/managing-infrastructure/subscriptions/images/azure-function-debug-1.png) ::: [Postman](https://www.postman.com/) can send a test request, passing in a test JSON payload.
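As a quick local sanity check, the dotted-path lookups the function relies on (`Payload.Event.Message`, `Payload.ServerUri`, and so on) can be mimicked in a few lines of Python. The cut-down payload and `get_path` helper below are illustrative stand-ins, not part of the sample code:

```python
import json

def get_path(doc, path):
    """Walk a dotted path like 'Payload.Event.Message' through nested dicts,
    mirroring the JSONPath-style property mapping used by OctoMessage."""
    for key in path.split("."):
        doc = doc[key]
    return doc

# Minimal stand-in for the subscription payload shown earlier (values are made up).
payload = json.loads("""{
  "Payload": {
    "ServerUri": "https://samples.octopus.app",
    "Event": {
      "Message": "User SubTest Service Account has been created",
      "SpaceId": "Spaces-142",
      "Username": "admin@octopus.com"
    }
  }
}""")

message = get_path(payload, "Payload.Event.Message")
# Same shape as OctoMessage.GetSpaceUrl(): "{ServerUri}/app#/{SpaceId}"
space_url = "{0}/app#/{1}".format(get_path(payload, "Payload.ServerUri"),
                                  get_path(payload, "Payload.Event.SpaceId"))
print(message)    # User SubTest Service Account has been created
print(space_url)  # https://samples.octopus.app/app#/Spaces-142
```

If the four fields resolve correctly against your own test payload, the same payload should deserialize cleanly in the function.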
:::figure ![Postman](/docs/img/administration/managing-infrastructure/subscriptions/images/azure-function-debug-postman.png) ::: If this is configured correctly, it will return `200 OK`, and the message will appear in Slack! :::figure ![Slack message](/docs/img/administration/managing-infrastructure/subscriptions/images/slack-message.png) ::: ### Build the Azure App Function This example uses [GitHub Actions](https://github.com/features/actions) to build the function code, package it, and push it to Octopus, which deploys it to Azure. :::figure ![Build output](/docs/img/administration/managing-infrastructure/subscriptions/images/github-action-build-output.png) ::: The build YAML can be found in `.github/workflows/AzureSlackFunction.yaml` in the [samples repo](https://oc.to/SamplesSubscriptionsRepo). ### Deploy the Azure App Function The Azure Function App created here is deployed with Octopus, using a deployment target type of Azure web app. For more information, see [deploying a package to an Azure web app](/docs/deployments/azure/deploying-a-package-to-an-azure-web-app). A [project](/docs/projects) has been configured to deploy the Function App. :::figure ![Octopus Project](/docs/img/administration/managing-infrastructure/subscriptions/images/octopus-azure-function-project.png) ::: The project has two steps: 1. Deploy the Azure Function App. 2. Set the environment variable for `SLACK_URI_APIKEY`. The project can be viewed in the `AzFuncNotifySlack` project on our Octopus [samples instance](https://oc.to/OctopusAdminSamplesSpace). ## Test the subscription If an Octopus user is changed, the change is shown in the audit trail. :::figure ![Octopus Project](/docs/img/administration/managing-infrastructure/subscriptions/images/user-change-audit-entry.png) ::: And the message is sent from the subscription webhook to the Azure Function App in Azure and then on to Slack.
:::figure ![Slack message](/docs/img/administration/managing-infrastructure/subscriptions/images/slack-message-final.png) ::: # Partition Octopus with Spaces Source: https://octopus.com/docs/best-practices/octopus-administration/partition-octopus-with-spaces.md **Octopus Deploy 2019.1** introduced [Spaces](/docs/administration/spaces) as a way to isolate teams/divisions/projects from one another. Before configuring spaces, there are a few important items to note. - Spaces are "hard walls": each space has its own environments, lifecycles, projects, packages, step templates, and targets. - At the time of this writing, the only things shared between spaces are users and teams. - A user can have full permissions in one space but read-only permissions in all other spaces. - Listening Tentacles can be registered to multiple spaces and only count against your license once. :::div{.hint} Think of an Octopus Deploy instance as an apartment building. Each space is an apartment in that building, with its own kitchen, living room, and bedrooms. There is some shared infrastructure between apartments, such as the building itself, along with other necessities such as plumbing and electricity. ::: ## Configuring Spaces As spaces are "hard walls," you should consider how often your users will need to switch between spaces in their usual day-to-day work. We've found customers have the most success when spaces are decoupled from one another. For example: - A space containing all the projects that comprise an application suite. For example, all the applications and projects for your CRM. - A space for public-facing applications with another space for all internal applications. - A space for each division within your company. Internally we have opted for a space per application suite. - Octopus Server Space (includes the MSI, pushing to Chocolatey and Docker Hub) - Octopus.com Space (and corresponding CMS) - Integrations Space (build servers, issue trackers, etc.)
- And so on ## Antipatterns We've also found several antipatterns with the Spaces feature you should avoid. - A space per team (Team A Space, Team B Space, etc.). In larger corporations, applications typically move between teams; a space per team would require you to move projects between spaces. The project export/import makes this easier, but it doesn't copy everything. You'd need to move packages, deployment targets, and workers. Release and deployment history is not moved either. - A space per environment (Development Space, Production Space, Test Space, etc.). Spaces were not designed for, nor do they support, this scenario. You would need a way to keep the deployment process in sync across multiple spaces. Such a syncing process is [difficult to create and maintain](/docs/administration/sync-instances). - A space per tenant. Just like the space-per-environment scenario, spaces were not designed for, nor do they support, this scenario. You would need a way to keep the deployment process in sync across multiple spaces. - A space per application component. You would need to track a single application across multiple spaces. - Sharing deployment targets across spaces. It is possible to register the same Tentacle, Azure Web App, or K8s cluster across spaces, but that indicates a space is too fine-grained. Sharing deployment targets across spaces only leads to confusion, as deployments in one space will appear "locked" because of a deployment in another space. ## Prevent sharing of Deployment Targets A Tentacle trusts the entire Octopus Server, not a specific space. It is not possible to prevent a Tentacle from being shared across multiple spaces. Sharing polling Tentacles is harder to configure, but still possible. For other deployment targets, such as Azure Web Apps or K8s clusters, you would have to re-key the credentials in each space. As such, store those credentials in a secure location and limit access to them.
## Sharing Workers Sharing workers configured as listening Tentacles is very easy to do. In a lot of cases, the servers hosting the workers are underutilized. Sharing workers between spaces can be beneficial from a cost and maintenance standpoint. Polling Tentacles configured as Workers can be used in multiple spaces by running the [register-worker](/docs/octopus-rest-api/tentacle.exe-command-line/register-worker) command. There are some considerations when sharing workers. - The Tentacle agent on the worker can be running as a specific Active Directory account. - The Tentacle agent could be running on an EC2 instance with a specific IAM role attached. - When workers download packages, they require a mutex; no other task can be running on that worker. 99% of the time, this isn't noticed. However, if a worker runs a 10-hour integration test, you run the risk of getting stuck behind that test waiting for the mutex to be created. Have a separate set of workers to run these long-running tasks. ## Moving Projects between Spaces Don't worry if you don't get your space configuration right the first time. Expecting perfection on the first attempt is a high bar. Starting with **Octopus Server 2021.1**, we offer the ability to [export and import projects between spaces](/docs/projects/export-import). You can configure your instance with every project using the default space. You can decide later how you want to split up your instance. ## Further reading For further reading on spaces in Octopus Deploy, please see: - [Spaces](/docs/administration/spaces) - [Exporting and Importing Projects](/docs/projects/export-import) # High Availability Source: https://octopus.com/docs/best-practices/self-hosted-octopus/high-availability.md Octopus High Availability (HA) enables you to run multiple Octopus Server nodes, distributing load and tasks between them. We designed it for enterprises that need to deploy around the clock and rely on the Octopus Server being available.
:::figure ![High availability diagram](/docs/administration/high-availability/images/high-availability.svg) ::: An Octopus High Availability configuration requires four main components: - **A load balancer** This will direct user traffic bound for the Octopus web interface between the different Octopus Server nodes. - **Octopus Server nodes** These run the Octopus Server service. They serve user traffic and orchestrate deployments. - **A database** Most data used by the Octopus Server nodes is stored in this database. - **Shared storage** Some larger files - like [packages](/docs/packaging-applications/package-repositories), artifacts, and deployment task logs - aren't suitable to be stored in the database, and so must be stored in a shared folder available to all nodes. :::div{.hint} One of the benefits of High Availability is that the database and file storage run on separate infrastructure from the Octopus Server service. For a production instance, we recommend everyone follow the steps below, even if you plan on running a single-node instance. If anything were to happen to that single node, you could be back up and running quickly with a minimal amount of effort. In addition, adding a second node later will be much easier. ::: This implementation guide will help you configure High Availability. If you are looking for an in-depth set of recommendations, please refer to our white paper on [Best Practices for Self-Hosted Octopus Deploy HA/DR](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). ## How High Availability Works High Availability (HA) distributes load between multiple nodes. There are two kinds of load an Octopus Server node encounters: 1. Tasks (Deployments, runbook runs, health checks, package re-indexing, system integrity checks, etc.) 2. User Interface via the Web UI and REST API (Users, build server integrations, deployment target registrations, etc.) Tasks are placed onto a first-in-first-out (FIFO) queue.
By default, each Octopus Deploy node is configured to process five (5) tasks concurrently, which [can be updated in the UI](/docs/support/increase-the-octopus-server-task-cap). That is known as the task cap. Once the task cap is reached, the remaining tasks in the queue will wait until one of the other tasks is finished. Each Octopus Server node has a separate task cap. High Availability allows you to scale the task cap horizontally. If you have two (2) Octopus Server nodes, each with a task cap of 10, you can process 20 concurrent tasks. Each node will pull items from the task queue and process them. Learn more in the [how High Availability processes tasks in the queue](/docs/administration/high-availability/how-high-availability-works) section. ## High Availability Limits Octopus Deploy's High Availability functionality provides many benefits, but it has limits. 1. All Octopus Server nodes must run the same version of Octopus Deploy. Upgrading to a newer version of Octopus Deploy will require an outage as you upgrade all nodes. 1. You cannot specify the node a deployment or runbook run executes on. Octopus Deploy uses a FIFO queue; nodes will pick up any pending tasks. 1. If a deployment or runbook run fails, it fails. Octopus Deploy will not automatically attempt to re-run that failed deployment or runbook run on a different node. In our experience, changing nodes has rarely been the solution to a failed deployment or runbook run. 1. All the Octopus Server nodes must connect to the same database. 1. Octopus Server nodes have no concept of a "read-only" connection to a database. All online nodes perform write operations to the database, even when not processing tasks. 1. Octopus Server nodes are sensitive to latency to SQL Server and the file storage. The Octopus Server nodes, SQL Server, and file storage should all be located in the same data center or cloud region. The latency between availability zones within the same cloud region is acceptable.
While the latency between cloud regions or data centers is not. Generally, these limits are encountered when our users attempt to use Octopus Deploy's High Availability functionality for disaster recovery in a hot/hot configuration. A hot/hot configuration between two or more data centers or cloud regions is not supported or recommended. Please see our white paper on recommendations for [high availability and disaster recovery](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). ## Calculating Task Cap The amount of computing resources required for the Octopus Server nodes and database is dependent on the task cap. The higher the task cap, the more resources you'll need. To calculate the task cap we recommend using the number of applications or projects you need to deploy during the production deployment window. - Deployments and runbook runs are the most common tasks. - Deployments typically take longer than any other task, including runbook runs. - Production deployments are time-constrained. They are done off-hours during an outage window. Once you know the number of projects and the duration of the window, you can calculate the task cap using the average deployment duration. If you don't know the average deployment duration, use 30 minutes as the starting point. The formula is: ``` (Number of Projects to Deploy * Average Deployment Duration) / Production Deployment Window in Minutes ``` For example, you need to deploy 50 applications, each taking 30 minutes to deploy. You have two hours (120 minutes) to deploy all the applications. - 50 Applications * 30 Minutes = 1500 - 1500 / 120 Minutes (two-hour production deployment window) = 12.5 That means at a minimum, you'd need a task cap of 13. A safe option would be 16 to account for any longer-running deployments or other tasks that need to run. Once you know the max task cap, you can divide that by the number of nodes you need for the HA cluster. 
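The arithmetic above can be captured in a small sketch. This is illustrative Python; the function names are ours, not part of Octopus:

```python
import math

def task_cap(projects: int, avg_minutes: float, window_minutes: float) -> int:
    """Minimum total task cap needed to fit all deployments in the window,
    rounded up to the next whole task."""
    return math.ceil(projects * avg_minutes / window_minutes)

def per_node_cap(total_cap: int, nodes: int) -> int:
    """Split the total task cap across HA nodes, rounding up."""
    return math.ceil(total_cap / nodes)

# 50 applications, 30 minutes each, two-hour (120-minute) window -> 12.5, rounded up
print(task_cap(50, 30, 120))   # 13
# A total cap of 16 split across two nodes
print(per_node_cap(16, 2))     # 8
```

Rounding up at both stages errs on the side of spare capacity, which matches the advice to pad the calculated cap for longer-running deployments and other tasks.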
If you need a max task cap of 16 and plan on having two nodes, each node would have a task cap of 8. We've created a series of lookup tables where you can see the number of deployments for [popular task cap configurations](/docs/octopus-cloud/task-cap#how-to-choose-a-task-cap). ## Licensing Each Octopus Deploy SQL Server database is a unique **Instance**. Nodes are the Octopus Server service that connects to the database. High Availability occurs when two or more nodes connect to the same Octopus Deploy database. An HA Cluster refers to all components: the load balancer, nodes, database, and shared storage. For self-hosted customers, High Availability is available for the following license types: - Professional: limited to 2 nodes - Enterprise: unlimited nodes The node limit is included in the license key in the `NodeLimit` node. ```xml <NodeLimit>Unlimited</NodeLimit> ``` If your license key does not contain that node, you are limited to a single node. If you recently purchased a license key and it is missing that node, reach out to [sales@octopus.com](mailto:sales@octopus.com). ## Infrastructure Octopus Deploy's High Availability functionality requires you to create and configure infrastructure for Octopus Server nodes, a database, shared storage, and a load balancer. This section provides the details required for each of those components. ### Octopus Server nodes The Octopus Server nodes host and run the Octopus Server service. We support running Octopus Deploy on Windows Server 2016 or greater as well as running Octopus Deploy in a container.
- [Octopus Deploy MSI](https://octopus.com/downloads) - [Octopus Deploy Container](https://hub.docker.com/r/octopusdeploy/octopusdeploy) - [Octopus Deploy Helm Chart](https://github.com/OctopusDeploy/helm-charts/tree/main/charts/octopus-deploy#usage) :::div{.warning} Due to how Octopus stores the paths to various BLOB data (task logs, artifacts, packages, imports, event exports etc.), you cannot run a mix of Windows Servers and Octopus Linux containers connected to the same Octopus Deploy instance. A single instance should only be hosted using one method. ::: How to install and configure Octopus Deploy on these nodes is outside the scope of this implementation guide. You can find detailed instructions in the links below. - [Installing Octopus on Windows](/docs/installation#install-octopus) - [Running Octopus on Kubernetes](/docs/installation/octopus-server-linux-container/octopus-in-kubernetes) - [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container) Regardless of the host, you'll need to determine how many nodes you want in your HA cluster and how many compute resources each node needs. #### Number of Octopus Server nodes Most of our customers have between two (2) and four (4) nodes. Generally, more nodes are better (up to a point). Restarting a node in a four-node cluster will reduce your capacity by 25%, while doing the same for a two-node cluster will reduce capacity by 50%. We don't recommend going beyond six (6) to eight (8) nodes; at that point, you'll see diminishing returns. #### Octopus Server node compute resources Below is a baseline for setting compute resources based on the task cap. You are responsible for monitoring the compute resource utilization of your Octopus Server nodes to ensure you aren't over or under-provisioning.
| Task Cap Per Node | Windows Compute Resources | Container Compute Resources | | ----------------- | ------------------------- | ---------------------------------- | | 5 - 10 | 2 Cores / 4 GB RAM | 150m - 1000m / 1500 Mi - 3000 Mi | | 20 | 4 Cores / 8 GB RAM | 1000m - 2000m / 3000 Mi - 6000 Mi | | 40 | 8 Cores / 16 GB RAM | 1250m - 2500m / 4000 Mi - 8000 Mi | | 80 | 16 Cores / 32 GB RAM | 2000m - 4000m / 5000 Mi - 10000 Mi | | 160 | 32 Cores / 64 GB RAM | 3500m - 7000m / 6000 Mi - 12000 Mi | While it is possible to have an Octopus Server node with a task cap above 160, it isn't recommended. Once you go beyond those limits you'll likely run into underlying Host OS, .NET, or networking limitations. We've found 40-80 to be a reasonable maximum task cap for each node. It is far better to scale horizontally by adding more nodes. :::div{.hint} In our research, the biggest limiting factor in processing concurrent tasks is the database. It won't matter how many nodes or how big they are if the database cannot handle the load. You have to scale your database resources as you increase the overall task cap. ::: ### Database Octopus Deploy stores project, environment, and deployment-related data in a shared Microsoft SQL Server Database. You can host that SQL Database on a self-managed SQL server, or use one of the many popular cloud providers. We recommend picking the option based on where you plan on hosting Octopus Deploy. #### Database Compute Resources The amount of compute resources to assign the database is based on the total amount of concurrent tasks you wish to process. Below is a baseline of resources. You are responsible for monitoring the compute resource utilization of your database to ensure you aren't over or under-provisioning. We have some customers in Octopus Cloud who require 3200 DTUs due to their Octopus Deploy usage.
| Total Task Cap | Virtual Machine Host | Azure DTUs | | -------------- | ------------------------- | ------------ | | 5 - 10 | 2 Cores / 4 GB RAM | 50 DTUs | | 20 | 2 Cores / 8 GB RAM | 100 DTUs | | 40 | 4 Cores / 16 GB RAM | 200 DTUs | | 80 | 8 Cores / 32 GB RAM | 400 DTUs | | 160 | 16 Cores / 64 GB RAM | 800 DTUs | #### Database High Availability Since the database is shared, the database server must also be highly available. Octopus Deploy supports a variety of SQL Server editions, from SQL Server Express up to Enterprise, as well as managed SQL Server. How the database is made highly available is really up to you; to Octopus, it's just a connection string. We are not experts on SQL Server high availability, so if you have an on-site DBA team, we recommend using them. Octopus High Availability works with: - [SQL Server Failover Clusters](https://docs.microsoft.com/en-us/sql/sql-server/failover-clusters/high-availability-solutions-sql-server) - [SQL Server Always On Availability Groups](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server) Each of the popular cloud providers provides some version of SQL Server Always On availability groups. Please see the links to the implementation guides below for details for your specific cloud provider. :::div{.warning} Octopus High Availability has not been tested with Log Shipping or Database Mirroring and does not support SQL Server replication. ::: #### Database Implementation Guides If you wish to learn more about how to configure Octopus Deploy with a specific hosting option, please refer to our installation guides.
- [Self-Managed SQL Server](/docs/installation/sql-database/self-managed-sql-server) - [AWS RDS](/docs/installation/sql-database/aws-rds) - [Azure SQL](/docs/installation/sql-database/azure-sql) - [GCP SQL](/docs/installation/sql-database/gcp-cloud-sql) ### File Storage Octopus stores several files that are not suitable to store in the database. These include: - Packages used by the [built-in repository](/docs/packaging-applications/package-repositories/built-in-repository). These packages can often be very large. - [Artifacts](/docs/projects/deployment-process/artifacts) collected during a deployment. Teams using Octopus sometimes use this feature to collect large log files and other files from machines during a deployment. - Task logs are text files that store all log output from deployments and other tasks. - Imported zip files used by the [Export/Import Projects feature](/docs/projects/export-import). - Archived audit logs created by the [Archived audit logs feature](/docs/security/users-and-teams/auditing/#archived-audit-events). As with the database, you'll tell the Octopus Servers where to store them as a file path within your operating system. The shared storage needs to be accessible by all Octopus nodes. Each of these types of data can be stored in a different location. Whichever way you provide the shared storage, there are a few considerations to keep in mind: - To Octopus, it needs to appear as either: - A mapped network drive e.g. `X:\` - A UNC path to a file share e.g. `\\server\share` - For Linux containers, these need to be volume mounts. - The service account that Octopus runs needs **full control** over the directory. - Drives are mapped per user, so you should map the drive using the same service account that Octopus is running under. Your file storage should be hosted in the same data center or cloud region as the Octopus Server nodes. We've included guides for the most common file storage options we encounter.
- [Local File Storage](/docs/installation/file-storage/local-storage) - [AWS File Storage](/docs/installation/file-storage/aws-file-storage) - [Azure File Storage](/docs/installation/file-storage/azure-file-storage) - [GCP File Storage](/docs/installation/file-storage/gcp-file-storage) ### Load Balancer Octopus Deploy has only two possible inbound connections. 1. Web UI / Web API over http/https (ports 80/443) 2. Polling Tentacles over TCP (port 10943) :::div{.hint} For the Web UI and API traffic you can leverage SSL offloading. For Polling Tentacles, SSL offloading is not supported. ::: #### Health Checks Octopus Deploy provides an endpoint your load balancer can use for health checks: `/api/octopusservernodes/ping`. Making a standard `HTTP GET` request to this URL on your Octopus Server nodes will return: - HTTP Status Code `200 OK` as long as the Octopus Server node is online and not in [drain mode](#drain). - HTTP Status Code `418 I'm a teapot` when the Octopus Server node is online, but it is currently in [drain mode](#drain) preparing for maintenance. - Anything else indicates the Octopus Server node is offline, or something has gone wrong with this node. :::div{.hint} The Octopus Server node configuration is also returned as JSON in the HTTP response body. ::: #### Traffic distribution We recommend using a round-robin (or similar) approach for sharing traffic between the nodes in your cluster, as the Octopus Web Portal is stateless. #### Auditing Traffic Audit events include the IP address of the client that initiated the request. As High Availability routes user traffic through a load balancer, the default value of the IP address in audit events will be the IP address of the load balancer rather than the client's IP address.
See [IP address forwarding](/docs/security/users-and-teams/auditing/#ip-address-forwarding) for configuring trusted proxies in Octopus. #### Request size and timeout All package uploads are sent as a POST to the REST API endpoint `/api/[SPACE-ID]/packages/raw`. Because the REST API will be behind a load balancer, you'll need to configure the following on the load balancer: - Timeout: Octopus is designed to handle 1 GB+ packages, which takes longer than the typical http/https timeout to upload. - Request Size: Octopus does not have a size limit on the request body for packages. Some load balancers only allow 2 or 3 MB files by default. #### Polling Tentacles Polling Tentacles poll each Octopus Server node at regular intervals to see if that node has picked up a task. Using Polling Tentacles with HA requires every Polling Tentacle to be able to connect to all nodes. You have two options: 1. Using a unique address per node with the default port of `10943`. - Node1 would be: Octo1.domain.com:10943 - Node2 would be: Octo2.domain.com:10943 - Node3 would be: Octo3.domain.com:10943 2. Using the same address with a different port per node. - Node1 would be: octopus.domain.com:10943 - Node2 would be: octopus.domain.com:10944 - Node3 would be: octopus.domain.com:10945 :::div{.hint} For Polling Tentacles, SSL offloading is not supported. Octopus Deploy and the Tentacle establish a two-way trust using the certificates created by Octopus Deploy and the Tentacle. If either certificate doesn't match, the connection is closed and all commands are rejected. ::: #### Additional Load Balancer Resources We've created guides for configuring many popular load balancers.
- Local Options - [Using NGINX as a reverse proxy with Octopus](/docs/installation/load-balancers/use-nginx-as-reverse-proxy) - [Using IIS as a reverse proxy with Octopus](/docs/installation/load-balancers/use-iis-as-reverse-proxy) - [Configuring Netscaler](/docs/installation/load-balancers/configuring-netscaler) - [AWS Load Balancers](/docs/installation/load-balancers/aws-load-balancers) - [Azure Load Balancers](/docs/installation/load-balancers/azure-load-balancers) - [GCP Load Balancers](/docs/installation/load-balancers/gcp-load-balancers) ## Octopus Deploy Configuration Once the infrastructure is in place to support high availability, you can start configuring Octopus Deploy to leverage it. The good news is that if you have an existing instance in place, you can update the configuration without having to rebuild everything. ### Creating a new instance When creating a new instance, you must start with a single node. Once that is up and running, you can add additional nodes. When you create a new Octopus Deploy instance, it will run a series of SQL Scripts to populate the Octopus Deploy database with the appropriate tables, views, and stored procedures. #### Windows Host Follow these steps if you elect to host Octopus Deploy on Windows Servers. 1. Download the latest MSI from [Octopus Downloads](https://octopus.com/downloads) 1. Install the MSI on the Windows Server. Once complete, it will start the Octopus Setup Wizard. 1. Follow the wizard and complete the configuration. 1. Once the setup wizard is complete, you'll be taken to the Octopus Manager. Now is a good time to [retrieve the master key](/docs/security/data-encryption#your-master-key). That master key is required to add additional nodes to your High Availability Cluster. 1. Run the following script to configure the BLOB storage.
``` Octopus.Server.exe path --clusterShared \\OctoShared\OctopusData ``` Learn more at [Installing Octopus Deploy Overview](/docs/installation#install-octopus) #### Container Host Follow these steps if you elect to host Octopus Deploy on a container host like Kubernetes. 1. Generate a Master Key using `openssl rand 16 | base64`. 1. Send that master key, along with the database connection string and volume mounts, to the container or Helm chart. 1. Once the first node is up and running, you can add additional nodes. Additional container host resources: - [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container) - [Octopus Server in Kubernetes](/docs/installation/octopus-server-linux-container/octopus-in-kubernetes) ### Migrating an existing instance Migrating an existing instance is possible, and for most configurations can be completed in as little as a few hours. However, you should be aware of the steps involved. Please read through each section below before starting a migration. #### Backup the Master Key Before getting started, it is important to ensure you have a backup of the master key. The master key is used by Octopus Deploy to encrypt and decrypt data within the Octopus Deploy database. If this master key is lost, you will have to reset all the encrypted items in your database. Learn more about [retrieving the master key](/docs/security/data-encryption#your-master-key). #### Migrating Database In our experience, it is uncommon to have the Octopus Deploy service and database running on the same server for a production instance. You can skip this step if your database is already running on a SQL Server cluster, on a managed server like AWS RDS or Azure SQL, or leveraging Always On high availability. If you need to move the database, the process is: 1. Turn off the Octopus Deploy service. 1. Perform a full backup of the existing database. 1. Restore the database to the desired SQL Server location. 1.
Update the connection string for the existing service. 1. Turn back on the Octopus Deploy service. Learn more about [moving the Octopus Server Database](/docs/administration/managing-infrastructure/moving-your-octopus/move-the-database). #### Migrating File Storage Typically, the file storage takes the most time of the high availability migration. The good news is you can do most of the work prior to the cutover. The file storage stores items like deployment logs and runbook run logs. Once a deployment or runbook run is complete, Octopus Deploy will leave those files until they are deleted by the retention policies. The following work can be completed without turning off any Octopus Server nodes. Your Octopus instance might have years' worth of data. It can take hours or days to finish copying all the files over. 1. Create the main directory and subdirectories. 1. TaskLogs 1. Artifacts 1. Packages 1. Imports 1. EventExports 1. Telemetry 1. Using tools such as `robocopy` or `rsync`, copy the files and subdirectories to the corresponding folder. Leverage the mirror functionality to ensure your file share folder structure matches the original. Once the files are copied over, you can update your Octopus Deploy instance to point to the file share. - Turn off the Octopus Deploy service. - Run `robocopy` or `rsync` one final time to pick up any new files since the last sync. - Run the following PowerShell script to update Octopus to point to the new directory. ``` Set-Location "C:\Program Files\Octopus Deploy\Octopus" $filePath = "YOUR ROOT DIRECTORY" & .\Octopus.Server.exe path --clusterShared "$filePath" ``` - Turn back on your Octopus Deploy service. Learn more about [moving the Octopus Server folders](/docs/administration/managing-infrastructure/moving-your-octopus/move-the-home-directory#move-other-folders). #### Migrating to the Load Balancer For Web UI and API traffic, migrating to a load balancer should be seamless.
Use the configuration information from an earlier section in this document to configure the load balancer. Once you've verified all the traffic is working as expected, provide the new URL to your users. ### Adding Nodes Generally, the same process is followed when adding new nodes to an existing High Availability cluster. 1. Ensure the new host, be it Windows or Containers, can connect to the Octopus Deploy database and file storage. 1. Run a script to configure the Octopus Server node instance on a Windows machine or to start up a new container. You'll need to provide the master key and database connection information. For containers, you'll also need to provide the volume mounts. 1. Add that new node to the load balancers. 1. Update all the Polling Tentacles to connect to that new node. :::div{.hint} Because all the configuration is stored in the database and blob storage, you can delete all the nodes and create new ones if you so desire. ::: We recommend writing scripts to automate this process. - [Octopus Server on Windows](/docs/installation/automating-installation) - [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container) - [Octopus Server in Kubernetes](/docs/installation/octopus-server-linux-container/octopus-in-kubernetes) ### Polling Tentacles and Kubernetes Agent with High Availability Once the load balancer is configured to expose each Octopus Server node, you must register each node with every Polling Tentacle. You can use this PowerShell script as a basis for your automation. The script should add any new nodes you've created. If you added two nodes to your High Availability cluster, your script would look like this.
``` C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=Octo2.domain.com:10943 --apikey=YOUR_API_KEY C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=Octo3.domain.com:10943 --apikey=YOUR_API_KEY ``` More details at [Polling Tentacles with Octopus High Availability](/docs/administration/high-availability/polling-tentacles-with-ha) ## Maintenance Most of the maintenance concerns for an Octopus Deploy High Availability cluster are related to: - The availability of each node. - The work each node is performing. - Each node's connectivity to the database and file storage. - Each node's connectivity to all the deployment targets. - Ensuring all the nodes are running the same version of Octopus Deploy. ### Node configuration page A dedicated page for High Availability within the Octopus Deploy user interface can be accessed via **Configuration -> Nodes.** That page provides the following functionality: - The number of nodes your HA cluster has registered. - The last time each node "checked-in" or was seen. - The number of tasks each node is processing. - Changing the task cap on each node via the overflow menu. - Draining a specific node via the overflow menu, which will stop it from processing tasks. - Deleting a specific node via the overflow menu, which will remove it from the HA cluster. #### Node Status and Last Seen A healthy node will update the **Last Seen** date on the node configuration page every 60 seconds or so. The code to update that last seen date runs on a dedicated thread and will do its very best to update that date. That means if that value hasn't been updated for a specific node in a while, there is a problem. #### Modifying the task cap This page enables you to change the task cap for each node. We recommend having the same task cap for all nodes; however, there are use cases in which you want a different value for each node. - Setting the task cap to 0 to have "UI Only" nodes that do not process any tasks.
- Setting the task cap to 1 for a new node in a canary-style deployment to ensure everything is working as expected before setting it to the default value. - Setting the task cap to a lower value than the others because the server doesn't have the same compute allocations. ### Draining a node The drain toggle can be used to prevent an Octopus Server node from executing any new tasks. All existing tasks running on that node will continue to run until completion. This is useful when you want to restart or shut down a node, remove a node, or upgrade the Octopus version. While draining: - An Octopus Server node will finish running any tasks it is currently executing and then idle. - The Octopus Server ping URL will not return 200 OK. ### Deleting a node Once a node has been retired, you can delete it from the HA Cluster using the node configuration page. It is important to note that deleting an active node will have minimal impact. Every 60 seconds the nodes will perform a check-in where they update the "last seen" date. If the node is not present in the table, it will automatically add itself. ### Auto-scaling nodes It is possible to leverage AWS Auto-Scaling Groups, Azure Virtual Machine Scale Sets, or Kubernetes auto-scaling capability to automatically horizontally scale your nodes. Adding nodes is fairly trivial; removing them is much more difficult due to how Octopus processes tasks. The process for adding nodes is: 1. Create the new application host. 1. Download and install the same version as the other nodes. 1. Configure any volume mounts for file storage. 1. Configure the Octopus Server node using the master key and database connection information. The process for removing a node is: 1. Use the API to set the node to draining using an API key. 1. Wait for all the tasks to be completed. Failure to do so will cause those tasks to fail. 1. Delete the application host.
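The "wait for all the tasks to be completed" step above can be sketched as a polling loop (a hedged sketch: `tasks_running` is a hypothetical callable standing in for whatever API call or query you use to read the node's active task count, and the call that enables drain mode is left to the caller):

```python
import time

def wait_until_drained(tasks_running, timeout_seconds=1800, poll_interval=30):
    # Poll a draining node until it reports zero active tasks.
    # tasks_running: a zero-argument callable returning the node's
    # current task count (wire it to your own API query).
    # Returns True once the node is idle, False if the timeout expires.
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if tasks_running() == 0:
            return True  # safe to delete the application host
        time.sleep(poll_interval)
    return False  # deleting now would fail the in-flight tasks

# Example with a stubbed task count that drains over three polls:
counts = [2, 1, 0]
drained = wait_until_drained(lambda: counts.pop(0), timeout_seconds=5, poll_interval=0)
```

Injecting the task-count lookup as a callable keeps the waiting logic testable and reusable whether it runs in an Azure Function, an AWS Lambda, or a plain script.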
The complexity of removing a node is due to having to invoke the API to drain the node and waiting for the node to complete any in-flight tasks. For cloud providers such as Azure or AWS, that typically means leveraging a function or a Lambda. For scripts and examples, please refer to the [auto-scaling high availability nodes page](/docs/administration/high-availability/auto-scaling-high-availability-nodes). ### Connectivity issues with the database If an Octopus Server node cannot connect to the database, it will start and then immediately stop. The logs will indicate a connection failure. ### Connectivity issues with the file storage Octopus Deploy is generally more forgiving if it cannot access the files. There are scenarios in which the Octopus Deploy service will not start, typically when permission is denied or the path cannot be found. However, if the paths exist, but they don't have data, then Octopus will continue to run. The telltale signs are: - The service gets a permissions denied error and stops. - Empty deployment logs for completed deployments. - Missing deployment artifacts. - Missing project and space images. - Packages cannot be found in the built-in repository. The paths to the file storage are stored within the database. That means all nodes will use the same path. - Ensure the file storage paths point to a file share. The file storage must be on a file share accessible by all nodes. It cannot be a local drive. - When running Octopus Deploy on Windows, ensure the account the Octopus Deploy Windows Service is running as can access those file shares. - When running Octopus Deploy on a container, ensure the volumes are all mounted properly. ### Upgrading the Octopus Deploy version Upgrading an HA cluster to the latest Octopus Deploy version will require an outage window as all nodes must be upgraded at the same time. Octopus Deploy provides the capability to upgrade from almost any modern version. For example, upgrading from 2022.2 to 2024.2.
However, database changes are not backward compatible, which is why all nodes must be upgraded during the same outage window. The upgrade process will be: 1. Backup the master key. 1. Drain all the nodes. 1. After all tasks are finished, stop all the nodes. 1. Backup the database. 1. Install the latest version on one node (this will upgrade the database). 1. Upgrade the remaining nodes. 1. Start the nodes. 1. Disable the draining. More detailed instructions can be found in the [upgrading guide](/docs/administration/upgrading/guide). # AWS Source: https://octopus.com/docs/deployments/aws.md Octopus Deploy includes dedicated integration with Amazon Web Services (AWS) to help you achieve repeatable, recoverable, secure, and auditable deployments: - [Deploy Amazon ECS Service](/docs/deployments/aws/ecs) is a UI-driven step with an opinionated deployment workflow that builds the CloudFormation template for you. - [Deploy an AWS CloudFormation Template](/docs/deployments/aws/cloudformation) allows you to create or update a CloudFormation stack. It offers more flexibility than the UI step. - [Delete an AWS CloudFormation stack](/docs/deployments/aws/removecloudformation) deletes existing CloudFormation stacks. - [Upload a package to an AWS S3 bucket](/docs/deployments/aws/s3) allows you to upload files and packages to S3 buckets. - [Run an AWS CLI Script](/docs/deployments/custom-scripts/aws-cli-scripts) runs a custom script with AWS credentials preloaded. :::div{.hint} **Where do AWS Steps execute?** All AWS steps execute on a worker. By default, that will be the [built-in worker](/docs/infrastructure/workers/#built-in-worker) in the Octopus Server. Learn about [workers](/docs/infrastructure/workers) and the different configuration options. ::: ## Get started with ECS or deploy a new service The [Deploy Amazon ECS Service](/docs/deployments/aws/ecs) step makes it easier to get started or deploy a new service through Octopus. The step guides you through the configuration of the task definition and service with built-in validation.
Octopus generates and executes the CloudFormation templates, so you don't have to write any YAML or JSON. :::figure ![A rocket links the Deploy Amazon ECS Service step in Octopus with tasks performed by Octopus in AWS to deploy the Octo Pet Shop website. Octopus generated the CloudFormation template and created and deployed the CloudFormation stack.](/docs/img/deployments/aws/octopus-ecs-integration-deploy-to-fargate.png) ::: With the UI step, you can: - Monitor the deployment and service status, feedback, and error messages from Octopus. You don't need to open the AWS Console. - Deploy updates to your containerized apps without changing the deployment process. Create a release with the new version, and Octopus updates the task definition and service for you. - Set a timeout duration so you're not waiting hours to learn a deployment is stuck. When you outgrow the guided UI step or need more flexibility, you can expose the underlying CloudFormation YAML and paste it into the [Deploy an AWS CloudFormation Template](/docs/deployments/aws/cloudformation) step. ## Centralize and secure your ECS deployments with Octopus Octopus offers a central platform to manage your AWS resources, including account credentials, ECS clusters, certificates, configuration, and scripts. The ECS [deployment target](/docs/getting-started/first-deployment/add-deployment-targets/) and steps integrate with other Octopus features, including [built-in AWS service accounts](/docs/infrastructure/accounts/aws/), [runbooks](/docs/runbooks/), [variables](/docs/projects/variables/), [channels](/docs/releases/channels/), and [lifecycles](/docs/releases/lifecycles). Octopus projects and runbooks share the same variables and accounts, making it easier to capture shared procedures, automate routine maintenance and respond quickly to emergencies. 
Flexible, [role-based security](/docs/security/users-and-teams/user-roles/) allows you to decide who can deploy to production and trigger runbooks against specific clusters. You can view the history of significant events and changes in the Octopus [audit log](/docs/security/users-and-teams/auditing). ## Learn more - [AWS blog posts](https://octopus.com/blog/tag/aws/1) - [AWS runbook examples](/docs/runbooks/runbook-examples/aws) # Deploying a package to an Azure Service Fabric cluster Source: https://octopus.com/docs/deployments/azure/service-fabric/deploying-a-package-to-a-service-fabric-cluster.md Octopus Deploy supports deployment of [Azure Service Fabric applications](https://azure.microsoft.com/en-au/services/service-fabric/). :::div{.hint} The [Service Fabric SDK](https://oc.to/ServiceFabricSdkDownload) must be installed on the Octopus Server. If this SDK is missing, the step will fail with an error: _"Could not find the Azure Service Fabric SDK on this server."_ **PowerShell Script Execution** may also need to be enabled. See the _"Enable PowerShell script execution"_ section from the above link for more details. After the above SDK has been installed, you need to restart your Octopus service for the changes to take effect. ::: ## Step 1: Create a Service Fabric cluster Create a Service Fabric cluster (either in Azure, on-premises, or in other clouds). Octopus needs an existing Service Fabric cluster to connect to in order to deploy your application package. ## Step 2: Packaging Package your Service Fabric application. See our guide to [Packaging a Service Fabric application](/docs/deployments/azure/service-fabric/packaging). ## Step 3: Create a Service Fabric deployment target You will need to create a [Service Fabric Deployment Target](/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets) for each cluster you are deploying to. 
## Step 4: Create the Service Fabric application deployment step Add a new Service Fabric application deployment step to your project. For information about adding a step to the deployment process, see the [add step](/docs/projects/steps) section. ## Step 5: Configure your Service Fabric application step Select the [target tag](/docs/infrastructure/deployment-targets/target-tags) you assigned to your Service Fabric target and select your Service Fabric package from your package feed. Select and configure the security mode required to connect to your cluster. The various security modes are described in detail in the [Deploying to Service Fabric documentation](/docs/deployments/azure/service-fabric). Various options are available to deploy your Service Fabric application. | Setting | Default | Description | | ------------------------------------------------------ | ----------- | ---------------------------------------- | | Publish profile file | PublishProfiles\Cloud.xml | Path to the file containing the publish profile | | Deploy only | Disabled | Indicates that the Service Fabric application should not be created or upgraded after registering the application type | | Unregister unused application versions after upgrade | Disabled | Indicates whether to unregister any unused application versions that exist after an upgrade is finished | | Override upgrade behavior | None | Indicates the behavior used to override the upgrade settings specified by the publish profile. Options are _None_, _ForceUpgrade_, _VetoUpgrade_. To force an upgrade regardless of the publish profile setting, set this option to _ForceUpgrade_. To use the setting defined in the publish profile, set this setting to _None_. | | Overwrite behavior | SameAppTypeAndVersion | Overwrite Behavior if an application exists in the cluster with the same name. Available options are _Never_, _Always_, _SameAppTypeAndVersion_.
This setting is not applicable when upgrading an application | | Skip package validation | Disabled | Switch signaling whether the package should be validated or not before deployment | | Copy package timeout (seconds) | SDK Default | Timeout in seconds for copying application package to image store | | Register Application Type Timeout (seconds) | SDK Default | Timeout in seconds for registering application type. Requires Service Fabric SDK version 6.2+ | :::div{.success} **Use Variable Binding Expressions** Any of the settings above can be switched to use a variable binding expression. A common example is when you use a naming-convention for your application services, like **MyFabricApplication\_Production** and **MyFabricApplication\_Test**, you can use environment-scoped variables to automatically configure this step depending on the environment you are targeting. ::: ### Deployment features available to Service Fabric application steps The following features are available when deploying a package to a Service Fabric application: - [Custom Scripts](/docs/deployments/custom-scripts) - [Configuration Variables](/docs/projects/steps/configuration-features/xml-configuration-variables-feature) - [.NET Configuration Transforms](/docs/projects/steps/configuration-features/configuration-transforms) - [Structured Configuration Variables](/docs/projects/steps/configuration-features/structured-configuration-variables-feature) - [Substitute Variables in Templates](/docs/projects/steps/configuration-features/substitute-variables-in-templates) :::div{.hint} Please note these features run on the Octopus Server prior to deploying the Service Fabric application to your cluster. They don't execute in the cluster nodes you are eventually targeting. ::: ## Deployment process Deployment to a Service Fabric cluster proceeds as follows (more details provided below): 1. Download the package from the [package repository](/docs/packaging-applications/package-repositories). 1. 
Extract the package on the Octopus Server to a temporary location.
1. Any configured or packaged `PreDeploy` scripts are executed.
1. [Substitute variables in templates](/docs/projects/steps/configuration-features/substitute-variables-in-templates) (if configured).
1. [.NET XML configuration transformations](/docs/projects/steps/configuration-features/configuration-transforms) (if configured) are performed.
1. [.NET XML configuration variables](/docs/projects/steps/configuration-features/xml-configuration-variables-feature) (if configured) are replaced.
1. [Structured configuration variables](/docs/projects/steps/configuration-features/structured-configuration-variables-feature) (if configured) are replaced.
1. Any configured or packaged `Deploy` scripts are executed.
1. Generic variable substitution is carried out across all `*.config` and `*.xml` files in the extracted package.
1. Execute the Service Fabric application deployment script (see the [Customizing the deployment process](#customizing-the-deployment-process) section below).
1. Any configured or packaged `PostDeploy` scripts are executed.

### Extract the Service Fabric package

Service Fabric package files are extracted during deployment, as the `Publish-UpgradedServiceFabricApplication` cmdlet used by Calamari requires an `ApplicationPackagePath` parameter pointing to the extracted package. This also allows Octopus to use available features such as .NET Configuration Transforms and Variable Substitution.

Setting the `Octopus.Action.ServiceFabric.LogExtractedApplicationPackage` variable to `true` will cause the layout of the extracted package to be written into the Task Log. This may assist with finding the path to a particular file.

### Customizing the deployment process

The deployment is performed using a PowerShell script called `DeployToServiceFabric.ps1`. If a file with this name exists within the root of your package, Octopus will invoke it.
Otherwise, Octopus will use a bundled version of the script as a default. You can **[view the bundled script here](https://github.com/OctopusDeploy/Sashimi.AzureServiceFabric/blob/main/source/Calamari/Scripts/DeployAzureServiceFabricApplication.ps1)**, and use it as a basis for creating your own custom deployment script.

:::div{.hint}
If you choose to override the deployment script, remember that your `DeployToServiceFabric.ps1` file must exist at **the root** of your package. It cannot be located in a subfolder. For reference, you can see how this filename is detected in your extracted package [here](https://github.com/OctopusDeploy/Sashimi.AzureServiceFabric/blob/main/source/Calamari/Behaviours/DeployAzureServiceFabricAppBehaviour.cs).
:::

## Deploying to multiple geographic regions

When your application is deployed to more than one geographic region, you are likely to need per-region configuration settings. You can achieve this by creating a [Service Fabric Deployment Target](/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets) per region and assigning them to the same target tag and an appropriate environment. Your process can be modified by using [variables scoped](/docs/projects/variables/getting-started/#scoping-variables) by environment or deployment target.

You can also employ an *environment-per-region* method so you can leverage [lifecycles](/docs/releases/lifecycles) to create a strict release promotion process.

Both methods allow you to modify your deployment process and variables per-region, but have slightly different release promotion paths. Choose the one that suits you best.

## Versioning

To learn more about how you can automate Service Fabric versioning with Octopus, see our guide on [Version Automation with Service Fabric application packages](/docs/deployments/azure/service-fabric/version-automation-with-service-fabric-application-packages).
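The multi-region approach above hinges on scoped variables resolving to different values per environment. As a rough mental model, sketched in Python (this is a simplified illustration, not Octopus's actual resolution algorithm, which has richer precedence rules; the variable name and endpoints are hypothetical):

```python
def resolve_scoped(variables, name, environment=None):
    """Return the value whose scope best matches the deployment context.

    Simplified model: an unscoped value matches anywhere; a value scoped to
    an environment matches only that environment; more specific scopes win.
    """
    candidates = []
    for var in variables:
        if var["name"] != name:
            continue
        scope = var.get("scope", {})
        env = scope.get("environment")
        if env is not None and env != environment:
            continue  # scoped to a different environment
        candidates.append((len(scope), var["value"]))
    if not candidates:
        raise KeyError(name)
    # The candidate with the most scope criteria satisfied wins.
    return max(candidates, key=lambda c: c[0])[1]

# One variable name, different values per regional environment.
variables = [
    {"name": "ClusterEndpoint", "value": "fabric-default.example.com"},
    {"name": "ClusterEndpoint", "value": "fabric-eu.example.com",
     "scope": {"environment": "Production-EU"}},
    {"name": "ClusterEndpoint", "value": "fabric-us.example.com",
     "scope": {"environment": "Production-US"}},
]
print(resolve_scoped(variables, "ClusterEndpoint", environment="Production-EU"))
```

With this model, a deployment to `Production-EU` picks the EU endpoint while any other environment falls back to the unscoped default, which is exactly the behavior the per-region targets rely on.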
## Troubleshooting

Due to the complexity of the PowerShell deployment script, it's likely you'll run into unsupported actions or unforeseen edge cases. The most common errors relate to the script choosing the wrong action type, due to either unforeseen edge cases or unsupported scenarios. For this reason, we highly recommend using [a customized version of the PowerShell script](/docs/deployments/azure/service-fabric/deploying-a-package-to-a-service-fabric-cluster/#customizing-the-deployment-process) that comes with Visual Studio for Service Fabric for most scenarios.

:::div{.hint}
Octopus will not modify the Service Fabric script due to the complexity associated with the script and the number of combinations it supports. We are considering options to improve this experience in the future, and this will most likely require customers to include/bundle their own version of the PowerShell script.
:::

### Application name already exists

When `RegisterAndCreate` is used and an application with the same type and name already exists, you may be presented with the following error:

```
An application with name 'fabric:/name' already exists, its Type is 'TypeName' and Version is 'version'. You must first remove the existing application before a new application can be deployed or provide a new name for the application.
```

This usually relates to the `Override Upgrade Behavior` setting being incorrect. We suggest you either change the setting or use a custom Service Fabric deployment script such as [this one](https://github.com/OctopusDeploy/Calamari/blob/4a7a5d2b571246181701e743939f635905ef5d84/source/Calamari.Azure/Scripts/DeployAzureServiceFabricApplication.ps1) (preferred).

## Learn more

- Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites).
# Scripts in packages

Source: https://octopus.com/docs/deployments/custom-scripts/scripts-in-packages.md

When deploying a package, you can hook into the deployment process at different stages to perform custom actions. You do this by adding specially named scripts at the root of your package.

## Supported scripts

You can add any of the following script files in any of the scripting languages supported by Octopus to your packages:

- `PreDeploy.<ext>`
- `Deploy.<ext>`
- `PostDeploy.<ext>`
- `DeployFailed.<ext>`

Where `<ext>` is the appropriate extension for your scripting language of choice. Also note these file names will be case-sensitive on certain operating systems.

Octopus will detect these scripts and invoke them at the appropriate time during the step. Which file you use depends on when you need your custom activity to run; see the section on [what order are conventions run in](/docs/deployments/packages/package-deployment-feature-ordering/) for details.

Your scripts can do anything your scripting language supports, as well as setting [output variables](/docs/projects/variables/output-variables/) and [collecting artifacts](/docs/projects/deployment-process/artifacts).

## Supported steps

The following steps have been designed to support running scripts; either at the root of a package, [inline](#scripts-in-package-steps), or both:

- Deploy to IIS
- Deploy a Windows Service
- Deploy a Package
- Deploy an Azure Web App
- Deploy an Azure App Service
- Deploy an Azure Resource Manager template
- Deploy a Service Fabric App
- Deploy to NGINX
- Deploy Java Archive
- Deploy a VHD image
- Deploy to Tomcat via Manager
- Deploy to Wildfly or EAP
- Upgrade a Helm Chart

However, not all package steps support script hooks.
As a general rule, any of the [built-in step templates](/docs/projects/built-in-step-templates/) or [community step templates](/docs/projects/community-step-templates/) that have the `Custom Deployment Scripts` feature available in the [configuration features](/docs/projects/steps/configuration-features) dialog support script hooks:

:::div{.hint}
**Note:** The `Custom Deployment Scripts` feature only needs to be enabled if you want to [define your scripts inline](#scripts-in-package-steps) instead of executing scripts at the root of a package.
:::

:::figure
![Custom Deployment scripts features screenshot](/docs/img/deployments/custom-scripts/scripts-in-packages/custom-deployment-scripts-feature.png)
:::

## Including the scripts in the package

1. Create the scripts you want Octopus to execute during the step.
2. Name each script to match the naming convention depending on when you want the script to execute.
3. Include these scripts at the root of your package. Octopus does not search subdirectories.

## Running a script when a step fails

You can create a file named `DeployFailed.<ext>`, which will be invoked if the step fails.

## How Octopus executes your scripts

At each stage during the deployment, Octopus will look for a script matching the current stage, and execute the first matching script it finds, ordered by a platform-specific priority (see [cross-platform support](#cross-platform-support)).

1. Octopus extracts the package to a new uniquely named directory. **This becomes the current working directory.**
2. Octopus does some work, then executes `PreDeploy.<ext>` in the current working directory.
3. Optional: If you are using the [custom installation directory feature](/docs/projects/steps/configuration-features/custom-installation-directory), Octopus will copy the contents of the current working directory to the custom installation directory. **This becomes the current working directory.**
4. Octopus does some work, then executes `Deploy.<ext>` in the current working directory.
5. Octopus does some work, then executes `PostDeploy.<ext>` in the current working directory.

For more details, see [how packages are deployed](/docs/deployments/packages/) and [what order are conventions run in](/docs/deployments/packages/package-deployment-feature-ordering).

### Cross-platform support {#cross-platform-support}

If you are deploying the same package to multiple platforms, you can:

1. Use a single scripting language common to all platforms. Octopus will run the single script using the same scripting runtime on all platforms.
2. Use the scripting language most native to each platform. Octopus will run the most appropriate script for each platform using a platform-specific priority order.

The platform-specific priority order Octopus uses to select scripts is:

- Linux: Bash, Python, C#, F#, PowerShell
- Windows: PowerShell, Python, C#, F#, Bash

Example: You are deploying an application to both Windows and Linux. You can write a single `PreDeploy.py` Python script, making sure the Python runtime is installed on both platforms. Alternatively, you can write both `PreDeploy.sh` and `PreDeploy.ps1`, and Octopus will run the Bash script on Linux and the PowerShell script on Windows.

## Disabling this convention

You can prevent Octopus from automatically running scripts in packages by adding the `Octopus.Action.Package.RunScripts` variable to your project and setting it to `false`. You can scope the value of this variable to suit your needs.

## Defining your scripts inline {#scripts-in-package-steps}

Rather than embed scripts in packages, you can also define scripts within the step definition using the Octopus user interface. This is a feature that can be enabled on certain steps by clicking **CONFIGURE FEATURES** and selecting **Custom Deployment Scripts**. When enabled, you will see **Custom Deployment Scripts** under the features section of the process definition.

## Troubleshooting

Make sure the scripts are located in the root of your package.
Make sure the scripts are actually included in your package. Extract your package and inspect the contents to make sure the scripts are included as you expect. For example, if you are using OctoPack for an ASP.NET web application, you'll need to make sure the file is marked as **Build Action = Content**.

:::figure
![](/docs/img/deployments/custom-scripts/scripts-in-packages/3277766.png)
:::

If you are using OctoPack to package a Windows Service or console application, set **Copy to Output Directory** = **Copy if newer**.

:::figure
![](/docs/img/deployments/custom-scripts/scripts-in-packages/3277765.png)
:::

Read more about [using OctoPack](/docs/packaging-applications/create-packages/octopack).

If the scripts in your package are still not running, make sure someone has not set a project variable called `Octopus.Action.Package.RunScripts` to `false` for the step where the scripts should run.

# Common patterns

Source: https://octopus.com/docs/deployments/databases/common-patterns.md

Databases are the lifeblood of most applications. One missing column can bring down an entire application. It is common for us to see companies approach database deployments with a crawl-walk-run thought process. In doing so, we have identified some common patterns that are detailed in this section.

## Manual approvals

Most database tooling provides the ability to create a *what-if* report. Octopus Deploy can take that report and upload it as an artifact that DBAs, database developers, or anyone else can review and approve. This section walks through some common techniques for notifications, approvals, and process in general. Learn more about [manual approvals](/docs/deployments/databases/common-patterns/manual-approvals).

## Automatic approvals

Manual approvals are a great starting point when the number of projects that require approval is low. The number of notifications will grow exponentially as time goes on.
It is common for the frequency of deployments to go from once a quarter to once a week, and it is important for the signal-to-noise ratio to remain high. Having a DBA spend time approving minor stored procedure changes is not productive. This section shows you how to take the manual approval process and add logic for automated approvals. Learn more about [automatic approvals](/docs/deployments/databases/common-patterns/automatic-approvals).

## Ad-hoc data change scripts

Sometimes an application causes data to get into an odd state, but the bug can be hard to reproduce and the priority to fix the bug might be low. However, the data still needs to be fixed. This is where an ad-hoc data change script can be used to fix a specific record in a specific database in a specific environment. Learn more about [ad-hoc data change scripts](/docs/deployments/databases/common-patterns/adhoc-data-changes).

## Backups and rollbacks

Most database deployment tooling wraps everything in a transaction. The entire changeset goes or nothing goes. However, we have encountered companies who also want to take a backup of the database prior to any changes being applied. If something goes wrong, then the process should automatically roll everything back. In our experience, that is very dangerous and rife with a lot of *what-if* scenarios. We recommend rolling forward or making your database changes backward compatible. Learn more about [automatic backups and rollbacks](/docs/deployments/databases/common-patterns/backups-rollbacks).

## Learn more

- [Database blog posts](https://octopus.com/blog/tag/database-deployments/1)

# Ad-hoc data change scripts

Source: https://octopus.com/docs/deployments/databases/common-patterns/adhoc-data-changes.md

Sometimes an application causes data to get into an odd state, but the bug can be hard to reproduce and the priority to fix the bug might be low. However, the data still needs to be fixed.
It might only be one record in one environment in one database, and it doesn't make sense to send a script to fix the data through the standard automated database deployment pipeline.

The majority of the time, the fix is a manual process and varies from company to company. It could be as simple as emailing the script to a DBA, or as complicated as submitting a lengthy request form. Just like database deployments, it is possible to automate this. Automation has multiple advantages over a manual process.

1. A consistent set of business rules can be applied. For example, no schema changes, and only insert or update statements are allowed.
2. The script can run through an auto-approval script. The auto-approval ensures the rules are followed. It can also run the script in a transaction and roll it back. If the script changes more than a set number of rows, for instance 10, then a DBA must look at it.
3. An automated process is faster, and it frees up the people running those scripts to do more meaningful work.
4. The process can also send out notifications with an audit trail that is easier to search through than email.

## Leveraging runbooks for ad-hoc data change scripts

[Runbooks](/docs/runbooks) were added to Octopus Deploy in version **2019.11**. Runbooks provide an excellent way to run ad-hoc data change scripts. Runbooks don't require a release to be created, but they still have the same functionality as a typical Octopus Deployment, such as prompted variables and auditing.

We typically find this process is a good starting point:

1. The runbook run is created, and the script to run and the database information is provided via [prompted variables](/docs/projects/variables/prompted-variables).
2. The script to run is analyzed for any schema change commands, and it is run and immediately rolled back in a transaction.
   1. If no schema change commands are found, the script ran successfully, and it updated fewer than X rows, then a DBA Approval Required [output variable](/docs/projects/variables/output-variables) is set to `False`.
   2. If any of those conditions fail, then the DBA Approval Required [output variable](/docs/projects/variables/output-variables) is set to `True`.
3. Notify the approvers when that DBA Approval Required [output variable](/docs/projects/variables/output-variables) is `True` using [run conditions](/docs/projects/steps/conditions/#run-condition).
4. Pause for a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals/) when that DBA Approval Required [output variable](/docs/projects/variables/output-variables) is `True` using [run conditions](/docs/projects/steps/conditions/#run-condition).
5. Run the script on the desired database.
6. Notify the DBAs and the person who submitted the script that the script has finished running.

:::figure
![A sample ad-hoc script process](/docs/img/deployments/databases/common-patterns/images/adhoc_scripts_process.png)
:::

For the example process, only the database name and script are prompted variables. The prompted variables allow a person to enter values prior to running the runbook. In this example, a `create table` command is also included in the data changes:

:::figure
![The prompted variables for the ad-hoc script process](/docs/img/deployments/databases/common-patterns/images/adhoc_scripts_submit.png)
:::

The auto-approval script leverages the [write highlight](/docs/deployments/custom-scripts/logging-messages-in-scripts) command so important messages are shown on the task summary screen. The `create table` command was detected, requiring a DBA to approve the script. The DBA has a choice to accept the script or reject it.
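The rules check at the heart of the auto-approval step can be sketched in a few lines of Python (the schema-change command list and the 10-row threshold are illustrative assumptions; a real implementation would also execute the script in a rolled-back transaction to count affected rows, and would set the DBA Approval Required output variable from the result):

```python
import re

# Illustrative rule set: any DDL keyword triggers review, as does touching
# more rows than the threshold. A production version would be more thorough.
SCHEMA_CHANGE = re.compile(
    r"\b(create|alter|drop)\s+(table|view|procedure|index)\b", re.IGNORECASE
)
MAX_ROWS = 10  # hypothetical threshold from the process above

def dba_approval_required(script_text, rows_affected):
    """Return True when a DBA must review the ad-hoc script."""
    if SCHEMA_CHANGE.search(script_text):
        return True  # e.g. an unexpected 'create table'
    return rows_affected > MAX_ROWS

print(dba_approval_required("update dbo.Orders set Status = 'Open' where Id = 42", 1))
print(dba_approval_required("create table #tmp (Id int)", 1))
```

A small update slips through automatically, while the temporary-table script is flagged for review, matching the behavior shown in the screenshots.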
There are some cases when a `create table` is necessary, for example, creating a temporary table:

:::figure
![Ad hoc script requires approval](/docs/img/deployments/databases/common-patterns/images/adhoc_approval_required.png)
:::

In another example, the same script is submitted without the `create table` command. This time it passes the auto-approval and is immediately executed:

:::figure
![Task progress for the ad-hoc script](/docs/img/deployments/databases/common-patterns/images/adhoc_auto_approval.png)
:::

You can view this example on our [samples instance](https://samples.octopus.app/app#/Spaces-106/projects/ad-hoc-data-change-scripts/operations/runbooks/Runbooks-225/overview).

# SQL Server DACPAC deployment

Source: https://octopus.com/docs/deployments/databases/sql-server/dacpac.md

Starting with SQL Server 2008, Microsoft introduced a new project type called Database Projects. These projects use the [state-based approach](https://octopus.com/blog/sql-server-deployment-options-for-octopus-deploy) to applying changes to your database. Database Projects were not initially available as part of the Visual Studio install and had to be downloaded separately. This download was referred to as SQL Server Data Tools (SSDT) and included project types for Database projects, SQL Server Reporting Services (SSRS) projects, and SQL Server Integration Services (SSIS) projects. Modern versions of Visual Studio have SSDT available to choose when installing or modifying an existing installation.

## Installing SSDT for Visual Studio

For earlier versions of Visual Studio such as 2015 and below, installing SSDT was a matter of locating the download for your version of Visual Studio. Microsoft has provided a convenient way of finding the appropriate download on [this page](https://docs.microsoft.com/en-us/sql/ssdt/previous-releases-of-sql-server-data-tools-ssdt-and-ssdt-bi?view=sql-server-ver15).
For more modern versions of Visual Studio (2017+), check out [Microsoft's installation instructions](https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-ver15).

:::div{.success}
This guide uses Visual Studio 2019.
:::

## Connect the project to the database

With SSDT for Visual Studio installed, you can connect the project to the database with the following steps. First, we create the project:

1. Navigate to the **Other Toolsets** category.
2. Click the **Data storage and processing** option.
3. Select **SQL Server Database Project** and click **Next**.
4. Enter the project name and click **Create**.

The project has been created; now we connect it to a database. This example uses a pre-existing database called OctoFXDemo:

1. Right-click the project name, then click **Import ➜ Database**.
2. Click **Select Connection**.
3. Add the **Server Name** and select the type of authentication. In this screenshot, a SQL Account is used to connect to the database server.

:::figure
![Connection details for the database](/docs/img/deployments/databases/sql-server/images/visual-studio-2019-connect-database.png)
:::

4. Click **Connect** and then click **Start** to import the database.

Importing the database will populate your project with the existing objects from the database. You will see a summary of the importing process:

:::figure
![Summary of the database import process](/docs/img/deployments/databases/sql-server/images/visual-studio-2019-connect-database-import-complete.png)
:::

The project is now ready for creating database schema objects (tables, views, stored procedures, etc.).

## Compare the project to the database schema

When the project has some objects, we can compare the project to the target database.

1. Right-click on the project and choose **Schema Compare...**.
2. Select the target database connection by clicking **Select Target ➜ Select Connection**, and select the connection.
3. Click **Compare**.

Visual Studio will now compare the project to the database and list the steps it will take during a deployment:

:::figure
![The results of the Schema Compare in Visual Studio](/docs/img/deployments/databases/sql-server/images/visual-studio-2019-project-schema-compare-results.png)
:::

:::div{.hint}
For databases that have a dependency on other databases, it is possible to add a reference to another database project. This should be done with caution to avoid circular dependencies with each database depending on the other, as this will result in neither database project compiling.
:::

## Build definition

You can use most build servers to build the SQL Server Database project; you just need to install the Visual Studio build tools for the version of Visual Studio that you're using on the build agent. This guide uses Azure DevOps as the build platform, but any build server can do this.

### Create the build definition

To create the build definition, take the following steps:

:::div{.warning}
Note, this example uses the classic editor without YAML.
:::

1. From the Azure DevOps repo, click **Pipelines ➜ New Pipeline**.
2. Select **Empty job** to start.
3. Choose a build pool, then click on the **+** to add a step to the build definition.
4. Click on the Build category and scroll down to **Visual Studio build**.

:::div{.hint}
An MSBuild task will accomplish the same thing.
:::

5. Add `/p:OutDir=$(build.StagingDirectory)` to the MSBuild Arguments so that the built artifacts are separated from the source code.

:::figure
![MSBuild arguments](/docs/img/deployments/databases/sql-server/images/azure-devops-build-visual-studio-arguments.png)
:::

6. Click on the **+**, select **Package**, and select **Package Application for Octopus**.

:::div{.hint}
The Octopus Deploy extension is available in the Marketplace; install the extension if you haven't already done so.
:::

7. Add the properties for the task:
- **Package ID**: Give the package a meaningful name.
- **Package Format**: Choose whichever package type you wish.
- **Package Version**: Use the build server build number to associate a package version back to a build number.
- **Source Path**: This will be the same path that we set the MSBuild argument to, `$(build.StagingDirectory)`.
- **Output Path**: Location to store the created package.

:::div{.hint}
For Azure DevOps, the build number can be formatted on the Options tab under Build number format. This guide uses the format `$(Year:yyyy).$(Month).$(DayOfMonth).$(Rev:r)`.
:::

8. Expand the Advanced Options section and add:
- **Include**: The only file we need for deployment is the .dacpac itself. Add the filename here; this example uses `OctoFXDemo.dacpac`.
9. The final step in the definition pushes the package to a repository. This guide uses Octopus Deploy's built-in package repository. Click on the **+**, select **Package**, and select **Push Package(s) to Octopus**.
10. Next, create a connection to the Octopus Server by clicking **+ New**, add the connection details, then click **OK**.
11. Select the space in your Octopus instance to push to from the drop-down menu.
12. Enter the package(s) you would like pushed to the Octopus repository, either as individual packages or using wildcard syntax:
    1. Individual packages, for instance, `$(build.StagingDirectory)\OctoFXDemo.dacpac.$(Build.BuildNumber).nupkg`
    2. A wildcard, for instance, `$(build.StagingDirectory)\*.nupkg`.

Queue the build to push the artifact to the Octopus Server:

:::figure
![](/docs/img/deployments/databases/sql-server/images/azure-devops-build-successful.png)
:::

## Create the Octopus Deploy project

Now that the build server has been configured to push the artifact to the Octopus Server, we need to create a project in Octopus Deploy to deploy the package.

1. From the Octopus Web Portal, click the **Projects** tab.
2. Select the Project Group and click **ADD PROJECT**.
3.
Give the project a unique name, a description (optional), select the Project Group and the Lifecycle, and click **SAVE**.

### Define the project variables

1. Click **Variables** from the project's overview screen.
2. Define the following variables:
- `Project.SQLServer.Name`
- `Project.SQLServer.Admin.User.Name` (optional)
- `Project.SQLServer.Admin.User.Password` (optional)
- `Project.Database.Name`
- `Project.DACPAC.Name`

It is considered best practice to namespace your variables. Doing this helps prevent any variable name conflicts from variable sets or step template variables. Prefixing `Project.` to the front indicates that this is a project variable.

:::div{.hint}
If you're using Integrated Authentication with Windows, you do not need either of the `Project.SQLServer.Admin*` variables.
:::

:::figure
![The project variables in the Octopus Web Portal](/docs/img/deployments/databases/sql-server/images/octopus-project-variables.png)
:::

Note, both `Project.SQLServer.Admin.User.Password` and `Project.SQLServer.Name` have multiple values that are scoped to different environments. Learn more about [scoping variables](/docs/projects/variables/getting-started/#scoping-variables).

### Define the deployment process

With variables defined, we can add steps to our deployment process.

1. Click the **Process** tab.
2. Click **ADD STEP**.
3. Search for `dacpac` steps, select the **SQL - Deploy DACPAC using SqlPackage** step, and enter the following details:
- **DACPACPackageName**: The name of the dacpac file. The `Project.DACPAC.Name` variable was created for this field.
- **Publish profile name**: Complete this field if you use Publish profiles.
- **Report**: True.
- **Script**: True.
- **Deploy**: False.
- **Target Servername**: `Project.SQLServer.Name` variable.
- **Target Database**: `Project.Database.Name` variable.
- **Authentication type**: Choose the authentication for your use case.
- **Username**: `Project.SQLServer.Admin.User.Name` variable (used only with the SQL Authentication type).
- **Password**: `Project.SQLServer.Admin.User.Password` variable (used only with the SQL Authentication type).
- **DACPAC Package**: The package from the repository, OctoFXDemo.dacpac for this guide.
- **Command Timeout**: Override the default script execution timeout.
- **SqlPackage executable location**: If you have sqlpackage.exe installed, specify the location; otherwise, leave blank to dynamically download it.
- **Additional arguments**: Any additional sqlpackage.exe arguments not provided by the template.

4. Add a manual intervention step, scoped to production, so the report from the previous step can be examined before deploying to production.
5. Add another **SQL - Deploy DACPAC using SqlPackage** step, and change the Report and Script values to `False`, and the Deploy value to `True`.

The deployment process should look like this:

:::figure
![](/docs/img/deployments/databases/sql-server/images/octopus-project-steps.png)
:::

### Create and deploy a release

1. Create a release by clicking on the **CREATE RELEASE** button.
2. Click **SAVE**.
3. Click the **DEPLOY TO DEVELOPMENT** button.
4. Finally, click **DEPLOY**.

The results will look like:

:::figure
![](/docs/img/deployments/databases/sql-server/images/octopus-project-deploy-complete.png)
:::

The first part of this process gathers the changes and creates two [artifacts](/docs/projects/deployment-process/artifacts): an XML file that reports which objects will be changed, and the script it will use to apply those changes. The deployment (deploy DACPAC) uses that generated script and applies it to the target so the database matches the desired state.
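Behind the step template, both phases drive SqlPackage with different actions. A minimal sketch of the command lines involved, in Python for illustration (`DeployReport`, `Script`, and `Publish` are documented SqlPackage actions; the server, database, and file names are hypothetical, and the template may pass additional flags):

```python
def sqlpackage_args(action, dacpac, server, database, output=None):
    """Build a SqlPackage command line for one phase of the process above.

    'DeployReport' emits an XML change report, 'Script' emits the deployment
    SQL, and 'Publish' applies the changes to the target database.
    """
    args = [
        "sqlpackage",
        f"/Action:{action}",
        f"/SourceFile:{dacpac}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
    ]
    if output is not None:
        # Report and Script phases write their result to a file, which the
        # step attaches as an Octopus artifact for the manual intervention.
        args.append(f"/OutputPath:{output}")
    return args

# Phase 1: report and script only (Deploy = False in the step above).
print(" ".join(sqlpackage_args("DeployReport", "OctoFXDemo.dacpac", "sql01", "OctoFXDemo", "changes.xml")))
# Phase 2: apply the changes (Deploy = True).
print(" ".join(sqlpackage_args("Publish", "OctoFXDemo.dacpac", "sql01", "OctoFXDemo")))
```

This mirrors the two-step process: generate artifacts for review first, then publish the same DACPAC once the manual intervention is approved.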
# Blue-green deployments in IIS

Source: https://octopus.com/docs/deployments/patterns/blue-green-deployments-with-octopus/blue-green-deployments-in-iis.md

With some custom scripting you can achieve reduced-downtime deployments in IIS on a single server, without the need for an external load-balancer. This might help if you only deploy a single instance of your application, or if you cannot control the load-balancer itself but you can control IIS and your deployments. In this case, blue and green are not separate environments; they are different web site and application pool instances, but the basic premise is exactly the same.

The general idea is:

1. Deploy a new instance of your web application and warm it up.
2. Use an on-server reverse-proxy to seamlessly switch new incoming requests to the new instance.
3. Delete the old instance once it has finished processing outstanding requests.

:::div{.hint}
**A reverse-proxy or some kind of router is required**

Changing the configuration of a web site in IIS (like physical path or bindings) **always** results in the application pool being recycled. The default [IIS websites and application pools](/docs/deployments/windows/iis-websites-and-application-pools) step in Octopus will try to reuse an existing web site in IIS (or create one for you), and as the last step it will [update the physical path in IIS](https://github.com/OctopusDeploy/Calamari/blob/master/source/Calamari/Scripts/Octopus.Features.IISWebSite_BeforePostDeploy.ps1). This causes a minimum of downtime, especially if you have [allowed overlapping rotation on your application pool](https://msdn.microsoft.com/en-us/library/microsoft.web.administration.applicationpoolrecycling.disallowoverlappingrotation(v=vs.90).aspx). However, to achieve truly zero-downtime deployments of IIS Web Applications, you must use a reverse-proxy or some kind of routing technology.
:::

## General steps for zero-downtime deployments in IIS

:::div{.hint}
Every scenario is slightly different, which is why this page is written as a general guide rather than a step-by-step walk-through. This rough example should provide a strong starting point for reducing downtime in your deployments to IIS.
:::

The general steps for this kind of deployment are:

1. Use a [custom script](/docs/deployments/custom-scripts) step to calculate a new port number so you can configure a binding to warm up the new instance of your application. See this [blog post](https://octopus.com/blog/changing-website-port-on-each-deployment) for more details.
   * The new port number should end up in a variable like `#{Octopus.Action[Calculate port number].Output.Port}`.
2. Use the [IIS Websites and Application Pools](/docs/deployments/windows/iis-websites-and-application-pools) step to deploy a new instance of your web application into a new Web Site and Application Pool.
   * Use an expression like `MyApp-#{Octopus.Release.CurrentForEnvironment.Number}` for the Web Site Name and Application Pool Name.
   * Configure a binding to `http://localhost:#{Octopus.Action[Calculate port number].Output.Port}`.
3. Make sure your new instance is warmed up and completely ready to process requests.
   * This may involve making some requests to the localhost binding you configured earlier.
4. Start routing new requests to the new instance.
   * You might decide to perform on-the-fly reconfiguration of your on-server reverse-proxy (ARR, IIS Web Farm, NGINX, etc.).
   * Alternatively, you might configure the on-server reverse-proxy to use health checks to determine which instances are able to handle requests.
5. Use a custom script step to delete the old Web Site and Application Pool.
   * The name of the previous instance can be calculated with an expression like this: `MyApp-#{Octopus.Release.PreviousForEnvironment.Number}`.
* You may want to wait for outstanding web requests to finish processing using something like this: `Get-Item IIS:\AppPools\MyApp-#{Octopus.Release.PreviousForEnvironment.Number} | Get-WebRequest`.

### Using application request routing (ARR)

You can achieve this kind of result by using [ARR](https://www.iis.net/downloads/microsoft/application-request-routing) as a reverse proxy to your Web Site. You will need to configure a [Web Farm](https://www.iis.net/learn/web-hosting/scenario-build-a-web-farm-with-iis-servers/overview-build-a-web-farm-with-iis-servers) in IIS and use ARR to route requests to the Web Farm. You can then choose how you want to switch between active instances of your application.

[Kevin Reed](https://kevinareed.com/) has written a nice blog post on how he achieves [Blue/Green deployments using ARR](https://kevinareed.com/2015/11/07/how-to-deploy-anything-in-iis-with-zero-downtime-on-a-single-server/).

### Using NGINX

You can achieve this kind of result using an NGINX server as a reverse proxy to your Web Site. The latest versions of NGINX provide easier support for [on-the-fly reconfiguration](https://www.nginx.com/products/on-the-fly-reconfiguration/).

## Learn more

- [View Blue/Green deployment examples on our samples instance](https://oc.to/PatternBlueGreenSamplesSpace).
- [Blue/Green deployment knowledge base articles](https://oc.to/BlueGreenTaggedKBArticles).
- [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1).

# Multi-region deployment pattern

Source: https://octopus.com/docs/deployments/patterns/multi-region-deployment-pattern.md

## Scenario

Your application is deployed to multiple geographic regions (or multiple data centers) to meet your end-customers' performance needs (think latency) or legal requirements (like data sovereignty).
:::figure ![](/docs/img/deployments/patterns/images/5865791.png) ::: ## Strict solution using environments You can use [Environments](/docs/infrastructure/environments) to represent each region or data center. In the example below we have defined a Dev and Test Environment as per normal, and then configured two "production" Environments, one for each region we want to deploy into. :::figure ![](/docs/img/deployments/patterns/images/multi-tenant-region.png) ::: By using this pattern you can: 1. Use [lifecycles](/docs/releases/lifecycles) to define a strict process for promotion of releases between your regions. *Lifecycles can be used to design both simple and complex promotion processes.* * For example, you may want to test releases in Australia before rolling them out to the USA, and then to Europe. * In another example, you may want to test releases in Australia before rolling them out simultaneously to all other regions. 2. Scope region-specific variables to the region-specific Environments. 3. Quickly see which releases are deployed to which regions on the main dashboard. 4. Quickly promote releases through your regions using the Project Overview. 5. Use [Scheduled Deployments](/docs/releases/#scheduling-a-deployment) to plan deployments for times of low usage. :::div{.success} Environments and Lifecycles are a really good solution if you want to enforce a particular order of deployments through your regions. ::: ## Rolling solution [Cloud Regions](/docs/infrastructure/deployment-targets/cloud-regions/) enable you to configure [rolling deployments](/docs/deployments/patterns/rolling-deployments-with-octopus) across your regions or data centers. In this case you can scope variables to the Cloud Regions and deploy to all regions at once, but you cannot control the order in which the rolling deployment executes. :::figure ![](/docs/img/deployments/patterns/images/production.png) ::: By using this pattern you can: 1. 
Scope region-specific variables to the Cloud Region targets. 2. Conveniently deploy to all regions at the same time. :::div{.success} If you don't mind which order your regions are deployed, or you always upgrade all regions at the same time, Cloud Regions are probably the right fit for you. ::: ## Tenanted solution Alternatively you could create [Tenants](/docs/tenants) to represent each region or data center. By doing so you can: 1. Use [variable templates](/docs/projects/variables/variable-templates) to prompt you for the variables required for each region (like the storage account details for that region) and when you introduce a new region Octopus will prompt you for the missing variables: ![](/docs/img/deployments/patterns/images/australiavariables.png) 2. Provide logos for your regions to make them easier to distinguish: ![](/docs/img/deployments/patterns/images/tenantlogs.png) 3. Quickly see the progress of deploying the latest release to your entire production environment on the main dashboard: ![](/docs/img/deployments/patterns/images/dashboard.png) 4. Quickly see which releases have been deployed to which regions using the Dashboard and Project Overview: ![](/docs/img/deployments/patterns/images/projectdashboard.png) 5. Quickly promote releases to your production regions, in a particular sequence, or simultaneously: ![](/docs/img/deployments/patterns/images/projectdashboardrelease.png) 6. Use [Scheduled Deployments](/docs/releases/#scheduling-a-deployment) to plan deployments for times of low usage: ![](/docs/img/deployments/patterns/images/scheduleddeployment.png) You do give up the advantage of enforcing the order in which you deploy your application to your regions, but you gain the flexibility to promote to your regions in different order depending on the circumstances. :::div{.success} Tenants offer a balanced approach to modeling multi-region deployments, offering a measure of control and flexibility. 
:::

## Conclusion

[Environments](/docs/infrastructure/environments/), [Tenants](/docs/tenants/) and [Cloud Regions](/docs/infrastructure/deployment-targets/cloud-regions) can all be used to model multi-region deployments in Octopus, but each is optimized for a particular kind of situation. Choose the one that suits your needs best!

## Learn more

- [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1).

# Terraform step configuration with Octopus

Source: https://octopus.com/docs/deployments/terraform/working-with-built-in-steps.md

Octopus provides four built-in step templates for managing and interacting with your Terraform code:

- `Apply a Terraform template`
- `Destroy Terraform resources`
- `Plan to apply a Terraform template`
- `Plan a Terraform destroy`

:::figure
![Built-in Terraform step badges](/docs/img/deployments/terraform/working-with-built-in-steps/images/terraform-step-badges.png)
:::

All four of the built-in Terraform steps provide common configuration points you can use to control how the steps execute your Terraform code.

:::div{.hint}
While these are the options common to each step, there are additional ways to interact with and extend these steps, specifically using [Terraform plan outputs](/docs/deployments/terraform/plan-terraform/#plan-output-format) and [Terraform output variables](/docs/deployments/terraform/terraform-output-variables).
:::

## Managed Accounts

You can optionally prepare the environment that Terraform runs in using the details defined in [accounts](/docs/infrastructure/accounts) managed by Octopus. If an account is selected, those credentials do not need to be included in the Terraform template.

:::div{.hint}
Using credentials managed by Octopus is optional, and credentials defined in the Terraform template take precedence over any credentials defined in the step.
:::

## Template section

The Terraform template can come from three sources:

- Directly entered source code
- Files in a package
- Files in a Git repository (*New!*)

### Source code

The first option is to paste the template directly into the step. This is done by selecting the `Source code` option and clicking the `ADD SOURCE CODE` button.

:::figure
![Source Code](/docs/img/deployments/terraform/working-with-built-in-steps/images/step-aws-sourcecode.png)
:::

This will present a dialog in which the Terraform template can be pasted, in either JSON or HCL.

:::figure
![Source Code Dialog](/docs/img/deployments/terraform/working-with-built-in-steps/images/step-aws-code-dialog.png)
:::

Once the `OK` button is clicked, the input variables defined in the template will be shown under the `Variables` section.

:::figure
![Parameters](/docs/img/deployments/terraform/working-with-built-in-steps/images/step-parameters.png)
:::

Terraform variables are either strings, lists, or maps. Strings (including numbers and `true`/`false`) are supplied without quotes, for example `my string`, `true` or `3.1415`.

Lists and maps are supplied as raw HCL or JSON structures, depending on the format of the template. For example, if the template is written in HCL, a list variable would be provided as `["item1", {item2="embedded map"}]` and a map variable would be provided as `{item1="hi", item2="there"}`. If the template is written in JSON, a list variable would be provided as `["item1", {"item2": "embedded map" }]` and a map variable would be provided as `{"item1": "hi", "item2": "there"}`.

### Package

The second option is to use the files contained in a package. This is done by selecting the `File inside a package` option and specifying the package. The contents of the package will be extracted, and Terraform will automatically detect the files to use. See the [Terraform documentation](https://www.terraform.io/docs/configuration/load.html) for more details on the file load order.
You can optionally run Terraform from a subdirectory in the package by specifying the path in the `Terraform template directory` field. The path must be relative (i.e. without a leading slash). If your package has the Terraform templates in the root folder, leave this field blank.

:::div{.hint}
Given that Terraform templates and variable files are plain text, you may find it convenient to use the GitHub Repository Feed to provide the packages used by Terraform steps. Using GitHub releases means you do not have to manually create and upload a package; instead you can tag a release and download it directly from GitHub.
:::

:::figure
![Package](/docs/img/deployments/terraform/working-with-built-in-steps/images/step-aws-package.png)
:::

### Git repository

:::div{.info}
Octopus version `2024.1` added support for Terraform files stored in Git repositories. You can find more information about this feature in this [blog post on using Git resources directly in deployments](https://octopus.com/blog/git-resources-in-deployments).
:::

The third option is to use files contained in a Git repository. This can streamline your deployment process by reducing the number of steps required to get files into Octopus, as you no longer need to package the files and push them to a feed.

To configure Terraform steps to use a Git repository, select the `Git Repository` option as your Template Source.

#### Database projects

If you are storing your project configuration directly in Octopus (i.e. not in a Git repository using the [Configuration as code feature](/docs/projects/version-control)), you can source your files from a Git repository by entering the details of the repository directly on the step, including:

- URL
- Credentials (either anonymous or selecting a Git credential from the Library)

When creating a Release, you choose the tip of a branch for your files. The commit hash for this branch is saved to the Release.
This means redeploying that release will only ever use that specific commit and not the *new* tip of the branch.

#### Version-controlled projects

If you are storing your project configuration in a Git repository using the [Configuration as code feature](/docs/projects/version-control), in addition to the option above, you can source your files from the same Git repository as your deployment process by selecting **Project** as the Git repository source. When creating a Release using this option, the commit hash used for your deployment process will also be used to source the files.

### Variable replacements

Variable replacement is performed before Terraform is executed. When deploying a template from a package or Git repository, all `*.tf`, `*.tfvar`, `*.tf.json` and `*.tfvar.json` files will have variable substitution applied to them by default. You can disable the automatic substitution by deselecting `Replace variables in default Terraform files`. You can also have variable substitution applied to additional files by defining file names in the `Target files` field.

For example, if you were deploying from a package and your template file looked like this:

```hcl
provider "aws" { }

resource "aws_instance" "example" {
  ami           = "#{AMI}"
  instance_type = "m3.medium"

  tags {
    Name = "My EC2 Instance"
  }
}
```

Then the value from the project variable `AMI` would be substituted for the marker `#{AMI}`.

When applying an inline template, the variable fields can also include replacement markers. For example, if a map variable for an HCL template was defined as `{"key" = "value", #{MapValues}}` and the Octopus project had a variable called `MapValues` defined as `"key2" = "value2"`, then the final variable would resolve to `{"key" = "value", "key2" = "value2"}`.

See the [variable substitution](/docs/projects/variables/variable-substitutions) documentation for more information.
### Additional variable files

The `Additional variable files` option contains a new-line separated list of variable files to use with the deployment. All files called `terraform.tfvars`, `terraform.tfvars.json`, `*.auto.tfvars` and `*.auto.tfvars.json` are automatically loaded by Terraform, and do not need to be listed here. However, you may want to include environment-specific variable files by using file names built with variable substitution, such as `#{Octopus.Environment.Name}.tfvars`. Each line entered into this field will be passed to Terraform as `-var-file '<variable file name>'`.

# IIS Websites and application pools

Source: https://octopus.com/docs/deployments/windows/iis-websites-and-application-pools.md

Configuring IIS is an essential part of deploying any ASP.NET web application. Octopus has built-in support for configuring IIS Web Sites, Applications, and Virtual Directories.

1. From your project's overview page, click **DEFINE YOUR DEPLOYMENT PROCESS**.
1. Click **ADD STEP**, and then select the **Deploy to IIS** step.
1. Give the step a name.
1. Select the package feed and enter the package ID of the package to be deployed.
1. Choose the IIS deployment type:
   - [Web Site](#IISWebsitesandApplicationPools-DeployIISWebSiteweb-site)
   - [Virtual Directory](#iiswebsitesandapplicationpools-DeployIISVirtualDirectoryvirtual-directory)
   - [Web Application](#IISWebsitesandApplicationPools-DeployIISWebApplicationweb-application)

Understanding the difference between Sites, Applications, and Virtual Directories is important to using the IIS Websites and Application Pools features in Octopus effectively. Learn more about [Sites, Applications, and Virtual Directories in IIS](https://www.iis.net/learn/get-started/planning-your-iis-architecture/understanding-sites-applications-and-virtual-directories-on-iis).
## Deploy IIS web site {#IISWebsitesandApplicationPools-DeployIISWebSiteweb-site}

You need to fill out the following fields for an IIS Web Site deployment:

| Field | Meaning | Examples | Notes |
| ------------------------- | ---------------------------------------- | ---------------------------------------- | ---------------------------------------- |
| **Web Site Name** | The name of the IIS Web Site to create (or reconfigure, if the site already exists). | `MyWebSite` | |
| **Physical path** | The physical path on disk this Web Site will point to. | `/Path1/Path2/MySite`, `#{MyCustomInstallationDirectory}` | You can specify an absolute path, or a relative path inside the package installation directory. |
| **Application Pool name** | Name of the Application Pool in IIS to create (or reconfigure, if the application pool already exists). | `MyAppPool` | |
| **.NET CLR version** | The version of the .NET Common Language Runtime this Application Pool will use. | `v2.0`, `v4.0` | Choose v2.0 for applications built against .NET 2.0, 3.0 or 3.5. Choose v4.0 for .NET 4.0 or 4.5. |
| **Identity** | Which account the Application Pool will run under. | `Application Pool Identity`, `Local Service`, `Local System`, `Network Service`, `Custom user (you specify the username/password)` | |
| **Start mode** | Specifies whether the IIS Web Site and/or Application Pool are started after a successful release. | `IIS Application Pool and IIS Web Site`, `IIS Application Pool Only`, `Do not start either` | |
| **Bindings** | Specify any number of HTTP/HTTPS bindings that should be added to the IIS Web Site. | | |
| **Authentication modes** | Choose which authentication mode(s) IIS should enable. | `Anonymous`, `Basic`, `Windows` | You can select more than one authentication mode. |

## Deploy IIS virtual directory {#iiswebsitesandapplicationpools-DeployIISVirtualDirectoryvirtual-directory}

:::div{.success}
The IIS Virtual Directory step requires a parent Web Site to exist in IIS before it runs. You can create a chain of steps like this:

1. Make sure the parent Web Site exists in IIS and is configured correctly.
2. Create any number of Web Applications and Virtual Directories as children of the parent Web Site.
:::

You need to fill out the following fields for an IIS Virtual Directory deployment:

| Field | Meaning | Examples | Notes |
| ------------------------ | ---------------------------------------- | ---------------------------------------- | ---------------------------------------- |
| **Parent Web Site name** | The name of the parent IIS Web Site. | `Default Web Site`, `MyWebSite` | The parent Web Site must exist in IIS before this step runs. This step will not create the Web Site for you. |
| **Virtual path** | The relative path from the parent IIS Web Site to the Virtual Directory. | If you want a Virtual Directory called `MyDirectory` belonging to the Site `MySite` as part of the Application `MyApplication`, you would set the Virtual Path to `/MyApplication/MyDirectory`. | All parent applications/directories must exist. Does not need to match the physical path. |
| **Physical path** | The physical path on disk this Virtual Directory will point to. | `/Path1/Path2/MyDirectory`, `#{MyCustomInstallationDirectory}` | You can specify an absolute path, or a relative path inside the package installation directory. |

:::div{.success}
The Virtual Path and Physical Path do not need to match, which is one of the true benefits of IIS. You can create a virtual mapping from a URL to a completely unrelated physical path on disk. See [below](/docs/deployments/windows/iis-websites-and-application-pools) for more details.
::: :::div{.warning} We use PowerShell to create virtual and physical directories. There is a known limitation with PowerShell which prevents the creation of virtual directories with a leading dot directly under the parent website in IIS. There are two workarounds for this. First, you can manually create a virtual directory on the server using the IIS manager. Alternatively, you can create a physical directory with the same name as your virtual directory's target physical directory where your site or application will be installed. For example, you might create a physical directory in the website installation directory called `.well-known`, and then configure your IIS deployment step to create a virtual directory directly under the parent website directory. This issue has been documented [here](https://github.com/OctopusDeploy/Issues/issues/6586). ::: ## Deploy IIS web application {#IISWebsitesandApplicationPools-DeployIISWebApplicationweb-application} :::div{.success} The IIS Web Application step requires a parent Web Site to exist in IIS before it runs. You can create a chain of steps like this: 1. Make sure the parent Web Site exists in IIS and is configured correctly. 2. Create any number of Web Applications and Virtual Directories as children of the parent Web Site. ::: You need to fill out the following fields for an IIS Web Application deployment: :::div{.success} The Virtual Path and Physical Path do not need to match which is one of the true benefits of IIS. You can create a virtual mapping from a URL to a completely unrelated physical path on disk. See [below](/docs/deployments/windows/iis-websites-and-application-pools) for more details. ::: | Field | Meaning | Examples | Notes | | ------------------------- | ---------------------------------------- | ---------------------------------------- | ---------------------------------------- | | **Parent Web Site Name** | The name of the parent IIS Web Site. 
| `Default Web Site`, `MyWebSite` | The parent Web Site must exist in IIS before this step runs. This step will not create the Web Site for you. | | **Virtual Path** | The relative path from the parent IIS Web Site to the Web Application. | If you want a Web Application called `MyApplication` belonging to the Site `MySite` you would set the Virtual Path to `/MyApplication`. | All parent applications/directories must exist. Does not need to match the physical path. | | **Physical path** | The physical path on disk this Web Application will point to. | `/Path1/Path2/MyApplication`, `#{MyCustomInstallationDirectory}`. | You can specify an absolute path, or a relative path inside the package installation directory. | | **Application Pool name** | Name of the Application Pool in IIS to create (or reconfigure, if the Application Pool already exists). | | | | **.NET CLR version** | The version of the .NET Common Language Runtime this Application Pool will use. | `v2.0`, `v4.0` | Choose v2.0 for applications built against .NET 2.0, 3.0 or 3.5. Choose v4.0 for .NET 4.0 or 4.5. | | **Identity** | Which account the Application Pool will run under. | `Application Pool Identity`, `Local Service`, `Local System`, `Network Service`, `Custom user (you specify the username/password)` | | ## How Octopus Deploys your web site {#IISWebsitesandApplicationPools-HowOctopusDeploysyourWebSite} Out of the box, Octopus will do the right thing to deploy your Web Site using IIS, and the conventions we have chosen will eliminate a lot of problems with file locks, leaving stale files behind, and causing multiple Application Pool restarts. By default, Octopus will follow the conventions described in [Deploying packages](/docs/deployments/packages/) and apply the different features you select in the order described in [Package deployment feature ordering](/docs/deployments/packages/package-deployment-feature-ordering). 
:::div{.success}
Avoid using the [Custom Installation Directory](/docs/projects/steps/configuration-features/custom-installation-directory) feature unless you are absolutely required to put your packaged files into a specific physical location on disk.
:::

Octopus performs the following steps:

1. Acquire the package as optimally as possible (using the local package cache and [delta compression](/docs/deployments/packages/delta-compression-for-package-transfers)).
2. Create a new folder for the deployment, which avoids many common problems like file locks, leaving stale files behind, and multiple Application Pool restarts. For example: `C:\Octopus\Applications\[Tenant name]\[Environment name]\[Package name]\[Package version]\` (where `C:\Octopus\Applications` is the Tentacle application directory you configured when installing Tentacle).
3. Extract the package into the newly created folder.
4. Execute each of your [custom scripts](/docs/deployments/custom-scripts/) and the [deployment features](/docs/projects/steps/configuration-features/) you've configured, [following this order by convention](/docs/deployments/packages/package-deployment-feature-ordering).
5. As part of this process, configure the IIS Web Site, Web Application, or Virtual Directory in a single transaction with IIS, including updating the Physical Path to point to the new folder.
6. Send [output variables](/docs/projects/variables/output-variables/) and deployment [artifacts](/docs/projects/deployment-process/artifacts) from this step back to the Octopus Server.

:::div{.success}
You can see exactly how Octopus integrates with IIS in the [open-source Calamari library](https://github.com/OctopusDeploy/Calamari/blob/master/source/Calamari/Scripts/Octopus.Features.IISWebSite_BeforePostDeploy.ps1).
:::

## How to Take Your Website Offline During Deployment

An IIS website can be taken offline by placing an `app_offline.htm` file into the root directory of the website.
The contents of that file will be shown to anyone accessing the site. This is useful if you do not want users to access the site while the deployment is being performed. Placing the file also recycles the App Pool, releasing any file locks the site may have.

This can be done by including an `app_online.htm` file in your website and then renaming it to `app_offline.htm` at the start of the deployment, either via a script or the `IIS - Change App Offline` step in the [community library](/docs/projects/community-step-templates).

## Learn more

- Generate an Octopus guide for [IIS and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=IIS).

# Virtual hard drive deployments with Octopus

Source: https://octopus.com/docs/deployments/windows/virtual-hard-drive-deployments.md

:::div{.warning}
The Deploy a VHD step requires the target machine to be running Windows Server 2012 or newer, and the Tentacle service to have Administrator privileges.
:::

Octopus Deploy has built-in support for deploying Virtual Hard Drives. The feature allows you to deploy a package containing a VHD while taking advantage of Octopus features such as variable substitution in configuration files or running .NET configuration transforms on files within the VHD. Octopus can then optionally attach the VHD to an existing Hyper-V virtual machine.

## Adding a VHD step {#add-vhd-step}

To deploy a Virtual Hard Drive, add a *Deploy a VHD* step. For information about adding a step to the deployment process, see the [add step](/docs/projects/steps) section.
:::figure
![](/docs/img/deployments/windows/images/deploying-virtual-hard-drives-add-step.png)
:::

## Configuring the step {#configure-step}

:::figure
![](/docs/img/deployments/windows/images/deploying-virtual-hard-drives-configure-step.png)
:::

### Step 1: Select a package {#select-package}

Use the Package Feed and Package ID fields to select the [package](/docs/packaging-applications) containing the Virtual Hard Drive (\*.vhd or \*.vhdx) to be installed. There must be a single VHD in the root of the package. The package may contain deployment scripts and other artifacts required by those scripts, but only a single VHD.

### Step 2: Configure VHD options {#configure-vhd-options}

| Field | Meaning |
| --- | --- |
| VHD application path | The relative path to your application within your VHD. Octopus will use this to run deployment features, such as config transforms and variable substitution in files, only on this folder, rather than on the entire VHD. Examples: `MyApplication` and `PublishedApps\MyApplication` |
| Add VHD to Hyper-V | Attach the VHD to an existing Hyper-V virtual machine. Octopus will shut down the virtual machine, add the VHD (replacing the current first virtual drive if there is one), then restart the virtual machine. |
| Virtual Machine Name | The name of the virtual machine to add the VHD to. |

## Accessing the VHD in deployment scripts {#access-vhd-in-deployment-scripts}

When a VHD is deployed, the following steps take place:

1. The package is extracted to a newly created folder.
2. The VHD from the package is mounted. The mount point is available in deploy scripts using the `OctopusVhdMountPoint` variable (for example `$OctopusVhdMountPoint` in PowerShell).
3. Any `PreDeploy` scripts in your package are run.
4. Enabled step features such as structured configuration variables, .NET configuration transforms, and substituting variables in files are run against the package folder and the application path within the mounted VHD.
5. `Deploy` scripts are run.
6. The VHD is unmounted.
7. If enabled, the VHD is attached to a Hyper-V virtual machine. The step waits for the virtual machine to reboot, so you should be able to interact with the running virtual machine in your `PostDeploy` scripts.
8. `PostDeploy` scripts are run.

## VHDs with multiple partitions {#multiple-partitions}

If you have a VHD with multiple partitions, in step 2 above all partitions are mounted, and the mount points are available to your scripts in `OctopusVhdMountPoint_0`, `OctopusVhdMountPoint_1`, etc. The `OctopusVhdMountPoint` variable will contain the mount point of the first partition that was actually mounted (see below for how to avoid mounting all partitions).

To change the behavior when there are multiple partitions, create Octopus variables against your project indexed to the partition (starting at 0). If you have more than one Deploy a VHD step, you will need to scope the variables to each step.

| Octopus Variable | Value | Meaning |
| --- | --- | --- |
| OctopusVhdPartitions[0].Mount | false | Do not mount this partition |
| OctopusVhdPartitions[0].ApplicationPath | A relative path | Override the VHD application path from the "Configure VHD options" section for just this partition |

# Create a Project in Octopus

Source: https://octopus.com/docs/getting-started/first-deployment/legacy-guide/2022/create-projects.md

[Getting Started - Add Projects And Project Groups](https://www.youtube.com/watch?v=gfaRUIlQybA)

Projects are used to collect all the assets that make up your deployment processes. To deploy our simple hello world script, we first need a project.
:::figure
![The projects page in the Octopus Web Portal](/docs/img/shared-content/concepts/images/projects.png)
:::

1. Navigate to the **Projects** tab, and click **ADD PROJECT**.
1. Give the project a name, for instance, *Hello, world*, and click **Save**.

**Optional**

By default, Octopus Deploy will store the deployment process, runbook process, and variables in the back-end SQL Server. From **Octopus 2022.1**, you have the option to store the *deployment process* in a git repository.

:::div{.hint}
The ability to store runbook processes and variables will be added in future versions.
:::

To configure the project to use version control:

1. Select the option **Use Version Control for this project**
1. Click **Save and Configure VCS**
1. Enter the git repository URL and credentials.
1. Click **Test** to verify the connection.
1. Click **Save** to save the VCS information.

Learn more about [config as code](/docs/projects/version-control).

The next step will [define the deployment process](/docs/getting-started/first-deployment/legacy-guide/2022/define-the-deployment-process) in the newly created project.

**Further Reading**

For further reading on projects in Octopus Deploy please see:

- [Projects Documentation](/docs/projects)
- [Deployment Documentation](/docs/deployments)
- [Patterns and Practices](/docs/deployments/patterns)

# First Runbook Run

Source: https://octopus.com/docs/getting-started/first-runbook-run.md

This tutorial will help you complete your first runbook run using a sample script on one or more of your servers. The only prerequisite is a running Octopus Deploy instance, either in Octopus Cloud or self-hosted. The tutorial will walk through configuring deployment targets.

This tutorial will take between **15-25 minutes** to complete, with each step taking between **2-3** minutes to complete.

1. [Configure environments](/docs/getting-started/first-runbook-run/configure-runbook-environments)
1. [Create a project](/docs/getting-started/first-runbook-run/create-runbook-projects)
1. [Create a runbook](/docs/getting-started/first-runbook-run/create-a-runbook)
1. [Define a runbook process to run on workers](/docs/getting-started/first-runbook-run/define-the-runbook-process)
1. [Running a runbook](/docs/getting-started/first-runbook-run/running-a-runbook)
1. [Defining and using runbook variables](/docs/getting-started/first-runbook-run/runbook-specific-variables)
1. [Adding deployment targets](/docs/getting-started/first-runbook-run/add-runbook-deployment-targets)
1. [Update runbook process to run on deployment targets](/docs/getting-started/first-runbook-run/define-the-runbook-process-for-targets)
1. [Publishing a runbook](/docs/getting-started/first-runbook-run/publishing-a-runbook)

Before starting the tutorial, if you haven't set up an Octopus Deploy instance, please do so by selecting one of the following options:

- [Octopus Cloud](https://octopus.com/free-signup) -> we host the Octopus Deploy instance for you; it connects to your servers.
- [Self-hosted on a Windows Server](https://octopus.com/free-signup) -> you host it on your infrastructure by [downloading our MSI](https://octopus.com/download) and installing it onto a Windows Server with a SQL Server backend. Learn more about [our installation requirements](/docs/installation/requirements).
- [Self-hosted as a Docker container](https://octopus.com/blog/introducing-linux-docker-image) -> you run Octopus Deploy in a Docker container. You will still need a [free license](https://octopus.com/free-signup).

When you have an instance running, go to the [configure runbook environments page](/docs/getting-started/first-runbook-run/configure-runbook-environments) to get started.

**Further Reading**

This tutorial will run a sample script, first on the default worker or your server; then, it will move on to running that script on your servers.
If you prefer to skip that and start configuring Octopus Deploy runbooks to meet your requirements, please see:

- [Runbook Documentation](/docs/runbooks)
- [Runbook Examples](/docs/runbooks/runbook-examples)

# Create a Project

Source: https://octopus.com/docs/getting-started/first-runbook-run/create-runbook-projects.md

[Getting Started - Add Projects And Project Groups](https://www.youtube.com/watch?v=gfaRUIlQybA)

Projects are used to collect all the assets that make up your deployment processes. To deploy our simple hello world script, we first need a project.

:::figure
![The projects page in the Octopus Web Portal](/docs/img/shared-content/concepts/images/projects.png)
:::

1. Navigate to the **Projects** tab, and click **ADD PROJECT**.
1. Give the project a name, for instance, *Hello, world*, and click **Save**.

**Optional**

By default, Octopus Deploy will store the deployment process, runbook process, and variables in the back-end SQL Server. From **Octopus 2022.1**, you have the option to store the *deployment process* in a git repository.

:::div{.hint}
The ability to store runbook processes and variables will be added in future versions.
:::

To configure the project to use version control:

1. Select the option **Use Version Control for this project**
1. Click **Save and Configure VCS**
1. Enter the git repository URL and credentials.
1. Click **Test** to verify the connection.
1. Click **Save** to save the VCS information.

Learn more about [config as code](/docs/projects/version-control).

The next step will [create a runbook](/docs/getting-started/first-runbook-run/create-a-runbook) in the newly created project.
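The portal steps above can also be automated through the Octopus REST API. The sketch below is an illustration, not an official client: the server URL, API key, and the `ProjectGroups-1`/`Lifecycles-1` IDs are placeholder assumptions you would replace with values from your own instance.

```python
import json
from urllib import request

# Assumptions: replace with your own server URL and a valid API key.
OCTOPUS_URL = "https://example.octopus.app"
API_KEY = "API-XXXXXXXXXXXXXXXX"

def build_project_payload(name, project_group_id, lifecycle_id):
    # Minimal body accepted by POST /api/projects.
    return {
        "Name": name,
        "ProjectGroupId": project_group_id,
        "LifecycleId": lifecycle_id,
    }

def create_project(name, project_group_id="ProjectGroups-1", lifecycle_id="Lifecycles-1"):
    body = json.dumps(build_project_payload(name, project_group_id, lifecycle_id)).encode()
    req = request.Request(
        f"{OCTOPUS_URL}/api/projects",
        data=body,
        headers={"X-Octopus-ApiKey": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)  # the created project resource
```

This is handy when scripting the creation of many similar projects; for day-to-day use, the Web Portal steps above are simpler.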
**Further Reading**

For further reading on Runbooks and projects please see:

- [Projects](/docs/projects)
- [Runbook Documentation](/docs/runbooks)
- [Runbook Examples](/docs/runbooks/runbook-examples)

# AWS accounts

Source: https://octopus.com/docs/infrastructure/accounts/aws.md

To deploy infrastructure to AWS, you can define an AWS account in Octopus. Octopus manages the AWS credentials used by the AWS steps. The AWS account is either a pair of access and secret keys, or the credentials are retrieved from the IAM role assigned to the instance that is executing the deployment.

## Create an AWS account

AWS steps can use an Octopus-managed AWS account for authentication. There are two different account types you can choose from: Access Keys or OpenID Connect.

### Access Key account

See the [AWS documentation](https://oc.to/aws-access-keys) for instructions to create the access and secret keys.

1. Navigate to **Deploy ➜ Manage ➜ Accounts**, click **ADD ACCOUNT**, and select **AWS Account**.
1. Add a memorable name for the account.
1. Provide a description for the account.
1. Enter the **Access Key** and the **Secret Key**.
1. Click **SAVE AND TEST** to save the account and verify the credentials are valid.

### OpenID Connect

:::div{.warning}
Support for OpenID Connect authentication to AWS requires Octopus Server version 2024.1.
:::

To use OpenID Connect authentication, you have to follow the [required minimum configuration](/docs/infrastructure/accounts/openid-connect#configuration). See the [AWS documentation](https://oc.to/aws-oidc) for instructions to configure an OpenID Connect identity provider.

:::div{.info}
**If using the AWS CLI or API to configure the identity provider.** See the [AWS Documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc_verify-thumbprint.html) for instructions on how to obtain the thumbprint of your Octopus Server.
:::

When setting up the identity provider, you need to use the host domain name of your server as the **Audience** value, as configured under **Configuration->Nodes->Server Uri**.

#### Configuring AWS OIDC Account

1. Navigate to **Deploy ➜ Manage ➜ Accounts**, click **ADD ACCOUNT**, and select **AWS Account**.
1. Add a memorable name for the account.
1. Provide a description for the account.
1. Set the **Role ARN** to the ARN from the identity provider associated role. Note that this is different from the ARN of your Identity Provider.
1. Set the **Session Duration** to the Maximum session duration from the role, in seconds.
1. Click **SAVE** to save the account.
1. Before you can test the account, you need to add a condition to the identity provider in AWS under **IAM ➜ Roles ➜ {Your AWS Role} ➜ Trust Relationship**:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::{aws-account}:oidc-provider/{your-identity-provider}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "example.octopus.app:sub": "space:[space-slug]:account:[slug-of-account-created-above]",
          "example.octopus.app:aud": "example.octopus.app"
        }
      }
    }
  ]
}
```

1. Go back to the AWS account in Octopus and click **SAVE AND TEST** to verify the credentials are valid.

Please read [OpenID Connect Subject Identifier](/docs/infrastructure/accounts/openid-connect#subject-keys) on how to customize the **Subject** value.

By default, the role trust policy does not have any conditions on the subject identifier. To lock the role down to particular usages you need to modify the [trust policy conditions](https://oc.to/aws-iam-policy-conditions) and add a condition for the `sub`.
For example, to lock an identity role to a specific Octopus environment for an untenanted deployment, you can update the conditions:

```json
"Condition": {
  "StringEquals": {
    "example.octopus.app:sub": "space:default:project:aws-oidc-testing:environment:dev",
    "example.octopus.app:aud": "example.octopus.app"
  }
}
```

`default`, `aws-oidc-testing`, and `dev` are the slugs of their respective Octopus resources. The `tenant:` segment is omitted because this deployment has no tenant value — when a selected subject key has no value at runtime, both the key and the value are dropped from the subject.

For a tenanted deployment, the subject also includes the tenant slug:

```json
"Condition": {
  "StringEquals": {
    "example.octopus.app:sub": "space:default:project:aws-oidc-testing:tenant:acme:environment:dev",
    "example.octopus.app:aud": "example.octopus.app"
  }
}
```

AWS policy conditions also support complex matching with wildcards and `StringLike` expressions. For example, to lock an identity role to any Octopus environment, you can update the conditions:

```json
"Condition": {
  "StringLike": {
    "example.octopus.app:sub": "space:default:project:aws-oidc-testing:environment:*",
    "example.octopus.app:aud": "example.octopus.app"
  }
}
```

`default` and `aws-oidc-testing` are the slugs of their respective Octopus resources.

:::div{.hint}
AWS steps can also defer to the IAM role assigned to the instance that hosts the Octopus Server for authentication. In this scenario there is no need to create the AWS account.
:::

#### Passing Session Tags

AWS Accounts can be configured to pass [session tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_adding-assume-role-idp) when assuming the AWS IAM role. This can be a useful tactic to allow using a single Octopus AWS Account and AWS IAM Role across many projects or environments, reducing configuration sprawl.

To pass session tags, use the `Custom Claims` field on the AWS OIDC Account.
The Claim should be `https://aws.amazon.com/tags`, and the Value should be a JSON object with a `principal_tags` property as documented in the [AWS docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_adding-assume-role-idp). The example below demonstrates supplying a session tag with a key of `octopus-project` and a value of the project name.

```json
{
  "principal_tags": {
    "octopus-project": ["#{Octopus.Project.Name}"]
  },
  "transitive_tag_keys": [
    "octopus-project"
  ]
}
```

![AWS OIDC Custom Claim](/docs/img/infrastructure/accounts/aws/aws-oidc-custom-claim.png)

You will need to [allow the sts:TagSession action](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_permissions-required) in the Trust relationships policy for the AWS role. For example:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::133577414924:oidc-provider/acme.octopus.app"
      },
      "Action": [
        "sts:AssumeRoleWithWebIdentity",
        "sts:TagSession"
      ],
      "Condition": {
        "StringEquals": {
          "acme.octopus.app:aud": "acme.octopus.app"
        }
      }
    }
  ]
}
```

These session tags can then be used to control access to AWS resources by [tagging the AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources). For example, the policy below allows starting and stopping EC2 instances which are tagged with a key of `octopus-project` and a value matching the project supplied in the session tags, as shown above.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {"aws:ResourceTag/octopus-project": "${aws:PrincipalTag/octopus-project}"}
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
```

![AWS IAM Policy](/docs/img/infrastructure/accounts/aws/aws-iam-ec2-start-stop-policy.png)

## AWS account variables

You can access your AWS account from within projects through a variable of type **AWS Account Variable**. Learn more about [AWS Account Variables](/docs/projects/variables/aws-account-variables).

## Using AWS Service roles for an EC2 instance

AWS allows you to assign a role to an EC2 instance, referred to as an [AWS service role for an EC2 instance](https://oc.to/AwsDocsRolesTermsAndConcepts), and that role can be accessed to generate the credentials that are used to deploy AWS resources and run scripts.

All AWS steps execute on a worker. By default, that will be the [built-in worker](/docs/infrastructure/workers/#built-in-worker) in the Octopus Server. As such, Octopus Server itself would need to be run on an EC2 instance with an IAM role applied to take advantage of this feature. If you use [external workers](/docs/infrastructure/workers/#external-workers) which are their own EC2 instances, they can have their own IAM roles that apply when running AWS steps.

:::div{.hint}
When using the IAM role assigned to either the built-in worker or external worker EC2 instances, there is no need to create an AWS account in Octopus.
:::

## How Octopus workers inherit AWS IAM roles

AWS provides many solutions to inherit IAM roles depending on the platform where the Octopus worker is run. The worker runs through a series of login processes to attempt to inherit an IAM role, assuming the first role that succeeds.

The first login process attempts to inherit the role from a web identity token.
[EKS clusters can host pods that use service accounts linked to IAM roles](https://oc.to/ConfiguringPodsToUseAKubernetesServiceAccount), and these pods expose an environment variable called `AWS_WEB_IDENTITY_TOKEN_FILE` containing the path to a token file mounted in the pod. If this environment variable is defined, the worker will assume the role associated with the web identity token file.

:::div{.hint}
When the `Execute using the AWS service role for an EC2 instance` option is enabled, a worker will first attempt to inherit a pod web identity.
:::

The second login process queries the [Instance Metadata Service](https://oc.to/InstanceMetadataAndUserData) (IMDS), made available to EC2 instances. IMDS is exposed as an HTTP API accessed via `http://169.254.169.254`. The keys required to assume a service role associated with an EC2 instance are generated by calling the IMDS HTTP API.

IMDS has two versions: v1 and v2. [IMDSv2 is available to all EC2 instances](https://oc.to/UseIMDSv2) and can optionally be required in place of IMDSv1. Octopus uses IMDSv2 to inherit IAM roles. The worker assumes that IAM role if the request to generate account tokens from the IMDSv2 HTTP API succeeds.

:::div{.hint}
IMDSv2 adds a security measure limiting the network hops a request can make when accessing the HTTP API to one. This means requests made from a worker running on an EC2 instance will work as expected, as there is one hop from the worker to the IMDS HTTP API. However, if the Octopus worker is running in an EKS pod hosted on an EC2 node, requests to the IMDSv2 HTTP API will fail by default, as these requests make two hops: one from the worker to the pod and a second from the pod to the EC2 node. You can adjust the hop limit using the [modify-instance-metadata-options](https://oc.to/ModifyInstanceMetadataOptions) command to allow requests with more than one hop.
:::

## Manually using AWS account details in a step

A number of steps in Octopus use the AWS account directly.
For example, in the CloudFormation steps, you define the AWS account variable that will be used to execute the template deployment, and the step will take care of passing along the access and secret keys defined in the account.

It is also possible to use the keys defined in the AWS account manually, such as in script steps. First, add the AWS Account as a variable. In the screenshot below, the account has been assigned to the **AWS Account** variable. The **OctopusPrintVariables** variable has been set to `true` to print the variables to the output logs. This is a handy way to view the available variables that can be consumed by a custom script. You can find more information on debugging variables at [Debug problems with Octopus variables](/docs/support/debug-problems-with-octopus-variables).

:::figure
![A project variables screen showing two variables. The first, OctopusPrintVariables, has a value of 'True'. The second, AWS Account, has a value with AWS Account type](/docs/img/infrastructure/accounts/aws/variables.png)
:::

When running a step, the available variables will be printed to the log. In this example, the following variables are shown.

For an Access Key account:

```txt
[AWS Account] = 'amazon-web-services-account'
[AWS Account.AccessKey] = 'YOUR_ACCESS_KEY'
[AWS Account.SecretKey] = '********'
```

For an OpenID Connect account:

```txt
[AWS Account] = 'amazon-web-services-account'
[AWS Account.RoleArn] = 'arn:aws:iam::123456789012:role/test-role'
[AWS Account.SessionDuration] = '3600'
[AWS Account.OpenIdConnect.Jwt] = '********'
```

**AWS Account.AccessKey** is the access key associated with the AWS account, and **AWS Account.SecretKey** is the secret key. The secret key is hidden as asterisks in the log because it is a sensitive value, but the complete key is available to your script.

You can then use these variables in your scripts or other step types. For example, the following PowerShell script would print the access key to the console.
```powershell
Write-Host "$($OctopusParameters["AWS Account.AccessKey"])"
```

## Known AWS connection issue

If you are experiencing SSL/TLS connection errors when connecting to AWS from your Octopus Server, you may be missing the **Amazon Root CA** on your Windows Server. The certificates can be downloaded from the [Amazon Trust Repository](https://www.amazontrust.com/repository/).

## AWS deployments

Learn more about [AWS deployments](/docs/deployments/aws).

# Azure Web App targets

Source: https://octopus.com/docs/infrastructure/deployment-targets/azure/web-app-targets.md

Azure Web App deployment targets allow you to reference existing Web Apps in your Azure subscription, which you can then reference by [target tag](/docs/infrastructure/deployment-targets/target-tags) during deployments.

:::div{.hint}
From version 2022.1, Octopus can discover Azure Web App targets using tags on your Web App cloud resource template.
:::

## Requirements

- You need an [Azure Service Principal account](/docs/infrastructure/accounts/azure/#azure-service-principal) that references your Azure subscription.
- Once your Azure account is set up, you need an existing Azure Web App / App Service within your Azure subscription. To learn more about App Services, see the [Azure App Services documentation](https://docs.microsoft.com/en-us/azure/app-service/) that can help you get started.

If you are dynamically creating the web app during your deployment, check our section about [discovering web app targets](#discovering-web-app-targets) or [creating Web App targets by scripts using service messages](#creating-web-app-targets-by-scripts).

## Discovering web app targets

Octopus can discover Azure Web App targets as part of your deployment using tags on your resource.

:::div{.hint}
From **Octopus 2022.3**, you can configure the well-known variables used to discover Azure Web App targets when editing your deployment process in the Web Portal.
See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information.
:::

To discover targets, use the following steps:

- Add an Azure account variable named **Octopus.Azure.Account** to your project.
- [Add tags](/docs/infrastructure/deployment-targets/cloud-target-discovery/#tag-cloud-resources) to your Azure Web App so that Octopus can match it to your deployment step and environment.
- Add a `Deploy an Azure App Service` or `Deploy an Azure Web App (Web Deploy)` step to your deployment process.

During deployment, the target tag on the step, along with the environment being deployed to, is used to discover Azure Web App targets to deploy to.

From **Octopus 2022.2**, deployment slots within an Azure Web App can also be discovered separately from the Web App they are part of by adding tags to the slot. Any deployment slot discovered during deployment will be created as a separate target in Octopus.

:::div{.hint}
The name of discovered Azure Web Apps changed in **Octopus 2022.2** to include additional information about the resource group. Any Web App targets discovered in **Octopus 2022.1** while this feature was in Early Access Preview will need to be deleted and will be rediscovered during the next deployment.
:::

See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information.

## Creating web app targets

Once you have an App Service configured within your Azure subscription, you are ready to map it to an Octopus deployment target. To create an Azure Web App target within Octopus:

- Navigate to **Deploy ➜ Infrastructure ➜ Deployment Targets ➜ Add Deployment Target**.
- Select the **Azure** tab, then select **Azure Web App** from the list of available targets and click _Next_.
- Fill out the necessary fields, being sure to provide a unique target tag (formerly target role) that clearly identifies your Azure Web App target.
:::figure
![](/docs/img/infrastructure/deployment-targets/azure/web-app-targets/create-azure-web-app-target.png)
:::

:::div{.info}
If you are using a **Standard** or **Premium** Azure Service Plan, you can also select a specific slot as your target. The _Azure Web App Slot_ field will allow you to select one of the slots available on the Web App. If there are no slots, this will be empty.

You can also leave the slot selection blank and specify the slot, by name, on the step too. The slot selected on the step will take precedence over a slot defined on the deployment target.
:::

- After clicking _Save_, your deployment target will be added and go through a health check to ensure Octopus can connect to it.
- If all goes well, you should see your newly created target in your **Deployment Targets** list, with a status of _Healthy_.

:::figure
![](/docs/img/infrastructure/deployment-targets/azure/web-app-targets/deployment-targets-web-app-healthy.png)
:::

### Creating Web App targets by scripts

Azure Web App targets can also be created via a PowerShell cmdlet within a deployment process; this can be especially handy if you are also creating the Azure Web App via a script. See [Managing Resources by script](/docs/infrastructure/deployment-targets/dynamic-infrastructure) for more information on creating Azure Web Apps via a script.

## Troubleshooting

If your Azure Web App target does not successfully complete a health check, you may need to check that your Octopus Server can communicate with Azure. It may be worth checking that your Azure Account is able to complete a _Save and Test_ to ensure Octopus can communicate with Azure. If your Octopus Server is behind a proxy or firewall, you will need to consult with your Systems Administrator to ensure it can communicate with Azure.
## Deploying to Web App targets

To learn about deploying to Azure Web App targets, see our [documentation about this topic](/docs/deployments/azure/deploying-a-package-to-an-azure-web-app).

# Create Azure Web App target command

Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/azure-web-app-target.md

## Azure Web App

Command: **_New-OctopusAzureWebAppTarget_**

| Parameter | Value |
| --- | --- |
| `-name` | Name for the Octopus deployment target |
| `-azureWebApp` | Name of the Azure Web App |
| `-azureWebAppSlot` | Name of the Azure Web App Slot |
| `-azureResourceGroupName` | Name of the Azure Resource Group |
| `-octopusAccountIdOrName` | Name or Id of the Account Resource in Octopus |
| `-octopusRoles` | Comma-separated list of [target tags](/docs/infrastructure/deployment-targets/target-tags) to assign |
| `-updateIfExisting` | Will update an existing Web App target with the same name, or create it if it doesn't exist |
| `-octopusDefaultWorkerPoolIdOrName` | Name or Id of the Worker Pool for the deployment target to use (optional). Added in 2020.6.0. |

### Examples

```powershell
# Using default options
New-OctopusAzureWebAppTarget -name "My Azure Web Application" `
                             -azureWebApp "WebApp1" `
                             -azureResourceGroupName "WebApp1-ResourceGroup" `
                             -octopusAccountIdOrName "Dev Azure Account" `
                             -octopusRoles "AzureWebApp" `
                             -updateIfExisting

# Specifying a default worker pool for the target
New-OctopusAzureWebAppTarget -name "My Azure Web Application" `
                             -azureWebApp "WebApp1" `
                             -azureResourceGroupName "WebApp1-ResourceGroup" `
                             -octopusAccountIdOrName "Dev Azure Account" `
                             -octopusRoles "AzureWebApp" `
                             -octopusDefaultWorkerPoolIdOrName "Worker Pool with Azure Access" `
                             -updateIfExisting
```

:::div{.hint}
If your process creates dynamic deployment targets from a script, and then deploys to those targets in a subsequent step, make sure you add a full [health check](/docs/projects/built-in-step-templates/health-check) step for the role of the newly created targets after the step that creates and registers the targets. This allows Octopus to ensure the new targets are ready for deployment by staging packages required by subsequent steps that perform the deployment.
:::

# Target tags

Source: https://octopus.com/docs/infrastructure/deployment-targets/target-tags.md

Before you can deploy software to your deployment targets, you need to associate them with target tags. This ensures you deploy the right software to the right deployment targets. Typical target tags include:

- product-cluster
- web-server
- app-server
- db-server

Using target tags means the infrastructure in each of your environments doesn’t need to be identical, and the deployment process will know which deployment targets to deploy your software to.

Deployment targets:

- Must have at least one tag
- Can have multiple tags
- Can share tags

## Add target tags \{#create-target-roles}

Target tags are created and saved in the database as soon as you assign them to a deployment target.
Decide on the naming convention you’ll use before creating your first target tag, as you can’t change the case after the target tag has been created. For example, you can’t switch from all lowercase to camel case.

To create a target tag:

1. Register a deployment target or click on an already registered deployment target and go to Settings.
2. In the Target Tags field, enter the target tag you’d like to use (no spaces).
3. Save the target settings.

The target tag has been created and assigned to the deployment target and can be added to other deployment targets. You can check all the target tags assigned to your deployment targets from the **Infrastructure** tab.

[Getting Started - Machine Roles](https://www.youtube.com/watch?v=AU8TBEOI-0M)

## Target tag sets \{#target-tag-sets}

:::div{.warning}
Target tag sets are supported from Octopus version 2026.2.2344.
:::

Target tags are now organized into **target tag sets**, which are [tag sets](/docs/tenants/tag-sets) scoped to deployment targets. This gives you a structured way to group related target tags together, just as you can group tags for tenants, environments, projects, and runbooks.

### The [System] Tag Set \{#system-tag-set}

All existing target tags are automatically migrated into a built-in tag set called **[System] Tag Set**. This set is managed by Octopus and cannot be renamed or deleted.

### Custom target tag sets \{#custom-target-tag-sets}

In addition to the [System] Tag Set, you can create your own target tag sets to organize deployment targets by attributes relevant to your team. For example, you might create a tag set for cloud provider, region, or tier.

To create a custom target tag set:

1. Go to **Deploy ➜ Tag Sets** and click **Add Tag Set**.
2. Give the tag set a name and select **Target** as the scope.
3. Choose the tag set type:
   - **MultiSelect:** Allow multiple tags from this set to be assigned to a target.
   - **SingleSelect:** Allow only one tag from this set to be assigned to a target.
4. Add the tags you want to include in the set.
5. Save the tag set.

Each tag set appears as its own field in the deployment target settings, where you can assign tags from that set to a target.

Learn more about [tag sets](/docs/tenants/tag-sets), including tag set types, scopes, and how to create and manage them.

### Tag validation in deployment steps \{#tag-validation}

When you configure a deployment step, Octopus validates that each target tag referenced by the step exists in a tag set. If a tag does not exist in any tag set, it is displayed with a red **!** icon. Hovering over the icon shows a tooltip explaining that the tag does not exist in any tag set, and that you can remove it or recreate it by adding it to a tag set. This can happen if a tag was deleted from a tag set, or if the step references a tag that was never added to one.

To resolve the warning, either:

- Remove the tag from the step and replace it with a valid tag.
- Add the tag to an existing tag set (or create a new one) so it becomes valid.

The warning is informational — it does not prevent you from saving or deploying. However, a tag that doesn't exist in any tag set won't match any deployment targets.

### API compatibility \{#api-compatibility}

The Octopus REST API represents target tags using a `roles` field on deployment target resources. This field continues to work as before — it returns and accepts the target tag values regardless of which tag set they belong to. No changes are required to existing API integrations.

## Older versions \{#older-versions}

- **Target roles** were renamed to **target tags** in Octopus Deploy 2024.2. The functionality was unchanged. This was only a name change to make our terminology clearer.
- **Target tags** were migrated to **target tag sets** (with a built-in [System] Tag Set) in Octopus version 2026.2.2344. Existing target tags were moved automatically with no action required.
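To illustrate the `roles` field described above, here is a small sketch of reading target tags from an API response. The JSON payload is an abbreviated, hypothetical example of what `GET /api/machines` returns (real resources carry many more fields, and Octopus serializes the field in PascalCase as `Roles`):

```python
import json

# Abbreviated, hypothetical response body from GET /api/machines.
response_body = '''
{
  "Items": [
    {"Name": "web-01", "Roles": ["web-server"]},
    {"Name": "app-01", "Roles": ["app-server", "db-server"]}
  ]
}
'''

machines = json.loads(response_body)["Items"]
# Collect every target that carries a given target tag.
web_targets = [m["Name"] for m in machines if "web-server" in m["Roles"]]
print(web_targets)  # ['web-01']
```

Because the field is a flat list of tag values, scripts like this keep working unchanged whether the tags live in the [System] Tag Set or a custom tag set.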
# Octopus Tentacle Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle.md When you deploy software to your servers, you'll typically need to install Tentacle, a lightweight agent service, so they can communicate securely with the Octopus Server. :::div{.info} See a [comparison of agent-based vs agentless communications](/docs/infrastructure/deployment-targets/tentacle/agent-vs-agentless). ::: Tentacle can be installed on either [Windows](/docs/infrastructure/deployment-targets/tentacle/windows/) or [Linux](/docs/infrastructure/deployment-targets/tentacle/linux). When installed, Tentacles can: - Run as a service: - A Windows Service called **OctopusDeploy Tentacle**. - A Linux **systemd** service. - Wait for tasks from Octopus (deploy a package, run a script, etc.). - Report the progress and results back to the Octopus Server. Before you install Tentacle, review the software and hardware requirements for your chosen OS: - Windows: - [The latest version of Tentacle](/docs/infrastructure/deployment-targets/tentacle/windows/requirements). - [Versions prior to Tentacle 3.1](/docs/infrastructure/deployment-targets/tentacle/windows/requirements/legacy-requirements). - Linux [system prerequisites](/docs/infrastructure/deployment-targets/linux/#requirements) ## Communication mode Tentacles can be configured to communicate in Listening mode or Polling mode. Listening mode is the recommended communication style. Learn about the differences between the two modes and when you might choose to use Polling mode instead of Listening mode on the [Tentacle communication](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication) page. ## Download the Tentacle installer Octopus Tentacle is available to download for both Windows and Linux (GZip, APT, and RPM) from the [downloads page](https://octopus.com/downloads). 
## Configure a Listening Tentacle (recommended) To install and configure Tentacles in listening mode, see either: - The [Windows Listening Tentacle installation docs](/docs/infrastructure/deployment-targets/tentacle/windows/#configure-a-listening-tentacle-recommended). - The [Linux Tentacle Automation scripts](/docs/infrastructure/deployment-targets/tentacle/linux/#automation-scripts), selecting the tab for either a Listening deployment target or worker for your Linux distro. ### Update your Tentacle firewall To allow your Octopus Server to connect to the Tentacle, you'll need to allow access to TCP port **10933** on the Tentacle (or the port you selected during the installation wizard). **Intermediary firewalls** Don't forget to allow access in any intermediary firewalls between the Octopus Server and your Tentacle (not just the Windows Firewall). For example, if your Tentacle server is hosted in Amazon EC2, you'll also need to modify the AWS security group firewall to tell EC2 to allow the traffic. Similarly, if your Tentacle server is hosted in Microsoft Azure, you'll also need to add an Endpoint to tell Azure to allow the traffic. ## Configure a Polling Tentacle Listening Tentacles are recommended, but there might be situations where you need to configure a Polling Tentacle. You can learn about the difference between Listening Tentacles and Polling Tentacles on the [Tentacle communication](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication) page. To install and configure Tentacles in polling mode, see either: - The [Windows Polling Tentacle installation docs](/docs/infrastructure/deployment-targets/tentacle/windows#configure-a-polling-tentacle). - The [Linux Tentacle Automation scripts](/docs/infrastructure/deployment-targets/tentacle/linux/#automation-scripts), selecting the tab for either a Polling deployment target or worker for your Linux distro. 
### Update your Octopus Server firewall To allow Tentacle to connect to your Octopus Server, you'll need to allow access to port **10943** on the Octopus Server (or the port you selected during the installation wizard - port 10943 is just the default). You will also need to allow Tentacle to access the HTTP Octopus Web Portal (typically port **80** or **443** - these bindings are selected when you [install the Octopus Server](/docs/installation)). If your network rules only allow port **80** and **443** to the Octopus Server, you can either: - Change the server bindings to either HTTP or HTTPS and use the remaining port for polling Tentacle connections. - The listening port Octopus Server uses can be [changed from the command line](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) using the `--commsListenPort` option. Even if you do use port **80** for Polling Tentacles, the communication is still secure. - Use a reverse proxy to redirect incoming connections to the Tentacle listening port on Octopus Server by differentiating the connection based on Hostname (TLS SNI) or IP Address - The polling endpoint Tentacle uses can be [changed from the command line](/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443/#self-hosted) using the `--server-comms-address` option. - You can learn about this configuration on the [Polling Tentacles over standard HTTPS port](/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443) page. Note that the port used to poll Octopus for jobs is different to the port used by your team to access the Octopus Deploy web interface; this is on purpose, and it means you can use different firewall conditions to allow Tentacles to access the Octopus Server by IP address. Using polling mode, you won't typically need to make any firewall changes on the Tentacle machine. ### Intermediary firewalls Don't forget to allow access not just in the OS Firewall (e.g. 
Windows Firewall), but also any intermediary firewalls between the Tentacle and your Octopus Server. For example, if your Octopus Server is hosted in Amazon EC2, you'll also need to modify the AWS security group firewall to tell EC2 to allow the traffic. Similarly, if your Octopus Server is hosted in Microsoft Azure, you'll also need to add an Endpoint to tell Azure to allow the traffic. # Linux Tentacle Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/linux.md If you want to deploy software to Linux servers without using [SSH](/docs/infrastructure/deployment-targets/linux/ssh-target), you need to install Tentacle, a lightweight agent service, on your Linux servers so they can communicate with the Octopus Server. ## Requirements Before you can configure a Linux Tentacle, the Linux server must meet the [requirements](/docs/infrastructure/deployment-targets/linux/#requirements) and the following additional requirements: - Octopus Server **2019.8.3** or newer Linux Tentacle is a .NET application distributed as a [self-contained deployment](https://docs.microsoft.com/en-us/dotnet/core/deploying/#publish-self-contained). On most Linux distributions it will *just work*, but be aware that [you will need to install some prerequisites](https://learn.microsoft.com/en-us/dotnet/core/install/linux-scripted-manual#dependencies). ## Known limitations Support for F# scripts is only available with **Mono 4** and above. While F# scripts require Mono to be installed, they will still execute with the self-contained Calamari. Similarly, the F# interpreter has not yet been ported to .NET Core ([GitHub issue](https://github.com/Microsoft/visualfsharp/issues/2407)). ## Downloads For Debian distributions, there is a .deb package for use with `apt-get`. On CentOS/Fedora/Redhat distributions, there is an .rpm package for use with `yum`. We also provide a .tar.gz archive for manual installations. 
The packages are available from: - apt.octopus.com - rpm.octopus.com - [Octopus Deploy downloads page](https://oc.to/ProductDownloadPage) The latest release of Linux Tentacle is available for download from: - [Archive](https://octopus.com/downloads/latest/Linux_x64TarGz/OctopusTentacle) - [APT](https://octopus.com/downloads/latest/Linux_x64Apt/OctopusTentacle) - [RPM](https://octopus.com/downloads/latest/Linux_x64Rpm/OctopusTentacle) ## Installing and configuring Linux Tentacle :::div{.hint} Many of the steps described below require elevated permissions, or must be run as a super user using `sudo`. ::: ### Installing Tentacle
Debian/Ubuntu repository ```bash sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ sudo install -m 0755 -d /etc/apt/keyrings && \ curl -fsSL https://apt.octopus.com/public.key | sudo gpg --dearmor -o /etc/apt/keyrings/octopus.gpg && \ sudo chmod a+r /etc/apt/keyrings/octopus.gpg && \ echo \ "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/octopus.gpg] https://apt.octopus.com/ \ stable main" | \ sudo tee /etc/apt/sources.list.d/octopus.list > /dev/null && \ sudo apt update && sudo apt install tentacle # for legacy Ubuntu/Debian (< 18.04) use # sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ # curl -sSfL https://apt.octopus.com/public.key | sudo apt-key add - && \ # sudo sh -c "echo deb https://apt.octopus.com/ stable main > /etc/apt/sources.list.d/octopus.com.list" && \ # sudo apt update && sudo apt install tentacle # for Raspbian use # sh -c "echo 'deb https://apt.octopus.com/ buster main' >> /etc/apt/sources.list" # apt-get update # apt-get install tentacle ```
Redhat/CentOS/Fedora repository ```bash wget https://rpm.octopus.com/tentacle.repo -O /etc/yum.repos.d/tentacle.repo yum install tentacle -y ```
Archive ```bash arch="x64" # arch="arm" # for Raspbian 32-bit # arch="arm64" # for 64-bit OS on ARM v7+ hardware wget https://octopus.com/downloads/latest/Linux_${arch}TarGz/OctopusTentacle -O tentacle-linux_${arch}.tar.gz # or curl -L https://octopus.com/downloads/latest/Linux_${arch}TarGz/OctopusTentacle --output tentacle-linux_${arch}.tar.gz mkdir /opt/octopus tar xvzf tentacle-linux_${arch}.tar.gz -C /opt/octopus ```
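As a quick sanity check, you can echo the expanded download URL before downloading; this just mirrors how the script above assembles the URL from the `arch` variable:

```bash
arch="x64"   # or "arm" / "arm64", to match your platform
url="https://octopus.com/downloads/latest/Linux_${arch}TarGz/OctopusTentacle"
echo "$url"  # https://octopus.com/downloads/latest/Linux_x64TarGz/OctopusTentacle
```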
### Setting up a Tentacle instance Many instances of Tentacle can be configured on a single machine. To configure an instance, run the following setup script: ```bash /opt/octopus/tentacle/configure-tentacle.sh ``` Additional instances of Tentacle can be created and configured by passing the `--instance $instanceName` argument to all commands listed here. :::div{.warning} The installer script does not make any adjustments to firewalls. Be sure to check any ports specified are open on your firewalls. For example, port `10933` for listening Tentacles. ::: ## Running Tentacle ### Running Tentacle interactively Start the Tentacle interactively by running: ``` /opt/octopus/tentacle/Tentacle run --instance ``` ### Running Tentacle as a service (systemd) Tentacle has command line options for configuring a systemd service: ``` Usage: Tentacle service [] Where [] is any of: --instance=VALUE Name of the instance to use --start Start the Tentacle service if it is not already running --stop Stop the Tentacle service if it is running --reconfigure Reconfigure the Tentacle service --install Install the Tentacle service --username, --user=VALUE Username to run the service under (DOMAIN\Username format). Only used when --install or --reconfigure are used. --uninstall Uninstall the Tentacle service --password=VALUE Password for the username specified with --username. Only used when --install or --reconfigure are used. --dependOn=VALUE Or one of the common options: --help Show detailed help for this command ``` To install and start Tentacle as a service, use the `Tentacle service` command: ``` /opt/octopus/tentacle/Tentacle service --install --start ``` ### Manually configuring Tentacle to run as a service To manually configure a systemd service, use the following sample unit file: 1. Create a systemd **Unit file** to run Tentacle. 
``` [Unit] Description=Octopus Tentacle Server After=network.target [Service] Type=simple User=root ExecStart=/opt/octopus/tentacle/Tentacle run --instance --noninteractive Restart=always [Install] WantedBy=multi-user.target ``` 2. Copy the unit file to `/etc/systemd/system` and set its permissions ``` sudo cp tentacle.service /etc/systemd/system/tentacle.service sudo chmod 644 /etc/systemd/system/tentacle.service ``` 3. Start the Tentacle service ``` sudo systemctl start tentacle ``` 4. Use the `enable` command to ensure that the service starts whenever the system boots. ``` sudo systemctl enable tentacle ``` ## Automatic Tentacle upgrades Linux Tentacle can be upgraded via the Octopus portal from the **Infrastructure ➜ Deployment Targets** screen. The upgrade attempts to find a package manager capable of performing the upgrade, and then falls back to extracting a `tar.gz` archive to the Tentacle installation folder. The upgrade is attempted in the following order: - Attempt to use `apt-get` - Attempt to use `yum` - Extract the bundled `tar.gz` archive ## Uninstall Tentacle To uninstall (delete) a Tentacle instance, run the `service --stop --uninstall` command and then the `delete-instance` command: ``` /opt/octopus/tentacle/Tentacle service --instance --stop --uninstall /opt/octopus/tentacle/Tentacle delete-instance --instance ``` The `service --stop --uninstall` command on the Tentacle will run the following commands to manage the systemd **Unit file**: ``` sudo systemctl stop tentacle sudo systemctl disable tentacle sudo rm /etc/systemd/system/tentacle.service ``` Then the working folders and logs can be deleted if they are no longer needed, depending on where you installed them, for instance: ``` # default locations: # - installed directory: cd /opt/octopus/tentacle # - logs: cd /etc/octopus # - application directory: cd /home/Octopus/Applications ``` ## Automation scripts The following bash scripts install, configure and register Linux Tentacle for use in automated 
environments: :::div{.hint} **Note:** - Many of the steps described below require elevated permissions, or must be run as a super user using `sudo`. - By default, when registering Linux Targets or Workers, the scripts below assume Octopus will communicate with the target or worker using the server hostname (from the `$HOSTNAME` variable). To provide a different address, consider looking up the hostname/IP address. For example: ```bash publicIp=$(curl -s https://ifconfig.info) ``` You can specify the address when using the [register-with](/docs/octopus-rest-api/tentacle.exe-command-line/register-with) command by providing the value to the `--publicHostName` parameter. ::: ### Debian
Listening deployment target ```bash serverUrl="https://my-octopus" # The url of your Octopus server thumbprint="" # The thumbprint of your Octopus Server apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal environment="Test" # The environment to register the Tentacle in role="web server" # The role to assign to the Tentacle configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ sudo install -m 0755 -d /etc/apt/keyrings && \ curl -fsSL https://apt.octopus.com/public.key | sudo gpg --dearmor -o /etc/apt/keyrings/octopus.gpg && \ sudo chmod a+r /etc/apt/keyrings/octopus.gpg && \ echo \ "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/octopus.gpg] https://apt.octopus.com/ \ stable main" | \ sudo tee /etc/apt/sources.list.d/octopus.list > /dev/null && \ sudo apt update && sudo apt install tentacle # for legacy Ubuntu/Debian (< 18.04) use # sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ # curl -sSfL https://apt.octopus.com/public.key | sudo apt-key add - && \ # sudo sh -c "echo deb https://apt.octopus.com/ stable main > /etc/apt/sources.list.d/octopus.com.list" && \ # sudo apt update && sudo apt install tentacle /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --port 10933 --noListen False --reset-trust --app "$applicationPath" /opt/octopus/tentacle/Tentacle configure --trust $thumbprint echo "Registering the Tentacle $name with server $serverUrl in environment $environment with role $role" /opt/octopus/tentacle/Tentacle register-with 
--server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --env "$environment" --role "$role" /opt/octopus/tentacle/Tentacle service --install --start ```
Polling deployment target ```bash serverUrl="https://my-octopus" # The url of your Octopus server serverCommsPort=10943 # The communication port the Octopus Server is listening on (10943 by default) apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal environment="Test" # The environment to register the Tentacle in role="web server" # The role to assign to the Tentacle configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ sudo install -m 0755 -d /etc/apt/keyrings && \ curl -fsSL https://apt.octopus.com/public.key | sudo gpg --dearmor -o /etc/apt/keyrings/octopus.gpg && \ sudo chmod a+r /etc/apt/keyrings/octopus.gpg && \ echo \ "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/octopus.gpg] https://apt.octopus.com/ \ stable main" | \ sudo tee /etc/apt/sources.list.d/octopus.list > /dev/null && \ sudo apt update && sudo apt install tentacle # for legacy Ubuntu/Debian (< 18.04) use # sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ # curl -sSfL https://apt.octopus.com/public.key | sudo apt-key add - && \ # sudo sh -c "echo deb https://apt.octopus.com/ stable main > /etc/apt/sources.list.d/octopus.com.list" && \ # sudo apt update && sudo apt install tentacle /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --noListen True --reset-trust --app "$applicationPath" echo "Registering the Tentacle $name with server $serverUrl in environment $environment with role $role" /opt/octopus/tentacle/Tentacle register-with --server "$serverUrl" 
--apiKey "$apiKey" --space "$spaceName" --name "$name" --env "$environment" --role "$role" --comms-style "TentacleActive" --server-comms-port $serverCommsPort /opt/octopus/tentacle/Tentacle service --install --start ```
Listening worker ```bash serverUrl="https://my-octopus" # The url of your Octopus server thumbprint="" # The thumbprint of your Octopus Server apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal workerPool="Default Worker Pool" # The worker pool to register the Tentacle in configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ sudo install -m 0755 -d /etc/apt/keyrings && \ curl -fsSL https://apt.octopus.com/public.key | sudo gpg --dearmor -o /etc/apt/keyrings/octopus.gpg && \ sudo chmod a+r /etc/apt/keyrings/octopus.gpg && \ echo \ "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/octopus.gpg] https://apt.octopus.com/ \ stable main" | \ sudo tee /etc/apt/sources.list.d/octopus.list > /dev/null && \ sudo apt update && sudo apt install tentacle # for legacy Ubuntu/Debian (< 18.04) use # sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ # curl -sSfL https://apt.octopus.com/public.key | sudo apt-key add - && \ # sudo sh -c "echo deb https://apt.octopus.com/ stable main > /etc/apt/sources.list.d/octopus.com.list" && \ # sudo apt update && sudo apt install tentacle /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --port 10933 --noListen False --reset-trust --app "$applicationPath" /opt/octopus/tentacle/Tentacle configure --trust $thumbprint echo "Registering the Tentacle $name with server $serverUrl in worker pool $workerPool" /opt/octopus/tentacle/Tentacle register-worker --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" 
--name "$name" --workerPool "$workerPool" /opt/octopus/tentacle/Tentacle service --install --start ```
Polling worker ```bash serverUrl="https://my-octopus" # The url of your Octopus server serverCommsPort=10943 # The communication port the Octopus Server is listening on (10943 by default) apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal workerPool="Default Worker Pool" # The worker pool to register the Tentacle in configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ sudo install -m 0755 -d /etc/apt/keyrings && \ curl -fsSL https://apt.octopus.com/public.key | sudo gpg --dearmor -o /etc/apt/keyrings/octopus.gpg && \ sudo chmod a+r /etc/apt/keyrings/octopus.gpg && \ echo \ "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/octopus.gpg] https://apt.octopus.com/ \ stable main" | \ sudo tee /etc/apt/sources.list.d/octopus.list > /dev/null && \ sudo apt update && sudo apt install tentacle # for legacy Ubuntu/Debian (< 18.04) use # sudo apt update && sudo apt install --no-install-recommends gnupg curl ca-certificates apt-transport-https && \ # curl -sSfL https://apt.octopus.com/public.key | sudo apt-key add - && \ # sudo sh -c "echo deb https://apt.octopus.com/ stable main > /etc/apt/sources.list.d/octopus.com.list" && \ # sudo apt update && sudo apt install tentacle /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --noListen True --reset-trust --app "$applicationPath" echo "Registering the Tentacle $name with server $serverUrl in worker pool $workerPool" /opt/octopus/tentacle/Tentacle register-worker --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --workerPool 
"$workerPool" --comms-style "TentacleActive" --server-comms-port $serverCommsPort /opt/octopus/tentacle/Tentacle service --install --start ```
### CentOS/Fedora/RedHat
Listening deployment target ```bash serverUrl="https://my-octopus" # The url of your Octopus server thumbprint="" # The thumbprint of your Octopus Server apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal environment="Test" # The environment to register the Tentacle in role="web server" # The role to assign to the Tentacle configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" curl https://rpm.octopus.com/tentacle.repo -o /etc/yum.repos.d/tentacle.repo yum install tentacle -y /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --port 10933 --noListen False --reset-trust --app "$applicationPath" /opt/octopus/tentacle/Tentacle configure --trust $thumbprint echo "Registering the Tentacle $name with server $serverUrl in environment $environment with role $role" /opt/octopus/tentacle/Tentacle register-with --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --env "$environment" --role "$role" /opt/octopus/tentacle/Tentacle service --install --start ```
Polling deployment target ```bash serverUrl="https://my-octopus" # The url of your Octopus server serverCommsPort=10943 # The communication port the Octopus Server is listening on (10943 by default) apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal environment="Test" # The environment to register the Tentacle in role="web server" # The role to assign to the Tentacle configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" curl https://rpm.octopus.com/tentacle.repo -o /etc/yum.repos.d/tentacle.repo yum install tentacle -y /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --noListen True --reset-trust --app "$applicationPath" echo "Registering the Tentacle $name with server $serverUrl in environment $environment with role $role" /opt/octopus/tentacle/Tentacle register-with --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --env "$environment" --role "$role" --comms-style "TentacleActive" --server-comms-port $serverCommsPort /opt/octopus/tentacle/Tentacle service --install --start ```
Listening worker ```bash serverUrl="https://my-octopus" # The url of your Octopus server thumbprint="" # The thumbprint of your Octopus Server apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal workerPool="Default Worker Pool" # The worker pool to register the Tentacle in configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" curl https://rpm.octopus.com/tentacle.repo -o /etc/yum.repos.d/tentacle.repo yum install tentacle -y /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --port 10933 --noListen False --reset-trust --app "$applicationPath" /opt/octopus/tentacle/Tentacle configure --trust $thumbprint echo "Registering the Tentacle $name with server $serverUrl in worker pool $workerPool" /opt/octopus/tentacle/Tentacle register-worker --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --workerPool "$workerPool" /opt/octopus/tentacle/Tentacle service --install --start ```
Polling worker ```bash serverUrl="https://my-octopus" # The url of your Octopus server serverCommsPort=10943 # The communication port the Octopus Server is listening on (10943 by default) apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal workerPool="Default Worker Pool" # The worker pool to register the Tentacle in configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" curl https://rpm.octopus.com/tentacle.repo -o /etc/yum.repos.d/tentacle.repo yum install tentacle -y /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --noListen True --reset-trust --app "$applicationPath" echo "Registering the Tentacle $name with server $serverUrl in worker pool $workerPool" /opt/octopus/tentacle/Tentacle register-worker --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --workerPool "$workerPool" --comms-style "TentacleActive" --server-comms-port $serverCommsPort /opt/octopus/tentacle/Tentacle service --install --start ```
### Archive :::div{.warning} **Note:** - Linux Arm and Arm64 support is currently **experimental**. - Requires Octopus Server 2020.5.0+ - Ensure you are using the correct architecture value for your platform (`x64`, `arm`, `arm64`). - Uncomment the appropriate `arch` variable in the script. :::
Listening deployment target ```bash serverUrl="https://my-octopus" # The url of your Octopus server thumbprint="" # The thumbprint of your Octopus Server apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal environment="Test" # The environment to register the Tentacle in role="web server" # The role to assign to the Tentacle configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" arch="x64" # arch="arm" # for Raspbian 32-bit # arch="arm64" # for 64-bit OS on ARM v7+ hardware curl -L https://octopus.com/downloads/latest/Linux_${arch}TarGz/OctopusTentacle --output tentacle-linux_${arch}.tar.gz mkdir /opt/octopus tar xvzf tentacle-linux_${arch}.tar.gz -C /opt/octopus /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --port 10933 --noListen False --reset-trust --app "$applicationPath" /opt/octopus/tentacle/Tentacle configure --trust $thumbprint echo "Registering the Tentacle $name with server $serverUrl in environment $environment with role $role" /opt/octopus/tentacle/Tentacle register-with --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --env "$environment" --role "$role" /opt/octopus/tentacle/Tentacle service --install --start ```
Polling deployment target ```bash serverUrl="https://my-octopus" # The url of your Octopus server serverCommsPort=10943 # The communication port the Octopus Server is listening on (10943 by default) apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal environment="Test" # The environment to register the Tentacle in role="web server" # The role to assign to the Tentacle configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" arch="x64" # arch="arm" # for Raspbian 32-bit # arch="arm64" # for 64-bit OS on ARM v7+ hardware curl -L https://octopus.com/downloads/latest/Linux_${arch}TarGz/OctopusTentacle --output tentacle-linux_${arch}.tar.gz mkdir /opt/octopus tar xvzf tentacle-linux_${arch}.tar.gz -C /opt/octopus /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --noListen True --reset-trust --app "$applicationPath" echo "Registering the Tentacle $name with server $serverUrl in environment $environment with role $role" /opt/octopus/tentacle/Tentacle register-with --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --env "$environment" --role "$role" --comms-style "TentacleActive" --server-comms-port $serverCommsPort /opt/octopus/tentacle/Tentacle service --install --start ```
Listening worker ```bash serverUrl="https://my-octopus" # The url of your Octopus server thumbprint="" # The thumbprint of your Octopus Server apiKey="" # An Octopus Server api key with permission to add machines spaceName="Default" # The name of the space to register the Tentacle in name=$HOSTNAME # The name of the Tentacle as it will appear in the Octopus portal workerPool="Default Worker Pool" # The worker pool to register the Tentacle in configFilePath="/etc/octopus/default/tentacle-default.config" applicationPath="/home/Octopus/Applications/" arch="x64" # arch="arm" # for Raspbian 32-bit # arch="arm64" # for 64-bit OS on ARM v7+ hardware curl -L https://octopus.com/downloads/latest/Linux_${arch}TarGz/OctopusTentacle --output tentacle-linux_${arch}.tar.gz mkdir /opt/octopus tar xvzf tentacle-linux_${arch}.tar.gz -C /opt/octopus /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" /opt/octopus/tentacle/Tentacle new-certificate --if-blank /opt/octopus/tentacle/Tentacle configure --port 10933 --noListen False --reset-trust --app "$applicationPath" /opt/octopus/tentacle/Tentacle configure --trust $thumbprint echo "Registering the Tentacle $name with server $serverUrl in worker pool $workerPool" /opt/octopus/tentacle/Tentacle register-worker --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --workerPool "$workerPool" /opt/octopus/tentacle/Tentacle service --install --start ```
Polling worker

```bash
serverUrl="https://my-octopus"  # The URL of your Octopus Server
serverCommsPort=10943           # The communication port the Octopus Server is listening on (10943 by default)
apiKey=""                       # An Octopus Server API key with permission to add machines
spaceName="Default"             # The name of the space to register the Tentacle in
name=$HOSTNAME                  # The name of the Tentacle as it will appear in the Octopus portal
workerPool="Default Worker Pool"  # The worker pool to register the Tentacle in
configFilePath="/etc/octopus/default/tentacle-default.config"
applicationPath="/home/Octopus/Applications/"
arch="x64"
# arch="arm"    # for Raspbian 32-bit
# arch="arm64"  # for 64-bit OS on ARM v7+ hardware

curl -L https://octopus.com/downloads/latest/Linux_${arch}TarGz/OctopusTentacle --output tentacle-linux_${arch}.tar.gz
mkdir /opt/octopus
tar xvzf tentacle-linux_${arch}.tar.gz -C /opt/octopus

/opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath"
/opt/octopus/tentacle/Tentacle new-certificate --if-blank
/opt/octopus/tentacle/Tentacle configure --noListen True --reset-trust --app "$applicationPath"
echo "Registering the Tentacle $name with server $serverUrl in worker pool $workerPool"
/opt/octopus/tentacle/Tentacle register-worker --server "$serverUrl" --apiKey "$apiKey" --space "$spaceName" --name "$name" --workerPool "$workerPool" --comms-style "TentacleActive" --server-comms-port $serverCommsPort
/opt/octopus/tentacle/Tentacle service --install --start
```
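All three scripts above hard-code `arch`. As a convenience, you can derive it from `uname -m` instead; this is a sketch, and the mapping assumes the three download names (`x64`, `arm`, `arm64`) used in the scripts:

```bash
# Pick the Tentacle download architecture from the machine hardware name.
case "$(uname -m)" in
  x86_64)         arch="x64"   ;;
  aarch64|arm64)  arch="arm64" ;;  # 64-bit OS on ARM v7+ hardware
  armv6l|armv7l)  arch="arm"   ;;  # Raspbian 32-bit
  *) echo "Unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
echo "Selected Tentacle architecture: $arch"
```

The resulting `$arch` value can then be used in the `curl` download URL exactly as in the scripts above.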
## Rootless Instance Creation {#rootless-instance-creation}

Creating a named instance with the `--instance` parameter, as shown in the examples above, registers the instance details in a central registry so it can be easily managed via its unique name. On the target machine, this central registry lives under `C:\ProgramData\Octopus` on Windows and `/etc/octopus` on other platforms. In some high-security, low-trust environments, access to these locations may not be possible, so Octopus supports creating Tentacle instances that isolate all their configuration in a single directory.

Omitting the `--instance` and `--configuration` parameters from the [create-instance](/docs/octopus-rest-api/tentacle.exe-command-line/create-instance) command creates the `Tentacle.config` configuration file in the current working directory of the executing process, so it does not require any elevated permissions. However, relevant OS permissions may still be necessary depending on the ports used.

To manage this instance, all subsequent commands must be run either from the directory containing the configuration, or with the `--config` parameter pointing to the configuration file that was created in that directory. For example, running the following commands:

```bash
mkdir ~/mytentacle && cd ~/mytentacle
tentacle create-instance
```

will create a Tentacle configuration file in `~/mytentacle` without needing access to the shared registry (typically stored on Linux at `/etc/octopus`). Subsequent commands to this instance can be performed by running the command directly from that location:

```bash
cd ~/mytentacle
tentacle configure --trust F9EFD9D31A04767AD73869F89408F587E12CB23C
```

### Service Limitations {#service-limitations}

Due to the non-uniquely-named nature of these instances, only one such instance can be registered as a service at any given time.
An optional mechanism for running this instance is to use the [agent](/docs/octopus-rest-api/tentacle.exe-command-line/agent/) command to start and run the Tentacle process inline. The [delete-instance](/docs/octopus-rest-api/tentacle.exe-command-line/delete-instance) command will also have no effect, since its purpose is largely to remove the instance details from the registry while preserving the configuration on disk.

## Disabling weak TLS protocols {#disable-weak-tls-protocols}

To harden the TLS implementation used, review our documentation on [Disabling weak TLS protocols](/docs/security/hardening-octopus/#disable-weak-tls-protocols).

## Learn more

- [Linux blog posts](https://octopus.com/blog/tag/linux/1)

# Troubleshooting Polling Tentacles

Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/troubleshooting/troubleshooting-polling.md

## Communication settings

To verify the communication settings, *on the Tentacle machine*, open the Tentacle Manager application from the Start screen or Start menu.

1. Ensure that the Tentacle is in *polling* mode. Below the thumbprint, you should see the text *This Tentacle polls the Octopus Server...*.
2. Check the port that the Tentacle polls the Octopus Server on.
3. Check that the **Octopus Server** thumbprint shown in light gray in the Tentacle Manager matches the one shown in the **Configuration ➜ Thumbprints** screen in the Octopus Web Portal. Note that there are two thumbprints displayed: that of the Tentacle itself (shown first in bold) and the thumbprints of trusted servers (shown inline in the gray text).

If any of the communication settings are incorrect, choose *Delete this Tentacle instance...*. After doing so, you'll be presented with the Tentacle installation wizard, where the correct settings can be chosen. If the settings are correct, continue to the next step.

## Check the connections

To help with diagnostics, we've included a welcome page you can connect to from your web browser.
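The same welcome page can also be probed from the command line with curl. This is a sketch: `-k` accepts the self-signed certificate Octopus uses by default, and the host and port below are examples to substitute with your Octopus Server and polling port.

```bash
# Probe the Octopus Server polling endpoint from the command line.
# The host and port are examples - substitute your own values.
server="localhost"
port=10943
if curl -ks --connect-timeout 5 -o /dev/null "https://${server}:${port}/"; then
  echo "Octopus Server answered on port ${port}"
else
  echo "No response from https://${server}:${port} - check that the port is open" >&2
fi
```

A success here only confirms that something is accepting TLS connections on the port; the browser checks that follow confirm it is actually the Octopus Server welcome page.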
When you conduct these checks:

- If you're presented with a prompt to "confirm a certificate" or "select a certificate", choose "Cancel" - don't provide one.
- If you're presented with a warning about the invalidity of the site's certificate, choose "continue to the site" or "add an exception" (Octopus Server uses a self-signed certificate by default).

:::div{.hint}
Octopus Cloud instances have HSTS enabled, so it is impossible to bypass the "Your connection is not private" warning.
:::

*On the Octopus Server machine*, open a web browser and navigate to `https://localhost:10943` (or your chosen Tentacle communications port if it isn't the default). Make sure an **HTTPS** URL is used. The page shown should look like the one below.

:::figure
![](/docs/img/infrastructure/deployment-targets/tentacle/images/3277906.png)
:::

If you've made it this far, good news! Your Octopus Server is running and ready to accept inbound connections from Polling Tentacles.

:::div{.hint}
**If you can't browse to the page...**

If this is where your journey ends, there's a problem on the Octopus Server machine itself. It is very likely that the Octopus Server is unable to open the communications port, either because of permissions or because another process is listening on that port. Using the Windows `netstat -o -n -a -b` command can help to get to the bottom of this quickly.

If you can see connections being opened and immediately closed (`CLOSE_WAIT` state in `netstat` output) from the same *Foreign Address*, it might indicate that this server is blocking traffic from the communications port and therefore resetting the connection immediately. Check both the built-in Windows Firewall and any other firewalls (in Amazon EC2, check your security group settings, for example) on the server identified by the *Foreign Address* in `netstat`, and make sure that the communications port isn't being blocked.
You can also use [Wireshark](https://www.wireshark.org/) to inspect traffic coming in on the Octopus Server communications port. Start a network capture and filter the traffic by `tcp.port == 10943` (or your chosen Tentacle communications port if it isn't the default); this should identify any incoming requests that get reset immediately. If you're still in trouble, check the Octopus Server [log files](/docs/support/log-files) and contact Octopus Deploy support.
:::

Next, repeat the process of connecting to the Octopus Server with a web browser, but do this *from the Tentacle machine*. When forming the URL to check:

- First try using the Octopus Server's DNS hostname, e.g. `https://my-octopus:10943`.
- If this fails, try using the Octopus Server's IP address instead, e.g. `https://1.2.3.4:10943` - success using the IP address but not the DNS hostname indicates a DNS issue.

**If you can't connect...**

Failing to connect at this step means that you have a network issue preventing traffic between the Tentacles and the Octopus Server. Check that the Octopus Server polling port is open in any firewalls, and that other services on the network are working. There's not usually much that Octopus Deploy Support can suggest for these issues, as networks are complex and highly varied. Having the network administrator from your organization help diagnose the issue is the best first step. If that draws a blank, please get in touch. Remember to check both the built-in Windows Firewall and any other firewalls (in Amazon EC2, check your security group settings, for example).

If the Tentacle welcome page is shown, good news - your network is fine.

:::div{.problem}
**Watch out for proxy servers or SSL offloading...**

Octopus and Tentacle use TCP to communicate, with special handling to enable web browsers to connect for diagnostic purposes.
Full HTTP is not supported, so network services like **SSL offloading** are not supported, and **proxies** are not supported in earlier versions of Octopus Deploy. Make sure there's a direct connection between the Octopus Server and Tentacle, without an HTTP proxy or a network appliance performing SSL offloading in between. Also see [advanced support for HTTP proxies](/docs/infrastructure/deployment-targets/proxy-support).
:::

## Tentacle ping

We have built a small utility for testing the communications protocol between two servers called [Tentacle Ping](https://github.com/OctopusDeploy/TentaclePing). This tool helps isolate the source of communication problems without needing a full Octopus configuration. It is built as a simple client and server component that emulates the communications protocol used by Octopus Server and Tentacle. In **Octopus 3.0** you will need **TentaclePing** and **TentaclePong**; you cannot test directly against Octopus Server or Tentacle:

- Run **TentaclePing** on your Tentacle machine (which is the client in this relationship).
- Run **TentaclePong** on your Octopus Server machine (which is the server in this relationship).

Use the output to help diagnose what is going wrong.

## Check the IP address

Your Octopus Server or Tentacle machines may have multiple IP addresses that they listen on. For example, Amazon EC2 machines in a VPC might have both an internal IP address and an external address using NAT. Octopus Server and Tentacle may not listen on all addresses; you can check which addresses are configured on the server by running `ipconfig /all` from the command line and looking for the IPv4 addresses.

## Schannel and TLS configuration mismatches

Octopus uses `Schannel` for secure communications and will attempt to use the best protocol available to both servers.
If you are seeing error messages like the ones below, try [Troubleshooting Schannel and TLS](/docs/security/octopus-tentacle-communication/troubleshooting-schannel-and-tls):

Client-side: `System.Security.Authentication.AuthenticationException: A call to SSPI failed, see inner exception. ---> System.ComponentModel.Win32Exception: One or more of the parameters passed to the function was invalid`

Server-side: `System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.`

## Other error messages

**Halibut.Transport.Protocol.ConnectionInitializationFailedException: Unable to process remote identity; unknown identity 'HTTP/1.0'**

If a Tentacle health check fails with an error message containing this text, then there is network infrastructure inserting a web page into the communication. The most common components to do this are firewalls and proxy servers, so it's recommended to check your network setup, verify connectivity between the two servers using the information above, and then update your infrastructure appropriately.

**Halibut.HalibutClientException: An error occurred when sending a request to 'https://my-tentacle:10933', before the request could begin: Attempted to read past the end of the stream.**

If your Octopus Server certificate was [generated with SHA1](/docs/security/cve/shattered-and-octopus-deploy), then you might get this error when connecting to modern Linux distributions, as their default security configuration now rejects communication using SHA1. To regenerate your Octopus Server certificate, follow the documentation [How to regenerate certificates with Octopus Server and Tentacle](/docs/security/octopus-tentacle-communication/regenerate-certificates-with-octopus-server-and-tentacle).

## Check the server service account permissions

For polling Tentacles, we need to check whether the Octopus Server is running as the *Local System* account. If it is, you can skip this section.
If the Octopus Server is running as a specific user, make sure that the user has "full control" permission to the *Octopus Home* folder on the machine. This is usually `C:\Octopus` - apply permissions recursively.

## Check the load time

In some DMZ-style environments without Internet access, failing to disable Windows code signing certificate revocation list checking will cause Windows to pause during loading of the Octopus applications and installers. This can have a significant negative performance impact, which may prevent Octopus and Tentacles connecting. To test this for a polling Tentacle, on the Octopus Server, run:

```powershell
Octopus.Server.exe help
```

If the command help is not displayed immediately (< 1s) you may need to consider disabling the CRL check while the Octopus Server is configured. To do this open **Control Panel ➜ Internet Options ➜ Advanced**, and uncheck the *Check for publisher's certificate revocation* option as shown below.

:::figure
![](/docs/img/infrastructure/deployment-targets/tentacle/images/5865771.png)
:::

## Uninstall Tentacles

If you get to the end of this guide without success, it can be worthwhile to completely remove the Tentacle configuration, data, and working folders, and then reconfigure it from scratch. This can be done without any impact to the applications you have deployed. Learn about [manually uninstalling Tentacle](/docs/administration/managing-infrastructure/tentacle-configuration-and-file-storage/manually-uninstall-tentacle). Working from a clean slate can sometimes expose the underlying problem.

# Legacy Tentacle installation requirements

Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/requirements/legacy-requirements.md

These are the installation requirements for Tentacles prior to **Tentacle 3.1**. If you're using **Tentacle 3.1** or later, refer to these [installation requirements](/docs/infrastructure/deployment-targets/tentacle/windows/requirements).
## Windows Server

- Windows Server 2003 SP2 (**Not supported for Tentacle 3.1 and up due to .NET 4.5 dependency**).
- Windows Server 2008 (**SP1 not supported for Tentacle 3.1 and up due to .NET 4.5 dependency**).

## .NET Framework

- Tentacle 3.0 (TLS 1.0): .NET Framework 4.0+ ([download](http://www.microsoft.com/en-au/download/details.aspx?id=17851)).

## Windows PowerShell

- Windows PowerShell 2.0. This is automatically installed on 2008 R2, but for 2008 pre-R2 you'll need to install it ([x86 download](http://www.microsoft.com/download/en/details.aspx?id=11829), [x64 download](http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=20430)).
- Windows PowerShell 3.0 or 4.0 is recommended; both are compatible with PowerShell 2.0 but execute against .NET 4.0+.
- Windows Server 2003 servers will need [Windows Management Framework](https://www.microsoft.com/en-ca/download/details.aspx?id=34595) installed (this includes PowerShell).

## Hardware requirements

- Hardware minimum: 512MB RAM, 1GHz CPU, 2GB free disk space.

Tentacle uses a small amount of memory when idle, usually around 10MB (it may appear higher in Task Manager because memory is shared with other .NET processes that are running). When deploying, depending on what happens during the deployment, this may expand to 60-100MB, and will then go back down after the deployment is complete. Tentacle will happily run on single-core machines and only uses about 100MB of disk space, though of course you'll need more than that to deploy your applications.
# Environments Source: https://octopus.com/docs/infrastructure/environments.md Environments are how you organize your deployment targets (whether on-premises servers or cloud services) into groups that represent the different stages of your deployment pipeline, for instance, **development**, **test**, and **production**. Organizing your deployment targets into environments lets you define your deployment processes (no matter how many deployment targets or steps are involved) and have Octopus deploy the right versions of your software to the right environments at the right time. You can manage your environments by navigating to **Infrastructure ➜ Environments** in the Octopus Web Portal: ![The environments area of Octopus Deploy](/docs/img/shared-content/concepts/images/environments.png) ## Environment configuration Since environments are the phases that you move your code through, they form the backbone of your deployment pipeline. Before you configure anything else, you should configure your environments. The most common setup is four environments. These are: 1. **Development** or **Dev** for short, is for developers to experiment on. It's generally in flux, and can often be expected to be unavailable. 1. **Test/QA** - Quality assurance teams test functionality in the Test environment. 1. **Staging/Pre-Production** - Staging is used as a final sanity check before deploying to Production. 1. **Production** is where your end users normally use your software outside of testing. However, we didn't design Octopus Deploy to force people to use a set of predefined environments. Some companies only have three environments. Others have many more. Likewise, not everyone names their environments the same way. One person's Test is another person's QA. It's important that you can name your environments in the way that best supports your organization's needs. 
Take a look at our [environment recommendations](/docs/infrastructure/environments/environment-recommendations) section for more tips.

## Add new environments {#add-new-environments}

1. Navigate to **Infrastructure ➜ Environments** and click **ADD ENVIRONMENT**.
1. Give your new environment a meaningful name and click **SAVE**. You can click on the Development, Test, or Production links in the help text to choose that name.

You can add as many environments as you need, and you can reuse your environments with different projects, so there's no need to create environments per project.

:::div{.hint}
In general, keep the number of environments under ten. Having fewer environments makes configuring and maintaining your Octopus Server easier.
:::

## Edit your environments

To edit individual environments, click the overflow menu (...) for that environment. From here, you can edit the environment's name and description, change the [guided failure mode](/docs/releases/guided-failures#enable-guided-failure-mode-for-an-environment), enable or disable [dynamic infrastructure](/docs/infrastructure/deployment-targets/dynamic-infrastructure), or delete the environment.

## Environment tags

:::div{.warning}
From Octopus Cloud version **2025.4.3897** we support tagging environments.
:::

You can create and assign tags to environments. This allows you to:

- Classify environments by attributes like cloud provider, region, or tier.
- Filter environments by tags on the Environments page.
- Configure your deployment dashboard to display only environments with specific tags.

:::div{.hint}
Only tags from tag sets that have been configured with the **Environment** scope can be used to tag environments.
:::

Learn more about [tag sets](/docs/tenants/tag-sets), including tag set types, scopes, and how to create and manage them.

## Environment permissions

You can control who has access to view, edit, and deploy to environments by assigning users to Teams and assigning roles to those teams.
For more information, see the section on [managing users and teams](/docs/security/users-and-teams). ## Manage your environments If you're working with a large number of environments and deployment targets, the **Environments** page makes it easy to sort, filter, and view your environments and the deployment targets that belong to each environment. ## Sort your environments Click the overflow menu (...) on the environments sections to reveal the **reorder** menu and access a drag and drop pane to sort your environments. The order that environments are shown in the environments tab also affects: - The order that they are shown in the Dashboard. - The order that they are listed when choosing which environment to deploy a release to. It's a good idea to put your least production-like environments first, and the most production-like environments last. ## Use advanced filters You can use advanced filters to search your environments by clicking on **SHOW ADVANCED FILTERS** from the environment page. This will let you search by: - Name - Deployment target - Environment - Environment tags - Target tags - Health Status - Communication style ## Removing environments For projects using Config as Code, it's up to you to take care to avoid deleting any environments required by your deployments or runbooks. See our [core design decisions](/docs/projects/version-control/unsupported-config-as-code-scenarios#core-design-decision) for more information. ## Learn more Learn how to add and manage your [deployment targets](/docs/infrastructure/deployment-targets). # Environment recommendations Source: https://octopus.com/docs/infrastructure/environments/environment-recommendations.md In this section, we will walk through our recommendations for configuring your environments to better prepare you to scale your Octopus Deploy instance up and out as you add more projects. ## Environment terminology We recommend configuring your environments to match your company's terminology. 
Try to keep naming as general as possible. Sometimes it helps to consider how you would phrase it during a conversation with a colleague: > "I'm pushing some code up to Dev" or > "I'm deploying my app to Production" These are clearer than: > "I'm pushing to Dev Omaha 45." Without context, it's not clear what _Omaha_ refers to, or what the significance of _45_ is. A good sign that you have well-modeled environments is that the names don't need an explanation. You should consider changing a name if it is not clear. ## Keep environment numbers low In general, try to keep the number of environments under ten. Having fewer environments makes configuring and maintaining your Octopus Server easier. We recommend creating a standard set of environments. For example, Dev, Test, Staging, and Production. If you have [dynamic infrastructure](/docs/infrastructure/deployment-targets/dynamic-infrastructure), you might also need SpinUp, TearDown, and Maintenance. :::figure ![The Environment overview](/docs/img/shared-content/octopus-recommendations/images/environment-list.png) ::: If you need to change the order of your environments later, you can use the [sort](/docs/infrastructure/environments/#sort-your-environments) option. ### Deployment targets and environments Octopus will choose the targets to deploy to based on the environment, target roles, and if configured, tenants. This guarantees that the release is deployed to the appropriate targets. Sharing the same environments across projects ensures a consistent and maintainable Octopus experience. ## Common environment scenarios In this section, we walk you through some common scenarios we've seen with environments and how to work through them. ### Multiple Data Centers Cloud providers such as Azure, AWS, and Google Cloud make deploying to many data centers commonplace. You might need to deploy the software at specific intervals or in a specific order. 
For example, you might deploy to a data center in Illinois before deploying to one in Texas. It can be tempting to name environments to match a data center location, for example _Production [Data Center]_ or _Production Omaha_. This is convenient, as you can deploy to an individual data center at a time and see what version of code is deployed in each data center. Unfortunately, this doesn't scale very well. Every time you add a new data center, your infrastructure and Octopus configuration will need modification, such as:

- Adding a new environment.
- Updating lifecycles.
- Updating any variables with environment-scoped values.

One scenario we've seen is customers deploying to an on-premises data center for dev and test, with separate data centers hosting the staging and production environments in, say, Illinois and Texas. Before promoting code to production, they perform sanity checks in the staging environment. If you create an environment per data center, you'd have seven environments when you actually only need four.

:::figure
![Multi-tenancy Environments](/docs/img/shared-content/octopus-recommendations/images/multi-tenancy-environments.png)
:::

Creating seven environments like this doesn't scale. A better solution is to use the [multi-tenancy](/docs/tenants/) feature in Octopus. With multi-tenancy, each data center is modeled as a tenant. To add a new tenant, follow our instructions on how to [create a tenant](/docs/tenants/tenant-creation).

:::figure
![Data Center tenants](/docs/img/shared-content/octopus-recommendations/images/data-center-tenants.png)
:::

:::div{.hint}
**Tip:** Adding images to your tenants makes them easier to find. You can do this by clicking on the tenant and choosing **Settings**; you can choose an image to upload in the **Logo** section.
:::

Once added, connect each project and any deployment targets you wish them to be linked to. When choosing a release, select which data center tenant to deploy to - in this example, Illinois or Texas.
### Multiple Customers We also see customers deploy the same project to multiple clients. Each of their customers gets their own set of machines and other resources. It's possible to configure a unique set of environments for each customer. You could create: - _Dev [Customer Name]_ - _Staging [Customer Name]_ - _Production [Customer Name]_ This will work for the first few customers, but again, it doesn't scale very well. Imagine if you had five clients: 1. An internal testing customer 1. Coca-Cola 1. Ford 1. Nike 1. Starbucks. The internal customer deploys to all environments, dev, test, staging, and production. Coca-Cola and Nike deploy to test, staging, and production. Ford and Starbucks only deploy to staging and production. If you create an environment per tenant, you'd have fourteen environments. And that is only for five customers. This is where the [multi-tenancy](/docs/tenants) feature in Octopus again shines. It allows you to keep the number of environments low while creating a unique workflow per client. :::figure ![Tenants as Customers](/docs/img/shared-content/octopus-recommendations/images/multi-tenancy-customers.png) ::: ## Conclusion In summary, the thing to remember is to keep the number of Octopus environments you create low. Leverage tenants to handle deployments to different data centers or customers. # Storage Source: https://octopus.com/docs/infrastructure/workers/kubernetes-worker/storage.md The Kubernetes worker requires a common filesystem to share packages with its spawned operation pods. This filesystem stores binary packages received from the Octopus Server, which are used by the operation being executed. The Kubernetes worker's storage setup and constraints are identical to the [Kubernetes Agent storage](/docs/kubernetes/targets/kubernetes-agent/storage). 
# AWS File Storage

Source: https://octopus.com/docs/installation/file-storage/aws-file-storage.md

AWS has multiple storage options to choose from:

- Elastic Block Store (EBS)
- Elastic File System (EFS)
- FSx

## Elastic Block Store (EBS)

AWS EBS is limited in that it can be attached to only one EC2 or container instance, so it is not an option for High Availability.

## Elastic File System (EFS)

EFS is perhaps the most versatile storage option that AWS offers. EFS works with EC2 instances, ECS services, and EKS clusters. If you intend to run [Octopus Deploy Server as a Linux Container](https://octopus.com/docs/installation/octopus-server-linux-container), EFS is likely going to be your only option.

### EC2

Amazon provides an [easy way](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEFS.html) to connect Linux-based EC2 instances to EFS. As noted at https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html, Amazon does not support mounting EFS on EC2 instances running Windows. Windows, however, has an NFS client that can be installed to configure and access NFS shares. See more information about [Windows NFS and Octopus Deploy](/docs/installation/file-storage/windows-nfs).

## FSx

Amazon FSx includes full support for the SMB protocol and Windows NTFS, and **requires** Microsoft Active Directory (AD) integration. This makes it an ideal choice for connecting to your EC2 instances hosting Octopus to store all your Octopus packages and log files. If you choose to go with Amazon FSx, there are some resources that will help you get started:

- AWS has a [starter guide](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/getting-started.html) which explains how to configure Amazon FSx and connect it up to an EC2 machine.
- AWS has a [hands-on lab](https://aws.amazon.com/blogs/storage/how-to-replicate-amazon-fsx-file-server-data-across-aws-regions/) on using DataSync to support multi-region FSx data across AWS regions.
This could be useful when considering disaster recovery options for Octopus High Availability.

- We have an [AWS FSx High Availability blog post](https://octopus.com/blog/aws-fsx-ha), which is a step-by-step guide to connecting Amazon FSx to your Octopus High Availability Server nodes on Windows.

## High Availability

With Octopus Deploy's [High Availability](/docs/administration/high-availability) functionality, you connect multiple nodes to the same database and file storage. Octopus Server makes specific assumptions about the performance and consistency of the file system when accessing log files, performing log retention, storing deployment packages and other deployment artifacts, exported events, and temporary storage when communicating with Tentacles. What that means is:

- Octopus Deploy is sensitive to network latency. It expects the file system to be hosted in the same data center as the virtual machines or container hosts running the Octopus Deploy service.
- It is extremely rare for two or more nodes to write to the same file at the same time.
- It is common for two or more nodes to read the same file at the same time.

In our experience, you will have the best experience when all the nodes and the file system are located in the same data center. Modern network storage devices and operating systems handle almost all the scenarios a highly available instance of Octopus Deploy will encounter.

# AWS Load Balancers

Source: https://octopus.com/docs/installation/load-balancers/aws-load-balancers.md

To distribute traffic to the Octopus Web Portal across multiple nodes, you need to use an HTTP load balancer. AWS provides a solution to distribute HTTP/HTTPS traffic to EC2 instances: Elastic Load Balancing, a highly available, secure, and elastic load balancer.
There are three implementations of ELB:

- [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html)
- [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html)
- [Classic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html)
- [Comparison Table](https://aws.amazon.com/elasticloadbalancing/features/#Product_comparisons)

## Tentacle

If you are *only* using [Listening Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended), we recommend using the Application Load Balancer. However, [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) don't work well with the Application Load Balancer, so instead, we recommend using the Network Load Balancer. To set up a Network Load Balancer for Octopus High Availability with Polling Tentacles, take a look at this [knowledge base article](https://help.octopus.com/t/how-can-i-configure-my-polling-tentacles-to-hit-my-octopus-deploy-high-availability-instance-to-sitting-behind-an-aws-load-balancer/24890).

## gRPC

gRPC traffic can be routed either with the Application Load Balancer, which provides first-class support for gRPC, or with the Network/Classic Load Balancer. If you choose to use the Application Load Balancer, you might come across errors like the following from our gRPC clients (Kubernetes Monitor/Argo CD Gateway):

```go
stream terminated by RST_STREAM with error code: PROTOCOL_ERROR
```

This is because AWS's Application Load Balancer ignores HTTP/2 PING frames, which causes the long-lived gRPC streams we use to be ended after 60 seconds of idle time by default. [See the AWS docs for more details](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/edit-load-balancer-attributes.html#connection-idle-timeout).
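One way to address this is to raise the load balancer's idle timeout. As a sketch using the AWS CLI, assuming you have credentials configured (the load balancer ARN below is a placeholder you must replace with your own):

```shell
# Raise the ALB idle timeout so long-lived gRPC streams are not
# dropped after the default 60 seconds of idle time.
# The ARN is a placeholder; substitute your load balancer's ARN.
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188 \
  --attributes Key=idle_timeout.timeout_seconds,Value=4000
```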
To prevent this error from occurring, the Application Load Balancer needs to be configured with an increased idle timeout. This can be done by setting the `idle_timeout.timeout_seconds` attribute to a value between 1 and 4000 seconds.

# Migrate to Octopus Server Linux Container from Windows Container

Source: https://octopus.com/docs/installation/octopus-server-linux-container/migration/migrate-to-server-container-linux-from-windows-container.md

The Octopus Server Windows Container has been deprecated starting from **Octopus 2020.6**. We made this decision because the uptake was low, and Microsoft has stopped supporting the OS versions we were publishing (Windows [1809](https://docs.microsoft.com/en-us/windows/release-health/status-windows-10-1809-and-windows-server-2019), [1903](https://docs.microsoft.com/en-us/lifecycle/announcements/windows-10-1903-end-of-servicing), and [1909](https://docs.microsoft.com/en-us/windows/release-health/status-windows-10-1909)). Going forward, we will only publish the [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container).

:::div{.hint}
We will continue to publish Windows Docker images for Tentacle. Once we've updated the Windows images for Tentacle to more modern OS versions, we will deprecate the existing Windows 1809/1903/1909 images.
:::

This guide will help you migrate from the Octopus Server Windows Container to the Octopus Server Linux Container. It assumes you are already familiar with running Octopus Deploy in a container and is meant to address the differences you will encounter when switching over.

## Differences between Windows and Linux Containers

This guide is designed to address the differences between the Windows and Linux Containers.

- **Folder Paths:** Windows Containers follow the Windows folder structure with `\` slashes, for example, `C:\Octopus\TaskLogs`. Linux Containers follow a Linux folder structure, including `/` slashes.
- **Pre-installed software:** Windows Containers include PowerShell and .NET 4.7.2 (or 4.8) but not Bash. Linux Containers typically include PowerShell Core and Bash but not .NET.
- **Software support:** The Linux Container doesn't support running F# scripts directly on the server.
- **Authentication:** The Octopus Server Linux Container doesn't support Active Directory authentication. Some users have had success using Active Directory with the Octopus Server Windows Container, but any workarounds won't work with the Linux Container. If you want to use Active Directory, you must connect to it via the [LDAP authentication provider](/docs/security/authentication/ldap).

:::div{.hint}
The LDAP authentication provider was introduced in Octopus Deploy **2021.2**.
:::

## Prep Work

We recommend making the following changes and testing them on your existing Octopus Deploy instance before the move. This prep work will keep the number of changes made during the actual migration low.

### Migrate from Active Directory to LDAP

Migrating from Active Directory to LDAP is not as simple as turning off Active Directory authentication and enabling LDAP authentication. As far as Octopus is concerned, they are two separate auth providers. Having Active Directory and LDAP enabled is treated the same as having Google Auth and LDAP enabled. Both users and teams are associated with 0 to N external identities. The external identities are stored in an array on the user or team object.
For example, a user object with both Active Directory and LDAP could appear as:

```json
{
  "Id": "Users-1",
  "Username": "professor.octopus",
  "DisplayName": "Professor Octopus",
  "IsActive": true,
  "IsService": false,
  "EmailAddress": "professor.octopus@octopus.com",
  "CanPasswordBeEdited": true,
  "IsRequestor": true,
  "Identities": [
    {
      "IdentityProviderName": "Active Directory",
      "Claims": {
        "email": { "Value": "", "IsIdentifyingClaim": true },
        "upn": { "Value": "professor.octopus@mycustomdomain.local", "IsIdentifyingClaim": true },
        "sam": { "Value": "\\professor.octopus", "IsIdentifyingClaim": true },
        "dn": { "Value": "Professor Octopus", "IsIdentifyingClaim": false }
      }
    },
    {
      "IdentityProviderName": "LDAP",
      "Claims": {
        "email": { "Value": null, "IsIdentifyingClaim": true },
        "upn": { "Value": "professor.octopus@mycustomdomain.local", "IsIdentifyingClaim": true },
        "uan": { "Value": "professor.octopus", "IsIdentifyingClaim": true },
        "dn": { "Value": "Professor Octopus", "IsIdentifyingClaim": false }
      }
    }
  ]
}
```

To migrate from Active Directory to LDAP, you will need to:

1. Enable and configure the [LDAP auth provider](/docs/security/authentication/ldap).
2. Add the LDAP auth provider to each user and group. We created two scripts to help speed that up:
   - [Swap Active Directory groups with matching LDAP groups](/docs/octopus-rest-api/examples/users-and-teams/swap-ad-domain-group-with-ldap-group) for Octopus teams.
   - [Swap Active Directory login records with matching LDAP ones](/docs/octopus-rest-api/examples/users-and-teams/swap-users-ad-domain-to-ldap) for Octopus users.
3. Log out with your current user and log back in, ideally with a new test user.
4. Verify permissions are as expected.
5. Test a few more users out.
6. Disable the Active Directory auth provider.

### Configure a Windows Worker

If you currently have many PowerShell and C# script steps configured to run on the Octopus Server, you will need to configure a Windows Worker to handle that responsibility.
Under the covers, the Octopus Server includes a [built-in worker](/docs/security/built-in-worker). When you configure a step to run on the Octopus Server, it runs on the built-in worker. Switching from the Windows to the Linux Container means changing the underlying OS those steps previously ran on. If your scripts are not PowerShell Core compliant, they will fail. The vast majority of scripts we encounter work with both PowerShell 5.1 and PowerShell Core. However, if you have a lot of older scripts, there is a chance they could fail.

Instead of running directly on the Octopus Server's built-in worker, you will need to offload that work onto Windows [workers](/docs/infrastructure/workers). When you create your first worker, you will notice a pre-existing worker pool, `Default Worker Pool`. When the `Default Worker Pool` does not have any workers, all tasks configured to run on the Octopus Server run on the built-in worker.

The fastest way to change all the steps configured to run on the Octopus Server to run on a worker is to add a worker to the `Default Worker Pool`. However, doing so is also the riskiest, as it could cause a lot of deployments to fail. Our recommendation is to keep that risk to a minimum.

1. Create a new worker pool, `Windows Worker Pool`.
1. Create the new Windows Servers and configure them as workers. Register them to the `Windows Worker Pool`.
1. Pick a handful of projects and update the deployment process to use the new `Windows Worker Pool`.
1. Create some test releases and deployments to ensure the new Windows Workers are working correctly.
1. Assuming the testing is successful, you can add those workers to the `Default Worker Pool` or update the remaining steps.

### Copy Files

Octopus Deploy stores all the BLOB data (deployment logs, runbook logs, packages, artifacts, event exports, etc.) on a file share.
If you are moving from a single server, be it hosting Octopus in a Windows Container or directly on a Windows VM, you will need to copy files to your new storage provider. Once your shared storage provider has been created, you'll want to copy files over from these folders:

- TaskLogs
- Artifacts
- Packages
- EventExports

If you are moving from a Windows VM, the default path for those folders is `C:\Octopus`. For example, the task logs folder would be `C:\Octopus\TaskLogs`. If you are unsure of the path, you can find it in the Octopus Deploy UI by navigating to **Configuration ➜ Settings ➜ Server Folders**.

:::div{.warning}
Failure to copy files over to the new storage location for the Linux Container to access will result in the following:

- Existing deployment and runbook run screens will be empty.
- Project, Step Template, and Tenant images will not appear.
- Attempting to download any existing artifacts will fail.
- If you are using the built-in repository, any existing deployments that use packages hosted there will fail as they won't be able to access them.
:::

### Polling Tentacles

Polling Tentacles are designed to handle connection interruptions, such as when the Octopus Server is restarted. When the Octopus Server comes back online, any running Polling Tentacles will re-connect. If you are currently using Polling Tentacles, you will need to ensure:

1. The same server URL will be used after the move.
1. You enable the communication port used (default: `10943`) on the Octopus Server Linux Container.

If you wish to use a new URL, you will need to run this command on each machine hosting the polling tentacles. Replace the server and API key with values specific to your instance.
Windows:

```
C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://your-octopus-url --apikey=API-YOUR-KEY --server-comms-port=10943
```

Linux:

```
/opt/octopus/tentacle/Tentacle poll-server --server=https://your-octopus-url --apikey=API-YOUR-KEY --server-comms-port=10943
```

## Folder paths

The Dockerfile runs the Octopus Server installer each time the Octopus Server Windows Container or Octopus Server Linux Container starts up. The installer runs a series of commands to configure Octopus Deploy. The installer will run the [path](/docs/octopus-rest-api/octopus.server.exe-command-line/path) command to update the paths to leverage the different folder structure. For example:

```
./Octopus.Server path --instance OctopusServer --nugetRepository "/repository" --artifacts "/artifacts" --taskLogs "/taskLogs" --eventExports "/eventExports" --cacheDirectory="/cache" --skipDatabaseCompatibilityCheck --skipDatabaseSchemaUpgradeCheck
```

Just like the Octopus Server Windows Container, you will want to provide the following volume mounts.

| Name | Description |
| ------------- | ------- |
|**/repository**|Package path for the built-in package repository|
|**/artifacts**|Path where artifacts are stored|
|**/taskLogs**|Path where task logs are stored|
|**/cache**|Path where cached files are stored|
|**/eventExports**|Path where event audit logs are exported|

If you are running Octopus Server directly on Docker, read the Docker [docs](https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only) about mounting volumes. You will need to update your Docker compose or Docker run command to point your existing folders to the new volume mounts. If you are running Octopus Server on Kubernetes, you will want to configure [persistent volume mounts](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
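As an illustrative sketch, a Docker run command mapping copied folders to these volume mounts might look like the following. The host paths under `/srv/octopus` are assumptions for this example; use whichever locations you copied your files to, and supply your own connection string and master key:

```shell
# Hypothetical host paths; substitute the locations you copied your
# Octopus folders to. Each host folder maps to a volume mount the
# Linux container expects (see the table above).
docker run --detach \
  --name octopusdeploy \
  -v /srv/octopus/repository:/repository \
  -v /srv/octopus/artifacts:/artifacts \
  -v /srv/octopus/taskLogs:/taskLogs \
  -v /srv/octopus/cache:/cache \
  -v /srv/octopus/eventExports:/eventExports \
  -e "ACCEPT_EULA=Y" \
  -e "DB_CONNECTION_STRING=<your-connection-string>" \
  -e "MASTER_KEY=<your-master-key>" \
  -p 80:8080 -p 10943:10943 \
  octopusdeploy/octopusdeploy
```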
:::div{.hint}
Due to how paths are stored, you cannot run an Octopus Server Windows Container and Octopus Server Linux Container simultaneously. It has to be all Windows or all Linux.
:::

## Database connection string and master key

Just as it is with Octopus Server running on Windows (VM or Container), you will need to provide the database connection string and master key to the Octopus Server Linux Container. The underlying database technology Octopus Deploy relies upon, SQL Server, has not changed. The connection string format is the same, so you shouldn't need to change anything.

## Server thumbprint

The certificate backing the server thumbprint is stored in the database. Any Tentacles that trust your existing server thumbprint will continue to work as-is when you move to the Octopus Server Linux Container.

## Outage window

Migrating to the Octopus Server Linux Container will require an outage window. The steps to perform during the outage window are:

1. Back up the master key.
1. Enable [Maintenance Mode](/docs/administration/managing-infrastructure/maintenance-mode) to prevent anyone from deploying or making changes during the transition.
1. Shut down the existing Octopus Deploy instance.
1. Perform a final file copy to pick up any new files.
1. Start up the Octopus Server Linux Container.
1. Perform some test deployments, and verify you can view pre-existing deployment logs and runbook runs. Verify all images appear.
1. Update any Octopus Server DNS entries.
1. Disable Maintenance Mode.

## Further Reading

This guide is meant to address the differences you may encounter when switching from the Octopus Server Windows Container to the Octopus Server Linux Container. For a deeper dive into how to run the Octopus Server Linux Container, please refer to [this documentation](/docs/installation/octopus-server-linux-container).
# Octopus Server Container with systemd

Source: https://octopus.com/docs/installation/octopus-server-linux-container/systemd-service-definition.md

You can use `systemd` to boot the Octopus Server Linux container each time the OS starts. To do this, create a file called `/etc/systemd/system/docker-octopusdeploy.service` with the following contents:

:::div{.hint}
Be sure to change the `ADMIN_PASSWORD` and `MASTER_KEY` from the defaults shown here.
:::

```
[Unit]
Description=Daemon for octopusdeploy
After=docker-mssql.service docker.service
Wants=
Requires=docker-mssql.service docker.service
StartLimitIntervalSec=20
StartLimitBurst=3

[Service]
Restart=on-failure
TimeoutStartSec=0
RestartSec=5
Environment="HOME=/root"
SyslogIdentifier=docker-octopusdeploy
ExecStartPre=-/usr/bin/docker create --net octopus -m 0b -e "ADMIN_USERNAME=admin" -e "ADMIN_EMAIL=example@example.org" -e "ADMIN_PASSWORD=Password01!" -e "ACCEPT_EULA=Y" -e "DB_CONNECTION_STRING=Server=mssql,1433;Database=Octopus;User Id=SA;Password=Password01!;ConnectRetryCount=6" -e "MASTER_KEY=6EdU6IWsCtMEwk0kPKflQQ==" -e "DISABLE_DIND=Y" -p 80:8080 -p 10943:10943 -p 8443:8443 --restart=always --name octopusdeploy octopusdeploy/octopusdeploy
ExecStart=/usr/bin/docker start -a octopusdeploy
ExecStop=-/usr/bin/docker stop --time=0 octopusdeploy

[Install]
WantedBy=multi-user.target
```

Note that we assume a Docker bridge network called `octopus` exists.
This can be created with the command:

```bash
docker network create -d bridge octopus
```

The Octopus service also relies on an MS SQL service defined in the file `/etc/systemd/system/docker-mssql.service` with the following contents:

```
[Unit]
Description=Daemon for mssql
After=docker.service
Wants=
Requires=docker.service
StartLimitIntervalSec=20
StartLimitBurst=3

[Service]
Restart=on-failure
TimeoutStartSec=0
RestartSec=5
Environment="HOME=/root"
SyslogIdentifier=docker-mssql
ExecStartPre=-/usr/bin/docker create --net octopus -m 0b -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password01!" -e "MSSQL_PID=Express" -e "MSSQL_MEMORY_LIMIT_MB=2048" -p 1433:1433 --restart=always --name mssql mcr.microsoft.com/mssql/server:2019-latest
ExecStart=/usr/bin/docker start -a mssql
ExecStop=-/usr/bin/docker stop --time=0 mssql

[Install]
WantedBy=multi-user.target
```

To load the new service files, run:

```bash
systemctl daemon-reload
```

Then start the services with the commands:

```bash
systemctl start docker-mssql
systemctl start docker-octopusdeploy
```

# AWS RDS

Source: https://octopus.com/docs/installation/sql-database/aws-rds.md

Each Octopus Server node stores project, environment, and deployment-related data in a shared Microsoft SQL Server database. Since this database is shared, it's important that the database server is also highly available. To host the Octopus SQL database in AWS, there are two options to consider:

- [SQL Server on EC2](https://docs.aws.amazon.com/sql-server-ec2/latest/userguide/sql-server-on-ec2-overview.html) - To run SQL Server on a VM, please refer to our [self-managed SQL Server guide](/docs/installation/sql-database/self-managed-sql-server).
- [AWS RDS for SQL Server](https://aws.amazon.com/rds/sqlserver/)

## High Availability

The database is a critical component of Octopus Deploy. If the database is lost or destroyed, all your configuration will be lost with it.
We highly recommend leveraging a combination of backups and SQL Server's high availability functionality. We recommend using [Multi-AZ](https://aws.amazon.com/rds/features/multi-az/) with RDS to ensure the resilience and availability of your Octopus database. This will ensure that your data is replicated across multiple availability zones (Multi-AZ) within your region. In the event of a failure of the primary zone, Multi-AZ automatically switches to the secondary zone. Multi-AZ supports SQL Server Database Mirroring (DBM) or Always On Availability Groups (AG); refer to [Multi-AZ deployments for Amazon RDS for Microsoft SQL Server](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerMultiAZ.html) for more details.

## Disaster Recovery

Amazon provides [several mechanisms](https://docs.aws.amazon.com/prescriptive-guidance/latest/dr-standard-edition-amazon-rds/introduction.html) for Disaster Recovery (DR) should an entire region fail. We recommend reviewing the [DR solution matrix](https://docs.aws.amazon.com/prescriptive-guidance/latest/dr-standard-edition-amazon-rds/dr-matrix.html) to determine which method best suits your organization's needs.

# Applied Manifest Diffs

Source: https://octopus.com/docs/kubernetes/deployment-verification/applied-manifests/diffs.md

Sometimes it is difficult to know what changes have occurred to your Kubernetes manifests between deployments. Being able to compare the changes in manifests between deployments allows for easier resolution and debugging of issues. You can compare them by enabling the `Show Diffs` toggle on the `Applied Manifests` view on the `KUBERNETES` tab.

:::figure
![A screenshot of the Kubernetes Applied Manifests diffs toggle](/docs/img/deployments/kubernetes/deployment-verification/applied-manifests-diffs-toggle.png)
:::

## How it works

When `Show Diffs` is toggled on, the current Deployment is compared to the previous Deployment. You can change the deployment to compare against using the drop-down.
:::figure
![A screenshot of the Kubernetes Applied Manifests diffs deployment selector](/docs/img/deployments/kubernetes/deployment-verification/applied-manifests-diffs-selector.png)
:::

When comparing manifests between Deployments, Octopus uses the Resource information (Name, Namespace, Kind) as well as the Deployment step to match manifests. This means that if a resource has been renamed, or changes namespaces, it will be shown as a new manifest, not a changed manifest. Current deployments can only be compared with older ones. To compare a deployment with a more recent one, first select the recent deployment in the timeline, and then compare it to your current one.

### Manifest differences

In the navigation tree on the left-hand side, there are 4 icons that indicate the changes for a manifest between deployments.

| Label | Description |
| :---------------------------- | :--------------------------------------------------------------------------------------------------- |
| Manifest changed in VERSION | The manifest was changed in the current deployment |
| Manifest added in VERSION | The manifest was added in the current deployment |
| Manifest not found in VERSION | The manifest was not found in the current deployment. This may be for a number of reasons (see below) |
| Manifest unchanged in VERSION | The manifest was unchanged in the current deployment |

#### Why is my manifest showing as removed?

There are a couple of reasons why a manifest may show as removed when compared to a previous Deployment. They are:

1. The manifest is no longer in the source Git repository, Package, or Inline YAML
2. The resource name or namespace changed
3. The step that manifest was deployed in was not executed (either due to being skipped or not executed due to rules)
4.
The step was removed and re-added between deployments

In cases 2, 3, and 4, because Octopus Server matches manifests on the resource details and the step details, any change to these will result in the manifest being shown as removed.

### Diff options

Next to the `Show Diffs` toggle, there is a menu for changing diff options. This allows you to either view the diffs in a split view or a unified view, and allows you to hide manifests that were unchanged.

:::figure
![A screenshot of the Kubernetes Applied Manifests diffs menu](/docs/img/deployments/kubernetes/deployment-verification/diffs-menu.png)
:::

## Kubernetes Secret resources and Octopus sensitive variable changes

As detailed in the [Applied Manifest](/docs/kubernetes/deployment-verification/applied-manifests#kubernetes-secret-resources-and-octopus-sensitive-variables) documentation, Octopus will obfuscate values in Kubernetes Secrets as well as any identified sensitive Octopus variable. When performing a diff, Octopus continues to obfuscate the secrets, but will still indicate if the obfuscated value has changed between deployments.

:::figure
![A screenshot of the Kubernetes Applied Manifests diffs for secrets](/docs/img/deployments/kubernetes/deployment-verification/secret-diffs.png)
:::

## Can I compare to my live resources?

The `Applied Manifest` view allows users to independently compare manifests generated at each step. In contrast, the live view aggregates the manifests, displaying the combined manifest from all steps completed during a deployment. You cannot compare or view the combined manifest on this page. Navigate to the Live page for the combined manifest. To learn more about the live status page and combined manifests, see the docs [here](/docs/kubernetes/live-object-status).

# Deploy Kubernetes YAML

Source: https://octopus.com/docs/kubernetes/steps/yaml.md

Octopus supports the deployment of Kubernetes resources through the `Deploy Kubernetes YAML` step.
This step lets you configure Kubernetes manually, leveraging the full power of Octopus features to support your setup. This approach is more flexible and gives you complete control over the YAML, but requires deeper knowledge of Kubernetes configuration.

## YAML Sources

You can source your YAML from three different sources:

- Git Repository
- Package
- Inline YAML

### Git Repository

:::div{.warning}
Sourcing from a Git repository clones the whole repository onto Octopus Server during a deployment. Ensure that you **do not have any sensitive data** in your git repository.
:::

Sourcing from a Git Repository can streamline your deployment process by reducing the number of steps required to get your YAML into Octopus. In Octopus, when YAML is sourced from a Git repository, we call it a Git Manifest. To configure a Git Repository source, select the `Git Repository` option as your YAML Source.

:::figure
![Deploy Kubernetes YAML with a Git Manifest](/docs/img/deployments/kubernetes/deploy-raw-yaml/git-repository.png)
:::

:::div{.hint}
When you choose the tip of a branch for your Git Manifest when creating a Release, the commit hash is saved to the Release. This means redeploying that release will only ever use that specific commit and not the _new_ tip of the branch.
:::

### Package

Sourcing from a Package is the traditional way to load data from external sources. You can specify the Package Feed and Package ID as well as a path or paths† to the file(s) in the package that you want to deploy. To configure a package source, select the `Package` option as your YAML Source.

:::figure
![Deploy Kubernetes YAML with a Package](/docs/img/deployments/kubernetes/deploy-raw-yaml/package.png)
:::

†In 2023.3, sourcing from packages can take advantage of [Glob Patterns and Multiple Paths](/docs/deployments/kubernetes/deploy-raw-yaml#glob-patterns-and-multiple-paths).

### Inline YAML

The simplest way to get going with this step is to use Inline YAML.
You can create your YAML resources in the inline editor, which will be saved in the project in Octopus. To configure an inline YAML source, select the `Inline YAML` option as your YAML Source.

:::figure
![Deploy Kubernetes YAML with an Inline Script](/docs/img/deployments/kubernetes/deploy-raw-yaml/inline-yaml.png)
:::

:::div{.warning}
This is **not** the recommended approach for advanced cases as it does not allow for version management unless you are using it in conjunction with [Config As Code](/docs/projects/version-control).
:::

## Referencing packages

Container images can be selected as **[Referenced Packages](/docs/deployments/custom-scripts/run-a-script-step#referencing-packages)** to automatically generate variables referring to the image name and tag that can be substituted in your manifests. For a package with the name `nginx`, you can substitute the image repository with `#{Octopus.Action.Package[nginx].PackageId}` and the tag with `#{Octopus.Action.Package[nginx].PackageVersion}`. The tag is selected when creating the release, allowing you to create new releases without any changes to your YAML manifests.

### Automatically creating releases

Using referenced images with your deploy YAML step allows [external feed triggers](/docs/projects/project-triggers/external-feed-triggers) to automatically create releases when one or more new images are pushed to your registries. Further to this, [lifecycles](/docs/releases/lifecycles) can be used to fully automate deploying your releases to selected environments.

## Glob Patterns and Multiple Paths {#glob-patterns-and-multiple-paths}

The Git Repository and Package data sources require you to specify which files you would like to apply from the git repo or package. Previously, we only allowed a single file to be applied via an explicit path. In release 2023.3, we added the ability to source multiple files via multiple paths for both Git Repositories and Packages.
There are a few different ways to take advantage of this feature:

1. You can list several paths by separating them with a new line.

   ```text
   deployments/apply-first.yaml
   services/apply-second.yml
   ```

   **Note:** _Each path will be applied in order from top to bottom._

2. You can use a glob pattern to select multiple files in a single path.

   ```text
   **/*.{yaml,yml}
   ```

   **Note:** *All files matching a glob pattern will be applied at the same time.*

3. Both options at the same time. This gives you the best of both worlds!

   **Note:** *If multiple glob patterns find the same file, the file will be applied twice.*

[Learn more about glob patterns](/docs/deployments/kubernetes/glob-patterns).

:::div{.hint}
**Step updates**

**2024.1:**

- `Deploy Raw Kubernetes YAML` was renamed to `Deploy Kubernetes YAML`.
- If you store your project configuration in a Git repository using the [Configuration as code feature](/docs/projects/version-control), you can source your YAML from the same Git repository as your deployment process by selecting Project as the Git repository source. When creating a Release, the commit hash used for your deployment process will also be used to source the YAML files. You can learn more in [this blog post](https://octopus.com/blog/git-resources-in-deployments).

**2023.3:**

- Sourcing from Git Repositories was added. You can learn more in [this blog post](https://octopus.com/blog/manifests-from-git).
:::

# Octopus Kubernetes agent permissions

Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/permissions.md

The Kubernetes agent uses service accounts to manage access to cluster objects. There are 2 main components that run with different permissions in the Kubernetes agent:

- **Agent Pod** - This is the main component and is responsible for receiving work from Octopus Server and scheduling it in the cluster.
- **Script Pods** - These are run to execute work on the cluster.
When Octopus issues work to the agent, the Tentacle will schedule a pod to run the script to execute the required work. These are short-lived, single-use pods which are removed by Tentacle when they are complete.

## Agent Pod Permissions

The agent pod uses a service account which only allows the agent to create, view, and modify pods, pod logs, config maps, and secrets in the agent namespace. Adjusting these permissions is not supported.

| Variable Name | Description | Default Value |
| :--------------------------------- | :--------------------------------------- | :---------------------- |
| `agent.serviceAccount.name` | The name of the agent service account | `-tentacle` |
| `agent.serviceAccount.annotations` | Annotations given to the service account | `[]` |

## Script Pod Permissions

By default, the script pods (the pods which run your deployment steps) are given cluster-wide admin access to deploy any and all cluster objects in any namespaces as configured in your deployment processes. The service account for script pods can be customized in a few ways:

| Variable Name | Description | Default Value |
| :-------------------------------------------- | :--------------------------------------------------------------- | :---------------------------------------------- |
| `scriptPods.serviceAccount.targetNamespaces` | Limit the namespaces that the service account can interact with. | `[]` (When empty, all namespaces are allowed.) |
| `scriptPods.serviceAccount.clusterRole.rules` | Give the service account custom rules | See the default rules below |
| `scriptPods.serviceAccount.name` | The name of the scriptPods service account | `-scripts` |
| `scriptPods.serviceAccount.annotations` | Annotations given to the service account | `[]` |

The default cluster role rules are:

```yaml
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
```

### Examples
Target Namespaces `scriptPods.serviceAccount.targetNamespaces`
**command:**

```bash
helm upgrade --install --atomic \
  --set scriptPods.serviceAccount.targetNamespaces="{development,preproduction}" \
  --set agent.acceptEula="Y" \
  --set agent.targetName="Nonproduction Agent" \
  --set agent.serverUrl="http://localhost:5000/" \
  --set agent.serverCommsAddress="http://localhost:10943/" \
  --set agent.space="Default" \
  --set agent.targetEnvironments="{Development,Preproduction}" \
  --set agent.targetRoles="{k8s-cluster-tag}" \
  --set agent.bearerToken="XXXX" \
  --version "1.*.*" \
  --create-namespace --namespace octopus-agent-my-agent \
  my-agent \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```
#### Cluster Role Rules (`scriptPods.serviceAccount.clusterRole.rules`)
**values.yaml:** ```yaml scriptPods: serviceAccount: clusterRole: rules: - apiGroups: - '*' resources: - 'configmaps' - 'deployments' - 'services' verbs: - '*' - nonResourceURLs: - '*' verbs: - '*' agent: acceptEula: 'Y' targetName: 'No Secret Access Production Agent' serverUrl: 'http://localhost:5000/' serverCommsAddress: 'http://localhost:10943/' space: 'Default' targetEnvironments: - 'Production' targetRoles: - 'k8s-cluster-tag' bearerToken: 'XXXX' ```
**command:** ```bash helm upgrade --install --atomic \ --values values.yaml \ --version "1.*.*" \ --create-namespace --namespace octopus-agent-my-agent \ my-agent \ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ```
# Rollback Kubernetes deployment Source: https://octopus.com/docs/kubernetes/tutorials/kubernetes-rollbacks.md This guide will walk through rolling back a Kubernetes Deployment. The example application used in this guide is a containerized version of [PetClinic](https://bitbucket.org/octopussamples/petclinic/src/master/) that will create the following pods: - MySQL database container for the backend (deployment) - Flyway database migrations container (job) - PetClinic web frontend container (deployment) Database rollbacks are out of scope for this guide; refer to this [article](https://octopus.com/blog/database-rollbacks-pitfalls), which discusses methods of database rollbacks and when they are appropriate. ## Existing deployment process For this guide, we'll start with an existing deployment process for deploying PetClinic to a Kubernetes cluster: 1. Deploy MySQL Container 1. Deploy PetClinic web 1. Run Flyway job 1. Verify the deployment 1. Notify stakeholders :::figure ![](/docs/img/deployments/patterns/rollbacks/kubernetes/octopus-original-deployment-process.png) ::: :::div{.success} View that deployment process on [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/01-kubernetes-original/deployments/process). Please log in as a guest. ::: ## Zero-configuration Rollback The easiest way to roll back to a previous version is to: 1. Find the release you want to roll back. 2. Click the **REDEPLOY** button next to the environment you want to roll back. That redeployment will work because a snapshot is taken when you create a release. The snapshot includes: - Deployment Process - Project Variables - Referenced Variable Sets - Package Versions Re-deploying the previous release will re-run the deployment process as it existed when that release was created.
By default, the deploy package steps (such as deploy to IIS or deploy a Windows Service) will extract to a new folder each time a deployment is run, perform the [configuration transforms](/docs/projects/steps/configuration-features/structured-configuration-variables-feature/), and [run any scripts embedded in the package](/docs/deployments/custom-scripts/scripts-in-packages). :::div{.hint} Zero Configuration Rollbacks should work for most of our customers. However, your deployment process might need a bit more fine-tuning. The rest of this guide is focused on disabling specific steps during a rollback process. ::: ## Simple rollback process The most common reason for a rollback is that something is wrong with the release. In these cases, you'll want to block the bad release from [moving forward](/docs/releases/prevent-release-progression). The updated deployment process for a simple rollback would look like this: 1. Calculate Deployment Mode 1. Deploy MySQL Container (skip during rollback) 1. Deploy PetClinic web 1. Run Flyway job (skip during rollback) 1. Verify the deployment 1. Notify stakeholders 1. Block Release Progression (only during rollback) :::figure ![](/docs/img/deployments/patterns/rollbacks/kubernetes/octopus-simple-rollback-process.png) ::: :::div{.success} View that deployment process on [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/02-kubernetes-simple-rollback/deployments/process). Please log in as a guest. ::: ### Calculate deployment mode Calculate Deployment Mode is a [community step template](https://library.octopus.com/step-templates/d166457a-1421-4731-b143-dd6766fb95d5/actiontemplate-calculate-deployment-mode) created by Octopus Deploy. It compares the release number being deployed with the current release number for the environment. When the release number is greater than the current release number, it is a deployment. When it is less, then it is a rollback.
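The comparison the step template performs can be sketched in shell. This is a minimal sketch with hypothetical release numbers; the real step template reads them from the deployment context and exposes the result as the `RunOnDeploy`/`RunOnRollback` output variables:

```shell
#!/bin/sh
# Hypothetical release numbers; Calculate Deployment Mode reads the real
# values from the deployment and the environment's current release.
deploying="2021.09.23.1"   # release number being deployed
current="2021.09.23.2"     # current release number for the environment

# sort -V compares version-style strings; the highest number sorts last
highest=$(printf '%s\n%s\n' "$deploying" "$current" | sort -V | tail -n 1)

if [ "$deploying" = "$current" ]; then
  mode="Redeploy"
elif [ "$highest" = "$deploying" ]; then
  mode="Deploy"     # the incoming release is newer
else
  mode="Rollback"   # the incoming release is older
fi
echo "$mode"
```

With the numbers above, this prints `Rollback`, which is the scenario the rest of this guide handles.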
The step template sets a number of [output variables](/docs/projects/variables/output-variables), including ones you can use in variable run conditions. ### Skipping database steps Deploying the MySQL container and running the Flyway job should be skipped in a rollback scenario. For this guide, we're going to assume that the deployment of the MySQL container wasn't the cause of the rollback. As previously stated, database rollbacks are out of scope for this guide, so we'll skip the Flyway Job container as well. To ensure that both of those steps are not run during a rollback, use the following output variable from the `Calculate Deployment Mode` step as the Variable Run condition. ``` #{Octopus.Action[Calculate Deployment Mode].Output.RunOnDeploy} ``` :::div{.hint} When viewing the deployment process at a glance, it is not readily apparent that a step has a run condition associated with it. Octopus Deploy provides a `Notes` field for each step where you can add information such as in which conditions the step will run as a way of self-documentation. ::: ### Block Release Progression Blocking Release Progression is an optional step to add to your rollback process. [The Block Release Progression](https://library.octopus.com/step-templates/78a182b3-5369-4e13-9292-b7f991295ad1/actiontemplate-block-release-progression) step template uses the API to [prevent the rolled back release from progressing](/docs/releases/prevent-release-progression).
This step includes the following parameters: - Octopus Url: #{Octopus.Web.BaseUrl} (default value) - Octopus API Key: API Key with permissions to block releases - Release Id to Block: #{Octopus.Release.CurrentForEnvironment.Id} (default value) - Reason: This can be pulled from a manual intervention step or set to `Rolling back to #{Octopus.Release.Number}` This step will only run on a rollback; set the run condition for this step to: ``` #{Octopus.Action[Calculate Deployment Mode].Output.RunOnRollback} ``` To unblock that release, go to the release page and click the **UNBLOCK** button. ## Complex rollback process A feature of Kubernetes is the revision history of the cluster components. The command `kubectl rollout history deployment.v1.apps/` lists all deployment revisions. ``` REVISION CHANGE-CAUSE 1 2 3 ``` Using this feature, we can create a rollback process that would allow us to roll back quickly. The new deployment process would look like this: 1. Calculate Deployment Mode 1. Rollback Reason (only during rollback) 1. Deploy MySQL Container (skip during rollback) 1. Deploy PetClinic web 1. Run Flyway job (skip during rollback) 1. Verify the deployment 1. Notify stakeholders 1. Rollback to the previous version for PetClinic Web (only during rollback) 1. Block Release Progression (only during rollback) :::figure ![](/docs/img/deployments/patterns/rollbacks/kubernetes/octopus-complex-rollback-process.png) ::: :::div{.success} View that deployment process on [samples instance](https://samples.octopus.app/app#/Spaces-762/projects/03-kubernetes-complex-rollback/deployments/process). Please log in as a guest. ::: Next, we'll go through the newly added and altered steps: ### Rollback reason This is a [Manual Intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) step that prompts the user for the reason they are rolling back. 
The text entered is stored in an output variable which will be used in the Block Release Progression step further down the process. ### Deploy PetClinic Web The revision history command for Kubernetes showed that there were multiple revisions stored within Kubernetes for the deployment. However, it's not obvious which revision belongs to which Octopus release. Adding a `kubernetes.io/change-cause` annotation to the `Deploy PetClinic Web` step would add the Octopus Release Number as the `change-cause` so we could later parse it for which revision to roll back to. :::figure ![](/docs/img/deployments/patterns/rollbacks/kubernetes/octopus-k8s-deployment-annotation.png) ::: Running `kubectl rollout history deployment.v1.apps/` would now show something like this. ``` REVISION CHANGE-CAUSE 1 2021.09.23.0 2 2021.09.23.1 3 2021.09.23.2 ``` ### Rollback to the previous version for PetClinic Web Using the annotation from the `Deploy PetClinic Web` step, we can use the following script to identify the revision we want to roll back to and perform the rollback using the built-in functionality of Kubernetes. This step uses the `Run a Kubectl CLI Script` step with the following code. ```powershell # Init variables $k8sRollbackVersion = 0 $rollbackVersion = $OctopusParameters['Octopus.Release.Number'] $namespace = $OctopusParameters['Project.Namespace.Name'] $deploymentName = $OctopusParameters['Project.Petclinic.Deployment.Name'] # Get revision history Write-Host "Getting deployment $deploymentName revision history ..." $revisionHistory = (kubectl rollout history deployment.v1.apps/$deploymentName -n $namespace) $revisionHistory = $revisionHistory.Split("`n") # Loop through history starting at index 2 (the first couple of lines aren't versions) Write-Host "Searching revision history for version $rollbackVersion ..."
for ($i = 2; $i -lt $revisionHistory.Count - 1; $i++) { # Split it into two array elements $revisionSplit = $revisionHistory[$i].Split(" ", [System.StringSplitOptions]::RemoveEmptyEntries) # Check version if ($revisionSplit[1] -eq $rollbackVersion) { # Record version index Write-Host "Version $rollbackVersion found!" $k8sRollbackVersion = $revisionSplit[0] # Get out of for break } } # Check to see if something was found if ($k8sRollbackVersion -gt 0) { # Issue rollback Write-Host "Rolling Kubernetes deployment $deploymentName to revision $k8sRollbackVersion ..." kubectl rollout undo deployment.v1.apps/$deploymentName -n $namespace --to-revision=$k8sRollbackVersion } else { Write-Error "Version $rollbackVersion not found in the cluster revision history." } ``` ### Block release progression The `Rollback Reason` step captures the reason for the rollback. We can pass the text entered in this step to the `Reason` field of this step by using the following output variable. ``` #{Octopus.Action[Rollback reason].Output.Manual.Notes} ``` ## Choosing a rollback strategy It is our recommendation that you start with the simple rollback strategy, moving to the complex if you determine that the simple method doesn't suit your needs. # Prompt-based project creation Source: https://octopus.com/docs/octopus-ai/assistant/project-creation.md The Octopus AI Assistant can create fully configured deployment projects from a simple text prompt, helping you get started with deployments quickly. Instead of manually setting up project configurations, deployment processes, targets, and environments, you can describe what you want to deploy and let the AI Assistant generate a complete project based on proven best practices. The assistant generates resource configuration in [HCL](https://developer.hashicorp.com/terraform/language) (HashiCorp Configuration Language) through the chat interface for you to review before approving or aborting the deployment. 
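For context, the generated HCL typically resembles resources from the community `octopusdeploy` Terraform provider. The fragment below is an illustrative, hand-written sketch, not actual assistant output; the resource name and IDs are invented placeholders:

```hcl
# Hypothetical fragment; attribute values are placeholders, not real IDs
resource "octopusdeploy_project" "azure_web_app" {
  name             = "Azure Web App"
  description      = "Created from a prompt"
  lifecycle_id     = "Lifecycles-1"      # placeholder lifecycle ID
  project_group_id = "ProjectGroups-1"   # placeholder project group ID
}
```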
We've trained the large language model used by the Octopus AI Assistant with hand-crafted template projects that bake in best practices for common deployment scenarios. ## Creating a project with Octopus AI Assistant When you launch the Octopus AI Assistant, one of the examples is to create a new project: ![Octopus AI Assistant default prompt window](/docs/img/octopus-ai-assistant/octopus-ai-assistant-project-creation-examples.png) Selecting this will present you with our pre-configured project creation prompts, which use a scaffolded template with our best practices built in: ![Octopus AI Assistant pre-configured project options](/docs/img/octopus-ai-assistant/octopus-ai-assistant-project-creation-examples-2.png) Choose one of the example prompts to create an opinionated project. In the example below, the prompt **Create an Azure Web App project called "Azure Web App"** is selected. This can be customized through the prompt based on your specific requirements. Check the [expanding on the example prompts](#expanding-on-the-example-prompts) section of the documentation. The Octopus AI Assistant may take 60-90 seconds to generate a plan for the project. When it has generated the resource configuration, the output of `terraform plan` will be displayed so you can see all resources that will be created. You can approve or abort. ![Deploying an Azure web app project with the Octopus AI Assistant](/docs/img/octopus-ai-assistant/octopus-ai-assistant-project-create-azure-webapp.png) After the project is created, the next step is to create and deploy a release to validate the project setup. The deployment logs provide instructions and links to help you customize your project further. ## Validating the project configuration You'll find the newly deployed project in the list of projects on the dashboard. It's worth spending a few minutes in the project to look at what was created, especially in the process, runbooks, and variables.
Some project deployments using the Octopus AI Assistant also deploy resources at the instance level of your Octopus instance, like Lifecycles and Accounts. Each project deployed with our best practices has a step in the process to validate your configuration, which will help guide you through any final configurations before the project is deployable. One of the first steps you should take is to create and deploy a release, and review the deployment logs: 1. Open the project you deployed with the Octopus AI Assistant 2. Click **Releases** 3. Click **Create Release** 4. Click **Save** 5. Click **Deploy to Development** 6. Click **Deploy** When the deployment completes, go to the **Task Summary** tab for the release. The important step to check is **Step 1: Validate setup**, and review the output. This step runs a predefined script to check the configuration of your Octopus Deploy environment, and highlights any steps you need to take before you can run a deployment using this project. If we tell you an element hasn't been configured, we also provide you with a link to the documentation on how to configure it. ![Octopus AI Assistant pre-configured project options](/docs/img/octopus-ai-assistant/octopus-ai-assistant-project-create-validate-setup.png) You can also use the Octopus AI Assistant to help guide you through these configuration items. Treat the assistant like any other large language model chatbot. For example, you could ask: ```text Can you help me configure an azure service principal for use from Octopus Deploy ``` The Octopus AI Assistant will break down the steps you need to take in Azure and Octopus Deploy to create and configure the service principal. ## Expanding on the example prompts We provide example prompts for project creation in the Octopus AI Assistant to help you get from zero to fully configured project quickly. 
Our default project prompts provide a starting point for what we believe great deployments look like, but we understand you will need variations on what we provide by default for the project to work in your environment. You can expand on the example prompts with variations to configure the project based on your requirements. For example, you can ask the Octopus AI Assistant to configure an additional environment, and to place the project in an existing project group: ```text Create an AWS Lambda project called "My Lambda App" in the project group "Banking". Create an environment called "QA". Include the "QA" environment in the project lifecycle before the "Production" environment. ``` You may want to modify the default steps in the deployment process: ```text Create an AWS Lambda project called "Gift Card" in the project group "Retail". Create an additional step in the deployment process called "Run smoke tests". The step should be a bash script and should test a HTTP endpoint returns a 200 status code. Add the step after the Deploy a Lambda step in the deployment process. Ensure the new step doesn't run in the Security environment. ``` Using the Octopus AI Assistant to combine our predefined project configurations with your organization-specific requirements means you can have a fully functioning project in minutes, rather than hours. # Getting started with Cloud Source: https://octopus.com/docs/octopus-cloud/getting-started-with-cloud.md ## Create an Octopus account \{#create-an-octopus-account} An Octopus account lets you manage your Octopus Cloud instances. You can register an account at [octopus.com/register](https://octopus.com/register). You can sign up with your existing Google, Microsoft, or GitHub account, or create a unique login for Octopus: 1. Enter your name. 2. Provide your email address and create a password. 3. On the next screen, verify your email address. 4. After you verify your email, you’ll be signed into your Octopus account.
After you create an Octopus account, you can create a new Octopus Cloud instance. ## Create a Cloud instance \{#create-a-cloud-instance} To create a new Octopus Cloud instance, make sure you’re signed in to your Octopus account: 1. Go to [octopus.com/free-signup](https://octopus.com/free-signup). 2. Enter the URL you’d like to use to access your instance. If that URL isn’t available, try another one. 3. Select the region where you’d like to host your instance. 4. Add your company name. 5. Review the terms of our customer agreement, privacy policy, and acceptable usage policy. 6. Click **Start using Octopus**. You'll see the account provisioning screen. Your Octopus Cloud instance should be ready within a minute. We’ll also email you when it’s ready to use. ## Access your Octopus Cloud instance \{#access-your-octopus-cloud-instance} You can access your Octopus Cloud instance using the URL you chose during registration. In this example URL, `your-url` is the part of the URL you provided: ```html https://your-url.octopus.app/app#/users/sign-in ``` ## Manage user access \{#manage-user-access} ### Invite users via Control Center \{#invite-users-via-control-center} To invite a user to your Octopus Cloud instance: 1. Navigate to your Cloud instance in [Control Center](https://billing.octopus.com/). 2. Click **User Access** in the left sidebar. 3. Click **Invite User**. 4. Enter the user’s details. 5. Select which role to give the user ([see role permissions below](#role-permissions)). 6. Click **Invite**. :::figure ![User Access page in Control Center](/docs/img/octopus-cloud/images/control-center-access-control.png) ::: #### Email invitation \{#email-invitation} The invited user will receive an email invitation. 
If they already have an [Octopus ID](/docs/security/authentication/octopusid-authentication) (Octopus Deploy account), they just need to click **Accept invite** in the email to gain access to the subscription and then click **Sign in** to view the Octopus instance. Otherwise, they will first need to **Register** a new account using the email address the invitation was sent to. #### Role permissions \{#role-permissions}

| | Cloud Subscription Owner | Cloud Subscription User (Contributor) | Cloud Subscription User (Base) |
| --- | --- | --- | --- |
| **Control Center**<br>(billing.octopus.com) | View Overview<br>Manage Billing<br>Manage Configuration<br>Manage User Access | View Overview | View Overview |
| **Octopus Instance**<br>(example.octopus.com) | “Octopus Managers” team<br>By default, the user has full permissions across all spaces. | “Space Managers” team<br>By default, the user has full permissions in the “Default” space only.<br>If you delete the “Default” space, the user will be added to the “Everyone” team. | “Everyone” team<br>By default, the user can sign in but can't view or do anything. |

### Manage user permissions in Octopus Cloud \{#manage-user-permissions-in-octopus-cloud} Invited users are only added to an Octopus Cloud instance after their first sign-in. To manage a newly invited user's permissions, you will need to ask them to sign in to your Octopus Cloud instance first. Octopus uses teams and user roles to manage permissions. After the invited user first signs in, they are automatically assigned to one of these teams: - “Octopus Managers” team (Cloud Subscription Owner) - “Space Managers” team (Cloud Subscription User (Contributor)) - “Everyone” team (Cloud Subscription User (Base)) By default, the “Everyone” team includes no user roles, so users assigned to this team can sign in to your Octopus Cloud instance but won’t have permission to see or do anything. :::div{.hint} To give an “Everyone” team user (Cloud Subscription User (Base)) the correct permissions, you can do one or both of the following in your Octopus Cloud instance: - Edit the “Everyone” team to include the minimum permissions all users need. - Add the user to other teams with the correct permissions after they first sign in. ::: To adjust a user's permissions, you can add them to other teams: 1. In your Octopus Cloud instance, go to **Configuration** (the cog icon in the bottom left sidebar). 2. Select **Teams**. 3. Create a new team or select an existing one, such as **Space Managers**. 4. Select **Add Member**. 5. Select the user from the dropdown list. 6. Click **Add**, then **Save**. Learn about best practices for [users, roles, and teams](/docs/best-practices/octopus-administration/users-roles-and-teams). :::figure ![Teams page in Octopus Cloud instance](/docs/img/octopus-cloud/images/cloud-instance-teams.png) ::: ## Set the maintenance window \{#set-the-maintenance-window} To keep Octopus Cloud running smoothly, we use outage windows to perform updates.
To minimize disruptions to your deployments, please pick a two-hour [maintenance window](/docs/octopus-cloud/maintenance-window) outside of your regular business hours. 1. Navigate to your Cloud instance in [Control Center](https://billing.octopus.com/). 2. Click **Configuration**. 3. Locate the Maintenance Window section and click **Change Window**. 4. Select a two-hour window. 5. Click **Submit** to save your new maintenance window. ## Change your password \{#change-password} To change your password for the Octopus instance and Octopus account: 1. Navigate to [Control Center](https://billing.octopus.com/). 2. In the top-right corner, click the dropdown menu next to your username. 3. Click **Profile**. 4. Click **Change Password**. 5. Enter your new password. 6. Confirm the new password, and click **Change password**. ## Reset your password \{#reset-password} To reset your password: 1. Go to [octopus.com/signin](https://octopus.com/signin). 2. Click **Forgot your password?** 3. Reset your credentials and sign into your Octopus account. ## Find your Octopus Cloud version \{#octopus-cloud-version} Octopus Cloud is always up to date with the latest version of Octopus Deploy. To check which version your instance is running: 1. Sign in to your Octopus Cloud instance. 2. In the bottom-left corner, look for the Octopus logo or your Gravatar (if set up). 3. Click the Octopus logo (or Gravatar) to open the menu. 4. The version appears under your name. ## Learn more \{#learn-more} Learn more about [managing Octopus subscriptions](/docs/getting-started/managing-octopus-subscriptions). 
# octopus account ssh Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-ssh.md Manage SSH Key Pair accounts in Octopus Deploy ```text Usage: octopus account ssh [command] Available Commands: create Create a SSH Key Pair account help Help about any command list List SSH Key Pair accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus account ssh [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use, reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account ssh list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Artifacts Source: https://octopus.com/docs/octopus-rest-api/examples/artifacts.md You can use the REST API to create and download [artifacts](/docs/projects/deployment-process/artifacts) in Octopus. Typical tasks can include: - [Upload Artifact to Existing Deployment](/docs/octopus-rest-api/examples/artifacts/create-and-upload-artifacts) - [Download Deployment Artifact](/docs/octopus-rest-api/examples/artifacts/download-deployment-artifacts) - [Download Runbook Artifact](/docs/octopus-rest-api/examples/artifacts/download-runbook-artifacts) # Octopus.Client Source: https://octopus.com/docs/octopus-rest-api/octopus.client.md Octopus.Client is an [open source](https://github.com/OctopusDeploy/OctopusClients) .NET library that makes it easy to write C# programs or PowerShell scripts that manipulate the [Octopus Deploy REST API](/docs/octopus-rest-api). 
Because the Octopus Deploy application itself is built entirely on the API, any programming language that can make HTTP requests to the API can do anything that could be done by a user of the application itself. Octopus.Client is [published on NuGet](https://www.nuget.org/packages/Octopus.Client). The package contains both a .NET Framework build and a .NET Standard build. The .NET Framework build targets 4.6.2, and the .NET Standard build is cross-platform and compatible with a [variety of runtimes](https://learn.microsoft.com/en-us/dotnet/standard/net-standard), including .NET Core, and can be used from PowerShell Core. :::div{.hint} Details for where to find the API and how to authenticate can be found in our [REST API overview](/docs/octopus-rest-api) page, and [Swagger documentation is also available](https://demo.octopus.app/swaggerui/index.html). ::: ## Octopus.Client Examples We have many examples showing how to use Octopus.Client in both our [API examples](/docs/octopus-rest-api/examples) and the [OctopusDeploy-API GitHub repository](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/Octopus.Client). # Working with Resources Source: https://octopus.com/docs/octopus-rest-api/octopus.client/using-resources.md You can load, modify and save resources using the different `Repository` classes provided in the Octopus.Client library. The following example retrieves a [deployment target](/docs/infrastructure/deployment-targets), renames it to `Test Server 1`, and then saves it:
PowerShell ```powershell $machine = $repository.Machines.Get("machines-1"); $machine.Name = "Test Server 1"; $repository.Machines.Modify($machine); ```
C# ```csharp // Sync var machine = repository.Machines.Get("machines-1"); machine.Name = "Test Server 1"; repository.Machines.Modify(machine); // Async var machine = await repository.Machines.Get("machines-1"); machine.Name = "Test Server 1"; await repository.Machines.Modify(machine); ```
The repository methods all make direct HTTP requests. There's no "session" abstraction or transaction support. # Build server integration Source: https://octopus.com/docs/packaging-applications/build-servers.md CI/CD refers to continuous integration and continuous deployment. A typical CI/CD pipeline involves a continuous integration server (or build server) and a continuous deployment server, such as Octopus. The continuous integration/build server compiles your code into one or more artifacts and runs tests against them. The continuous deployment server takes the compiled artifacts from a successful build and deploys them through the deployment pipeline, which might consist of the following environments: **Dev**, **Test**, and **Production**. ## A typical CI/CD pipeline {#typical-cicd-pipeline} A typical CI/CD pipeline with Octopus Deploy looks like this: 1. A developer commits code changes to version control. 1. The build server detects a change and performs the continuous integration build, which includes resolving dependencies, running unit tests, packaging the software and making it available in a [package repository](/docs/packaging-applications/package-repositories). 1. Octopus Deploy is notified of a new artifact in the package repository and executes the deployment process to create a release that is deployed to the **Dev** environment. 1. When a team member (perhaps a tester) wants to see what is in a particular release, they use Octopus to manually deploy a release to the **Test** environment. 1. When the team is satisfied with the quality of the release and they are ready for it to go to production, they use Octopus to promote the release from the **Test** environment to the **Production** environment. :::div{.hint} To learn more about how to package your software using your CI server of choice and deploy software to your specific deployment targets, please see our [End-to-End CI/CD pipeline tutorial](https://octopus.com/docs/guides).
::: ## Octopus build server integrations {#build-server-integrations} The following tools are available to integrate your continuous integration/build server with Octopus Deploy: - [AppVeyor](/docs/packaging-applications/build-servers/appveyor) - [Azure DevOps & Team Foundation Server](/docs/packaging-applications/build-servers/tfs-azure-devops) - [Bamboo](/docs/packaging-applications/build-servers/bamboo) - [BitBucket Pipelines](/docs/packaging-applications/build-servers/bitbucket-pipelines) - [Codefresh Pipelines](/docs/packaging-applications/build-servers/codefresh-pipelines) - [Continua CI](/docs/packaging-applications/build-servers/continua-ci) - [Github Actions](/docs/packaging-applications/build-servers/github-actions) - [Jenkins](/docs/packaging-applications/build-servers/jenkins) - [TeamCity](/docs/packaging-applications/build-servers/teamcity) Octopus supports uploading [Build information](/docs/packaging-applications/build-servers/build-information) from your build server, manually or with the use of one of the plugins, to Octopus Deploy. # Jenkins Pipeline projects Source: https://octopus.com/docs/packaging-applications/build-servers/jenkins/pipeline.md This page lists the arguments you can supply to the Octopus Jenkins Pipelines commands to run against your Octopus Deploy server. The Jenkins Pipeline support requires plugin version 3.0.0 or later and Jenkins version 2.190.1 or later. :::div{.warning} The `toolId` parameter refers to the **Name** of the Global Tool Configuration for Octopus CLI, available at **Manage Jenkins ➜ Global Tool Configuration**. The `serverId` parameter refers to the **Server ID** of the OctopusDeploy Plugin configuration, available at **Manage Jenkins ➜ Configure System**. ::: ## Pack {#pack} Step name: **_octopusPack_** _**octopusPack** allows you to create a package from files on disk during your pipeline execution_. 
| Parameters | Required | Description |
|-----------------|----------|-------------|
| `toolId` | Yes | The configured Octopus CLI tool to use. |
| `packageId` | Yes | The ID of the package. |
| `packageFormat` | Yes | The format of the package, `zip` or `nupkg`. |
| `sourcePath` | Yes | Path containing files and directories to include in the package. |
| `overwriteExisting` | No | Overwrite an existing package with the same name and version. Valid values are `true` or `false`. Defaults to `false`. |
| `includePaths` | No | Newline-separated paths of files to include. |
| `outputPath` | No | Path to write the final package to. Defaults to `.`. |
| `packageVersion` | No | Package version; defaults to a timestamp-based version. |
| `verboseLogging` | No | Turn on verbose logging. Valid values are `true` or `false`. |
| `additionalArgs` | No | Additional arguments to pass to the Octopus CLI [pack](/docs/octopus-rest-api/octopus-cli/pack) command. |

Example:

```groovy
octopusPack additionalArgs: '-author "My Company"',
    outputPath: './artifacts/',
    overwriteExisting: false,
    packageFormat: 'zip',
    packageId: 'OctoPetShop',
    packageVersion: '1.1.${BUILD_NUMBER}',
    sourcePath: './bin/Release/publish/',
    toolId: 'octocli',
    verboseLogging: false
```

## Push {#push}

Step name: **_octopusPushPackage_**

_**octopusPushPackage** allows you to push packages to the package repository in an Octopus Server_.

| Parameters | Required | Description |
|-----------------|----------|-------------|
| `toolId` | Yes | The configured Octopus CLI tool to use. |
| `serverId` | Yes | The configured Server ID of the target server to push the package to. |
| `spaceId` | Yes | The ID of the Space on the server to push the package to. |
| `packagePaths` | Yes | The path to the package. |
| `overwriteMode` | Yes | Valid values are `FailIfExists`, `OverwriteExisting` or `IgnoreIfExists`. |
| `verboseLogging` | No | Turn on verbose logging. Valid values are `true` or `false`. |
| `additionalArgs` | No | Additional arguments to pass to the Octopus CLI [push](/docs/octopus-rest-api/octopus-cli/push) command. |

Example:

```groovy
octopusPushPackage overwriteMode: 'FailIfExists',
    packagePaths: './artifacts/OctoPetShop.1.1.${BUILD_NUMBER}.zip',
    serverId: 'octopus-server',
    spaceId: 'Spaces-1',
    toolId: 'octocli'
```

Examples for the `packagePaths` parameter:

### Absolute path

The path to the package can be provided as an absolute path on the Jenkins server or Agent. `${WORKSPACE}` is the directory the job runs within.

- `packagePaths: "${WORKSPACE}/artifacts/Package.0.0.${BUILD_NUMBER}.zip"`
- `packagePaths: "/home/jenkins/workspace/artifacts/Package.0.0.${BUILD_NUMBER}.zip"`

### Relative path

The path is a relative path from the `WORKSPACE` directory.

- `packagePaths: "artifacts/Package.0.0.${BUILD_NUMBER}.zip"`

### Glob Patterns

The package selection can also be done using ANT glob patterns.

- `packagePaths: "artifacts/**/*.0.0.${BUILD_NUMBER}.zip"`
  - This will pick up all the packages, in all folders under the `artifacts` directory, with a name matching the `0.0` version and current build number.

### Multiple paths

The `packagePaths` parameter also supports multiple values from the above options separated by a `\n` character.

- `packagePaths: "artifacts/package1/Package1.0.0.${BUILD_NUMBER}.zip\nartifacts/package2/Package2.0.0.${BUILD_NUMBER}.zip"`

## Push package info {#build-information}

Step: **_octopusPushBuildInformation_**

_**octopusPushBuildInformation** allows you to push build information to an Octopus Server_.

| Parameters | Required | Description |
|-----------------|----------|-------------|
| `toolId` | Yes | The configured Octopus CLI tool to use. |
| `serverId` | Yes | The configured Server ID of the target server to push the build information to. |
| `spaceId` | Yes | The ID of the Space on the server to push the build information to. |
| `packageId` | Yes | The ID of the package(s) to push the version information for; multiple values can be provided separated by `\n`. |
| `commentParser` | Yes | Valid values are `GitHub` and `Jira`. |
| `overwriteMode` | Yes | Valid values are `FailIfExists`, `OverwriteExisting` or `IgnoreIfExists`. |
| `gitUrl` | No | The URL of the repository for the package(s). |
| `gitBranch` | No | The branch that was checked out in the repository. Available via `git checkout`. |
| `gitCommit` | No | The commit ID of the most recent commit on the branch. Available via `git checkout`. |
| `verboseLogging` | No | Turn on verbose logging. Valid values are `true` or `false`. |
| `additionalArgs` | No | Additional arguments to pass to the Octopus CLI [build-information](/docs/octopus-rest-api/octopus-cli/build-information) command. |

Example:

```groovy
octopusPushBuildInformation toolId: 'octocli',
    serverId: 'octopus-server',
    spaceId: 'Spaces-1',
    commentParser: 'GitHub',
    overwriteMode: 'FailIfExists',
    packageId: 'OctoPetShopService',
    packageVersion: '1.2.${BUILD_NUMBER}',
    verboseLogging: false,
    additionalArgs: '--debug',
    gitUrl: 'https://github.com/OctopusSamples/OctoPetShop',
    gitBranch: '${GIT_BRANCH}',
    gitCommit: '${GIT_COMMIT}'
```

Due to _limitations in Jenkins Pipelines_, you will need to pass the *Git URL*, *Git Branch* and *Git Commit* values to `octopusPushBuildInformation`. Including these values allows the build information to provide correct URL links back to the source. For a pipeline sourced from SCM, set the parameters to `gitUrl: '${GIT_URL}', gitBranch: '${GIT_BRANCH}', gitCommit: '${GIT_COMMIT}'`; the `checkoutVars` script is not required.
For an inline pipeline definition, configure the step as:

```groovy
steps {
    script {
        def checkoutVars = checkout([$class: 'GitSCM', userRemoteConfigs: [[url: 'https://github.com/OctopusSamples/RandomQuotes-Java.git']]])
        env.GIT_URL = checkoutVars.GIT_URL
        env.GIT_BRANCH = checkoutVars.GIT_BRANCH
        env.GIT_COMMIT = checkoutVars.GIT_COMMIT
    }
    octopusPushBuildInformation commentParser: 'GitHub',
        overwriteMode: 'FailIfExists',
        packageId: 'randomquotes',
        packageVersion: "1.0.${BUILD_NUMBER}",
        serverId: "octopus-server",
        spaceId: "Spaces-2",
        toolId: 'Default',
        gitUrl: "${GIT_URL}",
        gitBranch: "${GIT_BRANCH}",
        gitCommit: "${GIT_COMMIT}"
}
```

## Create release {#create-release}

Step: **_octopusCreateRelease_**

_**octopusCreateRelease** allows you to create a release in an Octopus Server_.

| Parameters | Required | Description |
|----------------------|----------|-------------|
| `toolId` | Yes | The configured Octopus CLI tool to use. |
| `serverId` | Yes | The configured Server ID of the target server to create the release in. |
| `spaceId` | Yes | The ID of the space on the server to create the release in. |
| `project` | Yes | The ID of the project to create the release in. |
| `releaseVersion` | Yes | The version number for the release. |
| `channel` | No | The name of the target channel. Defaults to the `Default` channel. |
| `packageConfigs` | No | Collection of package versions to set when creating the release. |
| `defaultPackageVersion` | No | The default version to use for packages associated with the release. |
| `deployThisRelease` | No | Deploy the release after creation. Valid values are `true` or `false`. Defaults to `false`. |
| `waitForDeployment` | No | Wait for the deployment to complete before continuing. Valid values are `true` or `false`. Defaults to `false`. |
| `cancelOnTimeout` | No | Cancel the deployment after the `waitForDeployment` time. Valid values are `true` or `false`. Defaults to `false`. |
| `tenant` | No | The tenant to deploy the release to. |
| `tenantTag` | No | The tenant tag to deploy the release to. |
| `deploymentTimeout` | No | How long to wait for the deployment. Format is `HH:mm:ss`. Default is `00:10:00`. |
| `environment` | Conditional | The environment to deploy the release to. Required if `deployThisRelease` is `true`. |
| `jenkinsUrlLinkback` | No | Include a link to the Jenkins build that created the release. Valid values are `true` or `false`. Default is `false`. |
| `releaseNotes` | No | Include release notes in the release. Valid values are `true` or `false`. Default is `false`. |
| `releaseNotesSource` | No | Valid values are `file` or `scm`. |
| `releaseNotesFile` | Conditional | The file path for release notes; required if `releaseNotesSource` is `file`. |
| `verboseLogging` | No | Turn on verbose logging. Valid values are `true` or `false`. |
| `additionalArgs` | No | Additional arguments to pass to the Octopus CLI [create-release](/docs/octopus-rest-api/octopus-cli/create-release) command. |

Example:

```groovy
octopusCreateRelease serverId: 'octopus-server',
    spaceId: 'Spaces-1',
    project: 'Random Quotes',
    releaseVersion: '2.3.${BUILD_NUMBER}',
    toolId: 'octocli',
    packageConfigs: [[packageName: 'Nuget.CommandLine', packageReferenceName: 'NugetCLI', packageVersion: '5.5.1']],
    deployThisRelease: true,
    cancelOnTimeout: false,
    deploymentTimeout: '00:15:00',
    environment: 'test',
    tenant: 'The Tenant',
    tenantTag: 'importance/high',
    jenkinsUrlLinkback: true,
    releaseNotes: true,
    releaseNotesSource: 'scm'
```

## Deploy release {#deploy-release}

Step: **_octopusDeployRelease_**

_**octopusDeployRelease** allows you to deploy an existing release to an environment in an Octopus Server_.

| Parameters | Required | Description |
|----------------------|----------|-------------|
| `toolId` | Yes | The configured Octopus CLI tool to use. |
| `serverId` | Yes | The configured Server ID of the target server to deploy the release in. |
| `spaceId` | Yes | The ID of the Space on the server to deploy the release in. |
| `project` | Yes | The ID of the project to deploy the release for. |
| `environment` | Yes | The environment to deploy the release to. |
| `releaseVersion` | Yes | The version number for the release. |
| `cancelOnTimeout` | No | Cancel the deployment after the `waitForDeployment` time. Valid values are `true` or `false`. Defaults to `false`. |
| `tenant` | No | The tenant to deploy the release to. |
| `tenantTag` | No | The tenant tag to deploy the release to. |
| `waitForDeployment` | No | Wait for the deployment to complete before continuing. Valid values are `true` or `false`. |
| `deploymentTimeout` | No | How long to wait for the deployment. Format is `HH:mm:ss`. Default is `00:10:00`. |
| `variables` | No | Set prompted variable values. Format is `key1=value1\nkey2=value2`. |
| `verboseLogging` | No | Turn on verbose logging. Valid values are `true` or `false`. |
| `additionalArgs` | No | Additional arguments to pass to the Octopus CLI [deploy-release](/docs/octopus-rest-api/octopus-cli/deploy-release) command. |

Example:

```groovy
octopusDeployRelease toolId: 'octocli',
    serverId: 'octopus-server',
    spaceId: 'Spaces-1',
    project: 'OctoPetShop',
    environment: 'test',
    releaseVersion: '1.2.${BUILD_NUMBER}',
    deploymentTimeout: '00:05:00',
    waitForDeployment: false,
    cancelOnTimeout: true
```

# Azure DevOps

Source: https://octopus.com/docs/packaging-applications/build-servers/tfs-azure-devops.md

Octopus Deploy integrates with Azure DevOps to provide a fully automated build and deployment pipeline. This section provides information about how to integrate Octopus Deploy and the various versions of Microsoft's build server.

:::figure
![](/docs/img/packaging-applications/build-servers/tfs-azure-devops/images/5672461.png)
:::

## Supported Azure DevOps versions

Depending on the version of Azure DevOps you are using, the recommended approach for a successful integration with Octopus may vary.
Use the chart below to pick the right approach for your build server version.

| Version | Recommended approach | Notes |
| --------------------------- | ---------------------------------------- | ---------------------------------------- |
| Azure DevOps Services | [Octopus Extension for Azure DevOps Services](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension) | This is the hosted version of Azure DevOps. Our integration will always aim to be compatible with this offering. |
| Azure DevOps Server | [Octopus Extension for Azure DevOps Server](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension) | This is the on-premises offering for Azure DevOps. We will try our best to be compatible with this offering. |

:::div{.hint}
Microsoft has renamed Visual Studio Team Foundation Server (TFS) to Azure DevOps Server with the introduction of Azure DevOps Server 2019. The guidance provided in this document applies to supported versions of TFS. For more information about our support for TFS, see [Azure DevOps and TFS Extension Version Compatibility](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/extension-compatibility).
:::

## Learn more

- Generate an Octopus guide for [Azure DevOps and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?buildServer=Azure%20DevOps%2FTFS).

# Docker Container Registry

Source: https://octopus.com/docs/packaging-applications/package-repositories/docker-registries.md

A [Docker Registry](https://docs.docker.com/registry/) is treated in Octopus Deploy as a feed that supplies images that are run as containers on a Docker Engine host.

:::div{.success}
See an example deployment using Docker Registries in our guide: [Docker run with networking](/docs/deployments/docker/docker-run-with-networking).
:::

## Using Docker registries in Octopus Deploy

In Octopus Deploy, Docker registries are treated very similarly to [Package Repositories](/docs/packaging-applications/package-repositories), and Images are treated very similarly to Packages. Octopus Deploy supports the Docker Registry [Version 1](https://docs.docker.com/v1.6/reference/api/registry_api/) and [Version 2](https://docs.docker.com/registry/spec/api/) API specifications as outlined in the Docker reference files. You can access Docker Registries with or without credentials, depending on the registry configuration. You can use one of the hosted public registries, like [Docker Hub](https://hub.docker.com/), or you can host your own [Private Registry](/docs/packaging-applications/package-repositories/docker-registries).

### How Octopus Server and deployment targets integrate with Docker Registries

The Docker Registries you configure need to be accessed by both the Octopus Server and your [deployment targets](/docs/infrastructure). The Octopus Server will contact your registry to obtain information on available images while you design and maintain your projects. During deployment, the `docker pull` command will be executed on the deployment targets themselves, and they will pull the Images directly from the Docker Registry.

## Docker registry API version discovery {#version-discovery}

When you add your Docker Registry as a feed in Octopus Deploy, Octopus will attempt to detect and connect using the appropriate version based on the specifications outlined in the relevant Docker API documentation. If your registry does not implement the API correctly, the connection may fail. We advise you to click _Save and Test_ once you have entered the registry details, to allow the version detection to take place and to confirm that your credentials are correct.
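The version detection amounts to inspecting HTTP response headers. As a rough illustration only (a hypothetical Python helper, not Octopus Deploy's actual implementation), assuming the response headers have already been fetched from the registry:

```python
def detect_registry_api_version(headers):
    """Infer the Docker Registry API version from HTTP response headers.

    Hypothetical sketch of the header-based detection described in this
    section; the header names come from the Docker Registry API docs.
    """
    normalized = {key.lower(): value for key, value in headers.items()}
    # v2 registries respond with Docker-Distribution-API-Version: registry/2.0
    if normalized.get("docker-distribution-api-version", "").startswith("registry/2.0"):
        return "v2"
    # v1 registries respond to /_ping with an X-Docker-Registry-Version header
    if "x-docker-registry-version" in normalized:
        return "v1"
    return None

print(detect_registry_api_version({"Docker-Distribution-API-Version": "registry/2.0"}))  # v2
```

If neither header is present, the registry most likely does not implement either API correctly, which matches the failure mode described above.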
According to the Docker API documentation, the [version 1](https://docs.docker.com/v1.6/reference/api/registry_api/) API should have a `/_ping` endpoint which will respond with an `X-Docker-Registry-Version` HTTP header in the response. Similarly, the [version 2](https://docs.docker.com/registry/spec/api/) API expects a `Docker-Distribution-API-Version` HTTP header with a value of `registry/2.0`. Both of these endpoints are expected to be located at an absolute path of either `/v1` or `/v2` from the host.

:::div{.success}
**Accessing Docker registries from different security zones**

It is possible that the URI to the Docker Registry will be different for the Octopus Server and the deployment targets. You can use the *Registry Path* field when configuring the Docker Registry in Octopus to provide an alternative URI to use on the deployment target.
:::

### Working with Docker container images in Octopus

Docker images with the same name are grouped together and referred to (in Docker terminology) as a **repository**. This is very similar to how Octopus, and other package managers like NuGet, treat Packages with the same Name or ID. When you configure a Docker step in Octopus, you choose an Image by its Name, just like you would choose a Package ID for any of the other [supported packages](/docs/packaging-applications/#supported-formats).

:::figure
![](/docs/img/packaging-applications/package-repositories/docker-registries/images/5865827.png)
:::

When you create a release in Octopus, you need to choose the "version" of the Image(s) you want as part of the release. Octopus will load the Tags for the Image(s) and attempt to parse them as an [Octopus Version](https://oc.to/OctopusVersionRegex/).
:::figure
![](/docs/img/packaging-applications/package-repositories/docker-registries/images/5865828.png)
:::

:::div{.hint}
**Container images are downloaded directly by the Deployment Target or Worker**

Octopus Deploy does not currently support functionality to push Images from the Octopus Server to the deployment targets in the same way that it does with other [supported packages](/docs/packaging-applications/#supported-formats). That being said, the layered architecture of images allows your deployment targets to retrieve only those pieces that have changed from previous versions that are locally available, which is behavior built in to the Docker Engine.
:::

## Private registry {#private-registry}

The simplest way to host your own private v2 Docker Registry is to run a container from the official registry image:

```bash
docker run -d -p 5000:5000 --name registry registry:2
```

This image supports custom storage locations, certificates for HTTPS, and authentication. For more details on setting up the registry, check out the [official docs](https://docs.docker.com/registry/deploying/).

## Other registry options

There are many other options for private registries, such as self-hosting through [Docker Trusted Registry](https://docs.docker.com/docker-trusted-registry/) or [Artifactory](https://jfrog.com/artifactory/), or using a cloud provider like [Azure](https://azure.microsoft.com/en-au/services/container-registry/), [Cloudsmith](https://www.cloudsmith.com), [AWS](https://aws.amazon.com/ecr/) or [Quay](https://quay.io/).
We have provided further details on setting up an Octopus Feed to the following Docker registries:

- [Docker Hub](/docs/packaging-applications/package-repositories/guides/container-registries/docker-hub)
- [Azure Container Services](/docs/packaging-applications/package-repositories/guides/container-registries/azure-container-services)
- [Amazon EC2 Container Services](/docs/packaging-applications/package-repositories/guides/container-registries/amazon-ec2-container-services)
- [Cloudsmith](/docs/packaging-applications/package-repositories/guides/cloudsmith-feed)

### Known limitations

In the current version of ProGet (version 4.6.7 (Build 2)), their Docker Registry Feed does not expose the full Docker API and is missing the [_catalog endpoint](https://docs.docker.com/registry/spec/api/#/listing-repositories), which is required to list the available packages for release selection. It has been indicated that this may change in a future release.

Authentication to [GitLab container registries using a GitLab deploy token](https://github.com/OctopusDeploy/Issues/issues/8156) is not supported.

Although a search feature is available in the v1 registry API, at the time of writing there is no built-in search ability in the v2 specification. There are ongoing discussions around an open [GitHub ticket](https://github.com/docker/distribution/issues/206) in the Docker registry GitHub repository; however, there is no clear indication whether one will be provided, due to changes in the philosophy behind the registry's responsibilities. The current workaround, and the one that Octopus Deploy uses when a v2 Docker registry is provided, is to retrieve the full catalog via the [/v2/\_catalog](https://docs.docker.com/registry/spec/api/#/listing-repositories) endpoint and search for the required image locally.
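The workaround above boils down to client-side filtering of the catalog listing. A minimal Python sketch (a hypothetical helper, not Octopus Deploy's actual code), assuming the JSON body of `GET /v2/_catalog` has already been retrieved:

```python
def search_catalog(catalog_body, search_term):
    """Filter a /v2/_catalog response locally.

    The v2 registry API has no search endpoint, so the full
    repository list is fetched and matched in memory.
    """
    term = search_term.lower()
    return [repo for repo in catalog_body.get("repositories", []) if term in repo.lower()]

# Example catalog body, shaped like a GET /v2/_catalog response:
catalog = {"repositories": ["library/nginx", "acme/guestbook", "acme/guestbook-api"]}
print(search_catalog(catalog, "guestbook"))  # ['acme/guestbook', 'acme/guestbook-api']
```

Note that this approach requires downloading the entire catalog, which can be slow against registries hosting very many repositories.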
## Troubleshooting registry connections

If your Octopus Deploy instance is having problems trying to connect to your Docker Registry when running the **Save and Test** operation, it may be failing for reasons outside the control of Octopus Deploy. Try to connect to your registry directly through the browser from the same machine that Octopus is hosted on. Use the feed URL you provided and ensure that either `/v1` or `/v2` is appended to the end of the path, depending on which version of the Docker Registry API you are running. If the connection is valid, you should receive a `200` response, possibly after a user auth challenge (see the API details above under [Docker registry API version discovery](#version-discovery)). If this does not occur, you may have issues with your registry or network which you will need to fix before using the registry through Octopus Deploy.

## Learn more

- Generate an Octopus guide for [Docker Registries and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?packageRepository=Docker%20Registry).
- [Docker blog posts](https://octopus.com/blog/tag/docker/1).
- [Linux blog posts](https://octopus.com/blog/tag/linux/1).

# Artifactory container registry

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/container-registries/artifactory-container-registry.md

Artifactory offers both self-hosted and cloud instances, both of which are capable of hosting [Docker registries](https://jfrog.com/help/r/jfrog-artifactory-documentation/jfrog-container-registry). The process for adding a Docker registry is the same for either type.

## Adding Artifactory as an Octopus External Feed

To use an Artifactory Docker registry in Octopus Deploy, create an external feed with the following settings:

- **Feed Type:** Docker Container Registry
- **Name:** Artifactory-Docker (or anything else that makes sense to you)
- **URL:** Artifactory registry URLs are constructed in 3 parts:
  - The base instance URL: e.g. `https://my-company.jfrog.io/artifactory`
  - The Docker API path: `/api/docker`
  - The repository name: e.g. `my-local-repo`

  The example values above would result in the value `https://my-company.jfrog.io/artifactory/api/docker/my-local-repo` for use in the **URL** field.
- **Registry Path:** Artifactory registry paths are constructed in 2 parts:
  - The Artifactory instance URL: e.g. `my-company.jfrog.io`
  - The repository name: e.g. `my-local-repo`

  The example values above would result in the value `my-company.jfrog.io/my-local-repo` for use in the **Registry Path** field.
- **Credentials:** By default, Artifactory requires a valid username and password/[access token](https://jfrog.com/help/r/jfrog-platform-administration-documentation/access-tokens) combination to access the registry. However, anonymous authentication for reading from a registry [can be enabled](https://jfrog.com/help/r/how-to-perform-anonymous-pulls-but-require-authentication-for-pushing-to-a-docker-repository) with additional configuration in your Artifactory instance.

![Artifactory Registry Feed](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/artifactory-docker-feed.png)

# GitLab Maven repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/maven-repositories/gitlab-maven-feed.md

GitLab creates a Maven Registry for each Project or Group. To add the Maven Registry to Octopus Deploy as an external feed, you will first need to get the Project or Group ID.

Project Id

:::figure
![GitLab Project Id](/docs/img/packaging-applications/package-repositories/guides/images/gitlab-project-id.png)
:::

Group Id

:::figure
![GitLab Group Id](/docs/img/packaging-applications/package-repositories/guides/images/gitlab-group-id.png)
:::

## Adding a GitLab Maven repository as an Octopus External Feed

Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and select the `Maven Feed` Feed type.
Give the feed a name, and in the URL field enter the HTTP/HTTPS URL of the feed for your GitLab Project or Group in the format:

Project: `https://your.gitlab.url/api/v4/projects/[project id]/packages/maven`

Group: `https://your.gitlab.url/api/v4/groups/[group id]/-/packages/maven`

Replace the placeholders in the URLs above with your GitLab instance URL and the Project or Group ID.

:::figure
![GitLab Maven Feed](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/gitlab-maven-feed.png)
:::

Optionally, add Credentials if they are required.

# NuGet repositories

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/nuget-repositories.md

This section provides instructions on how to set up a number of NuGet repositories from third parties as external feeds for use within Octopus.

# Azure DevOps and TFS package management

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/nuget-repositories/tfs-azure-devops.md

With Azure DevOps and TFS package management, Octopus can consume either v2 or v3 NuGet feeds. Learn more about [Azure DevOps or TFS Package Management](https://www.visualstudio.com/en-us/docs/package/overview).

## Adding an Azure DevOps NuGet feed as an Octopus External Feed

If you are using Azure DevOps package management, Octopus can consume either the v2 or v3 NuGet feeds.

- To connect to the v3 URL, you must use a [Personal Access Token](https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate) (PAT) in the password field. The username field is not checked, so you can put anything in here as long as it is not blank. Ensure that the token has (at least) the *Packaging (read)* scope.
- To connect to the v2 URL, you can use either [alternate credentials or a Personal Access Token](https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate) in the password field.
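To illustrate the difference between the two feed flavors, organization-scoped Azure Artifacts NuGet feed URLs generally follow the patterns sketched below. These URL patterns are our assumption based on common Azure Artifacts conventions; always check your feed's **Connect to feed** page in Azure DevOps for the authoritative URL.

```python
def azure_devops_nuget_feed_url(organization, feed, api_version="v3"):
    """Build an organization-scoped Azure Artifacts NuGet feed URL.

    Assumed URL patterns (verify against your feed's "Connect to feed"
    dialog): the v3 URL ends in index.json, the v2 URL does not.
    """
    base = f"https://pkgs.dev.azure.com/{organization}/_packaging/{feed}/nuget"
    if api_version == "v3":
        return f"{base}/v3/index.json"
    return f"{base}/v2"

print(azure_devops_nuget_feed_url("my-org", "my-feed"))
# https://pkgs.dev.azure.com/my-org/_packaging/my-feed/nuget/v3/index.json
```

Project-scoped feeds include the project segment after the organization, so the exact shape depends on how your feed was created.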
## Adding a TFS NuGet feed as an Octopus External Feed

If you are using TFS Package Management, Octopus can consume either the v2 or v3 NuGet feeds. Use a user account's username and password to authenticate. Although the TFS documentation states that a Personal Access Token can be used, we have not had success authenticating with one using `NuGet.exe`.

# Manual intervention and approval step

Source: https://octopus.com/docs/projects/built-in-step-templates/manual-intervention-and-approvals.md

While fully automated deployment is a great goal, there are times when a human needs to be involved in the deployment process. For instance:

- To provide sign off/approval before a deployment proceeds.
- To manually check the homepage of a newly deployed site works before making it live.
- To perform a database upgrade or update some infrastructure in an environment where you're not allowed to automate the steps (e.g. you have to deliver your database changes to a DBA to be manually reviewed and run).
- To receive sign off/approval after a deployment completes.

The **Manual intervention step** is a step that can be added to deployment processes to pause the deployment and wait for a member of a specified team to either allow the deployment to proceed or abort it.

:::div{.hint}
Manual interventions result in either a success or failure outcome based on the user's input. Subsequent steps evaluate this outcome according to their run conditions. By default, the run condition is set to "Success: only run when previous steps succeed." This means manual interventions can prevent these steps from executing, causing the deployment to fail. However, if "Always Run" is selected for subsequent steps, they will proceed regardless of the manual intervention outcome. For steps with the condition "Variable: only run when the variable expression is true," the manual intervention's outcome must be included in the variable expression to determine whether the step should run.
:::

[Getting Started - Manual Intervention](https://www.youtube.com/watch?v=ePQjCClGfZQ)

## Add a manual intervention step

Manual intervention steps are added to deployment processes in the same way as other steps.

1. Navigate to your [project](/docs/projects).
2. Click **Process** and **Add step** to add a step to an existing process. Alternatively, if this is a new deployment process, click the **Create process** button.
3. Find the **Manual Intervention Required** step and click **Add step**.
4. Give the step a short memorable name.
5. Provide instructions for the user to follow. For instance, "*Ensure traders are aware of the deployment.*"
6. Select which teams are responsible for the step. Note, if you don't specify a team, anybody with permission to deploy the project can perform the manual intervention. Specifying a team makes the step a required step that cannot be skipped.
7. You can set conditions to determine when the step should run. For instance:
   - Only run the manual intervention for specific environments.
   - Run the manual intervention based on the status (success or failure) of the previous step.
   - Wait for the previous step to complete.
   - Run based on the value of a variable expression.
8. Save the deployment process.

## Assigning manual interventions

When a deployment is executing and a manual step is encountered, the deployment will show a status of **Waiting**. An interruption will appear at the top of the deployment summary.

:::figure
![Waiting Status](/docs/img/projects/built-in-step-templates/images/waiting-status.png)
:::

You can click **Show details** to view the instructions. If you are in the team of users that can take responsibility for the interruption, you'll also be able to assign the interruption to yourself by clicking **Assign to me**.

:::div{.hint}
Interruptions can only be assigned to one person at a time to prevent two people from accidentally performing the manual step.
:::

When the interruption has been assigned to you, you can perform the action in the instructions, and then choose to either **Proceed** (allow the deployment to continue) or **Abort** (fail and stop the deployment from continuing).

When aborting a deployment, it's a good idea to write a reason into the **Notes** field, so that the rest of the team can see why the deployment was aborted.

The tasks page, under the "Needs Approval" tab, contains a list of deployments pending manual intervention. In addition to the deployment page, you can **Assign**, **Proceed**, and **Abort** deployments from this list.

## Output variables

When a manual step is completed, details of the interruption are saved as variables that can be used in other steps, including [email](/docs/projects/built-in-step-templates/email-notifications) templates. *Step Name* below refers to the name given to the manual step. For example "*Ensure traders are aware of the deployment*".

| Variable name | Contains | Example value |
| --- | --- | --- |
| `Octopus.Action[Step Name].Output.Manual.Notes` | The contents of the *Notes* field from the interruption form | *Checked with Rick, got the all-clear; Michelle is out at a meeting.* |
| `Octopus.Action[Step Name].Output.Manual.Approved` | Indicates if the step was approved | *True* |
| `Octopus.Action[Step Name].Output.Manual.ResponsibleUser.Id` | The user ID of the user who submitted the interruption form | *users-237* |
| `Octopus.Action[Step Name].Output.Manual.ResponsibleUser.Username` | The username of the user who submitted the interruption form | *j_jones* |
| `Octopus.Action[Step Name].Output.Manual.ResponsibleUser.DisplayName` | The display name of the user who submitted the interruption form | *Jamie Jones* |
| `Octopus.Action[Step Name].Output.Manual.ResponsibleUser.EmailAddress` | The email address of the user who submitted the interruption form | *jamie.jones@example.com* |

## Evaluating manual intervention output in following steps

If you want to control subsequent steps based on the outcome of the manual intervention step, you can use "Variable: only run when the variable expression is true", and use the `Octopus.Deployment.Error` variable in the condition. For example:

```
#{unless Octopus.Deployment.Error}RESULT IF MANUAL INTERVENTION PROCEEDED#{/unless}
```

or

```
#{if Octopus.Deployment.Error}RESULT IF MANUAL INTERVENTION WAS ABORTED#{/if}
```

## Learn more

- [Advanced manual approvals](/docs/deployments/databases/common-patterns/manual-approvals)
- [Automated approvals](/docs/deployments/databases/common-patterns/automatic-approvals)
- [Automated approval sample](https://samples.octopus.app/app#/Spaces-202/projects/octofx/deployments/process)
- [Automatic approvals for your database deployments](https://octopus.com/blog/autoapprove-database-deployments)
- [Building trust in an automated database deployment process](https://octopus.com/blog/building-trust-in-automated-db-deployments)

# Performance

Source: https://octopus.com/docs/projects/deployment-process/performance.md

We've built Octopus to enable reliable and repeatable deployments, but that doesn't mean it has to be slow. Octopus can scale with you as you grow. Octopus is a complex system with a core component allowing you to run your own custom scripts. We work hard to ensure that all the parts we control work efficiently, leaving as many resources as possible to run your deployments. That being said, there are many things you can do to ensure your Octopus deployments run quickly. This page is intended to help you tune and maintain your deployment processes and troubleshoot problems as they occur.

:::div{.hint}
Want to tune your Octopus Server for optimum performance? Read our [detailed guide on optimizing your server](/docs/administration/managing-infrastructure/performance).
:::

## Considerations

By the time your deployment starts, the Octopus HTTP API and database are no longer the bottleneck.
The key concerns are now:

- The throughput and reliability of the connection from Octopus Server to your deployment targets.
- The speed and reliability of your deployment targets.
- The load your deployment targets are under while the deployment is taking place.
- The number of steps in your deployment process.
- The size and number of packages you are deploying.
- How your packages are acquired/transferred to your deployment targets.
- How many packages you keep in your package feed and how you configure retention policies.
- The amount and size of log messages you write during deployment.
- How many deployment targets acquire packages in parallel.
- How many deployment targets you deploy to in parallel.
- Whether your steps run serially (one-after-the-other) or in parallel (at the same time).
- How much of the work in the deployment steps is done on the Octopus Server.
- The number and size of your variables.
- Other processes on the deployment target interfering with your deployment.

## Tips

We don't offer a one-size-fits-all approach to optimizing your deployments using Octopus; every deployment scenario is unique. Instead, we recommend taking an experimental approach to optimization: measure-then-cut. Record your deployments, make an adjustment, then measure again, and so on. These tips should give you enough information to get started.

### Optimize the connection to your deployment targets {#optimize-connection-to-targets}

If you have a reliable, high-throughput connection between your Octopus Server and deployment targets, your deployments can go fast. Unfortunately, the opposite is also true:

- Low throughput connections make package acquisition and deployment orchestration slow.
- An unreliable connection can make deployments slow through dropped connections and unnecessary retries.

A reliable connection to your deployment targets is the foundation of reliable and fast deployments.
If low bandwidth/throughput is genuinely unavoidable, you may want to consider a way of acquiring your packages prior to the deployment starting, leaving the bandwidth available for deployment orchestration.

### Optimize your deployment targets {#optimize-targets}

Fast and reliable deployment targets are also a foundation for fast and reliable deployments. This is one area where every deployment is different, but the key considerations are similar:

- If your deployment does a lot of work on the disk(s), make sure your disks have sufficient throughput/IOPS.
- If your deployment does a lot of work in the CPU, make sure your CPU has enough throughput/cores.
- If your deployment does anything, make sure your deployment target isn't already saturated running your applications, or choose a time of lower usage.

:::div{.hint}
If a particular operation seems slow during deployment, test that single operation on your deployment target without Octopus in the mix. Octopus adds as little overhead as possible to your deployments, so there's a good chance that operation is slow because of some kind of bottleneck on the deployment target itself.
:::

### Reduce load on your deployment targets during deployment {#reduce-target-load}

If the applications running on your deployment targets are busy, this will slow down your deployment. Similarly, when you run a deployment it will take resources away from your running applications. Octopus does not perform any kind of throttling on the deployment target - it will attempt to run your deployment process on your targets as fast as possible.

One of the best ways to reduce load on your deployment targets is to temporarily remove them from the active pool of servers. For example, with web applications you can do this by removing the server from your load balancer, perhaps using a [rolling deployment](/docs/deployments/patterns/rolling-deployments-with-octopus).
If you don't want to take this kind of approach, you can safely deploy your application to an active server, but you should take some time to understand the impact this has on your running applications and the speed of your deployments.

### Optimize the size of your deployment process {#optimize-size-of-process}

By its very nature, each step in your deployment process comes with an overhead. A deployment process with more steps can be easier to understand at a high level and easier to manage over time. However, more steps will result in more:

- System variables
- Communication overhead
- Startup/teardown cost
- Contention on Octopus Server resources

A deployment process with a single giant step might be the most efficient approach in your scenario. Imagine a single complex step that deploys all the required packages, running a single hand-crafted script to complete your deployment all in one hit. This approach would eliminate many of the above costs, but at the expense of making your deployment process harder to understand and maintain over time. There is typically a happy balance you can strike for each of your projects.

The most common performance-related problem is having too many steps, where "too many" depends on your specific situation. We typically consider an average project to use 10-20 steps, but many customers deploy projects with 50-80 steps. If your projects have hundreds of steps, you may want to consider adjusting your deployment processes.

- If your project could be broken down into logical components which ship on their own cadence, make each component its own project.
- If your project could be broken down into logical components which ship at the same time, consider breaking your deployment into multiple logical projects and [coordinate their deployments](/docs/projects/coordinating-multiple-projects).
- If your project cannot be broken down logically, consider combining some of your steps together into a single step.
For example, you may be able to run your [custom scripts](/docs/deployments/custom-scripts) as a pre- or post-deployment activity.

### Consider the size of your packages {#package-size}

Size really does matter when it comes to your packages:

- Larger packages require more network bandwidth to transfer to your deployment targets.
- Larger packages take more resources to unpack on your deployment targets.
- When using [delta compression for package transfers](/docs/deployments/packages/delta-compression-for-package-transfers), larger packages require more CPU and disk IOPS on the Octopus Server to calculate deltas - this is a tradeoff you can determine through testing.
- Larger packages usually result in larger file systems on your deployment targets, making any steps which scan files much slower. For example, [substituting variables in templates](/docs/projects/steps/configuration-features/substitute-variables-in-templates) can be configured to scan every file extracted from your package.

Consider whether one large package is better in your scenario, or whether you could split your application into multiple smaller packages, one for each deployable component.

### Consider how you transfer your packages {#package-transfer}

Octopus provides two primary methods for transferring your packages to your deployment targets:

- Push from the Octopus Server to your targets.
- Pull from an external feed to your targets.

Each option provides different performance benefits, depending on your specific scenario:

- If network bandwidth is the limiting factor, consider:
  - pushing the package from the Octopus Server to your targets using [delta compression for package transfers](/docs/deployments/packages/delta-compression-for-package-transfers); or
  - using a custom package feed in the same network as your deployment targets and downloading the packages directly on the agent.
- If network bandwidth is not a limiting factor, consider downloading the packages directly on the agent.
This alleviates a lot of resource contention on the Octopus Server.
- If Octopus Server CPU and disk IOPS are a limiting factor, avoid using [delta compression for package transfers](/docs/deployments/packages/delta-compression-for-package-transfers). Instead, consider downloading the packages directly on the agent. This alleviates a lot of resource contention on the Octopus Server.

### Consider retention policies for your package feeds {#package-retention}

Imagine you keep every package you've ever built or deployed. Over time your package feed will get slower and slower to index, query, and stream packages for your deployments. If you are using the [built-in feed](/docs/packaging-applications/package-repositories/#choose-right-repository), you can configure [retention policies](/docs/administration/retention-policies) to keep it running fast. If you are using another feed, you should configure its retention policies yourself, making sure to cater for packages you may want to deploy.

### Consider the size of your task logs {#task-logs}

Larger task logs put the entire Octopus pipeline under more pressure. A good rule of thumb is to keep your log files under 20MB. We recommend printing only the messages required to understand progress and the reason for any deployment failures. The rest of the information should be streamed to a file, then published as a deployment [artifact](/docs/projects/deployment-process/artifacts).

### Consider how many targets acquire packages in parallel {#parallel-acquisition}

Imagine you have 1,000 deployment targets configured to stream packages from the Octopus Server, and you configure your deployment so all packages are acquired across all of your deployment targets in parallel. This can put a lot of strain on your Octopus Server, making it the constraint in this mix. Alternatively, imagine you have 1,000 deployment targets configured to download packages directly from a package feed or a file share.
Now the package feed or file share becomes the constraint. Firstly, [consider how you transfer your packages](#package-transfer). Then, consider the degree of parallelism that will suit your scenario best. Octopus ships with a sensible and stable default of `10`, and you can tune the degree of parallelism by setting the `Octopus.Acquire.MaxParallelism` variable as an unscoped/global value in your project. Start by slowly increasing this number until you reach a constraint for your scenario, and continue to tune from there.

### Consider how many targets you deploy to in parallel {#parallel-targets}

Imagine you have a step in your deployment process that runs across all deployment targets with a specific role. If there are 1,000 deployment targets with the role, Octopus will attempt to run that step simultaneously across all 1,000 targets. This will cause your deployments to go slower, since the Octopus Server spends most of its time managing concurrency. Alternatively, if you constrain your process to a single deployment target at a time, the Octopus Server and your deployment targets will be inactive the majority of the time, which also results in your deployments taking longer.

Consider using a [rolling deployment](/docs/deployments/patterns/rolling-deployments-with-octopus) to deploy to a subset of these deployment targets at any one time. Rolling deployments allow you to define a "window", which is the maximum number of deployment targets that will run the step at any one time.

:::div{.info}
This default behavior is sensible for smaller installations, but is an unsafe default for larger installations. We are looking to [change this default behavior in a future version of Octopus](https://github.com/OctopusDeploy/Issues/issues/3305).
:::

### Consider how many steps you run in parallel {#parallel-steps}

Steps and actions in your deployment process can be configured to start after the previous step has succeeded (the default), or you can configure them to start at the same time (run in parallel). Similarly to [parallel targets](#parallel-targets), running too many steps in parallel can cause your Octopus Server to become the bottleneck and make your deployments take longer overall.

### Consider how much deployment work the Octopus Server is doing {#server-work}

Some steps, like Azure deployments and AWS steps, [run on a worker](/docs/infrastructure/workers/#where-steps-run). By default, that's the [built-in worker](/docs/infrastructure/workers/#built-in-worker) in the Octopus Server. That means the step will invoke Calamari processes on the server machine to do the deployment work. That workload can be shifted off the server and onto [workers](/docs/infrastructure/workers). See this [blog post](https://octopus.com/blog/workers-performance) for a way to begin looking at workers for performance.

# Project recommendations

Source: https://octopus.com/docs/projects/recommendations.md

We built Octopus Deploy with the core concept of consistency across all environments. The process used to deploy to your development environment is the same process used to deploy to your production environment. You can enable or disable specific steps, but it's the same process, and therefore the same parts, deployed to development or testing environments, that will make it to production. The production deployment will be a non-event because you've tested the process many times, once for each environment in the lifecycle before production. Knowing that the underlying concept of Octopus Deploy is consistency, here are our project recommendations.

## Set up projects as single units of deployment

A project usually represents a single unit of deployment, such as a component, a service, or a database.
A unit of deployment is usually a single package that you will deploy independently. Heritage applications sometimes require multiple packages to be deployed at the same time, creating a larger and more interdependent unit of deployment. Octopus supports both these scenarios, though there are benefits to reducing the dependencies so you can perform smaller deployments.

Aiming for small units of deployment reduces downtime, encourages the decoupling of components, and makes each deployment complete faster. It will also be easier to identify the source of any problems caused during a deployment if fewer components have changed.

## Deploy tightly coupled components together

Components are considered tightly coupled when they depend on one another, and any changes made in one impact the others. For example, consider a web application with a React front-end, a Web API back-end, and a PostgreSQL database. Those components are tightly coupled if adding a column to the database requires changing both the front-end and back-end. Not only that, the front-end and back-end will throw exceptions if the column isn't present in the database. Tightly coupled components must be deployed in a specific order.

The general rule of thumb to follow is: when components are stored in the same source control repo and are built using the same build definition, they should be deployed together.

:::div{.hint}
We previously recommended creating a project for each component. We have found in practice that while it solves a specific problem - you can have a faster deployment when only the front-end or back-end is changed to fix a bug - it generally leads to a higher maintenance overhead of orchestrating multiple projects. An orchestration project is typically created because the components must be deployed in a specific order. We now recommend a single project should be responsible for deploying all the tightly coupled components in an application.
:::

Like any recommendation, we have seen the extreme end of the spectrum: projects with 200+ steps deploying 80+ packages that take over an hour to deploy. That might be a good candidate to split up into smaller projects. However, you should ensure components are decoupled before making changes to the deployment process. Don't change how you deploy the application while components still need to be deployed in a specific order; doing so will cause showstopping bugs. First, focus on decoupling the components, then change how you deploy them.

## Leverage the project per component pattern with decoupled components

We recommend the project per component pattern when those components are decoupled from one another. Returning to the previous web application example, adding a column to the database can still require changing the back-end and front-end. However, the back-end and front-end have the appropriate code to continue processing without errors when the column is not present, and the column isn't required to be populated in the database.

When components are decoupled from one another, they can have different deployment schedules and do not have to be deployed in a specific order. That negates the need for an orchestration project.

:::div{.hint}
In practice, it is rare to see the decoupling of all the components in a web application with a front-end, back-end, and database. It is much more common for functionality, or backend services, to be decoupled.
:::

## Use Lifecycles and Channels to reflect your branching strategies

Most branching strategies follow the "main branch should always be ready to deploy to production" rule. No changes can be made directly to the main branch. Instead, work must be done in a branch and merged into main. Your lifecycles and channels should reflect that rule. For example, imagine your Octopus instance has the following environments.
- Dev
- QA
- Staging
- Production

For most branching strategies, we'd recommend two lifecycles in this example, each with two environments.

- Development Lifecycle
  - Dev
  - QA
- Release Lifecycle
  - Staging
  - Production

The workflow would be as follows:

1. Create a branch and commit some changes.
2. A build is triggered on branch check-in. It creates a release in Octopus for the Development lifecycle and pushes to Dev.
3. Changes are verified in Dev and are promoted to QA.
4. The full test suite is run in QA.
5. Bugs or changes are found; repeat the previous steps.
6. After a few iterations, the change is ready for Production.
7. Create a pull request and merge into main.
8. A build is triggered on check-in to main. It creates a release in Octopus for the Release lifecycle and pushes to Staging.
9. Automated tests are run in Staging.
10. Assuming tests pass, promote to Production. If tests don't pass, a new branch is created and this process starts over.

Octopus Deploy provides the capability for dynamic package / Docker image selection. This allows you to have a different package per environment. The intended use case is when using a third-party external feed and the feed changes between environments. The external feed provides the capabilities to "promote" packages ready for deployment.

We don't recommend having a single lifecycle with all environments. When that happens, we have seen customers create a single release and change the package or package version from QA to Staging. Such an approach is challenging to audit and track.

Changes made on feature or short-lived branches are not ready for Production. They should be deployed to testing environments for verification and testing, but they should never have the chance to make it to Production. Merging into main should trigger a fresh build because you could be merging multiple changes from different branches for the first time. The underlying code has changed, and a new build is needed to test and verify.
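The branching rule above - branch builds create releases in the Development lifecycle, main builds create releases in the Release lifecycle - amounts to a simple decision. A minimal sketch (the function and branch names are illustrative, not an Octopus API):

```python
def lifecycle_for_branch(branch: str) -> str:
    """Illustrative only: map a Git branch to the example lifecycles above."""
    if branch in ("main", "master"):
        # Merges to main are built fresh and released to Staging -> Production.
        return "Release Lifecycle"
    # Feature/short-lived branches are only ever deployed to Dev -> QA.
    return "Development Lifecycle"
```

In practice this decision usually lives in your build server's pipeline configuration, which passes the appropriate channel or lifecycle when it creates the Octopus release.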
For the packages / Docker containers built from branches, append a pre-release tag to the release version. Leverage channel version rules to only allow packages / Docker containers with a pre-release tag for the Development lifecycle. At the same time, only allow packages / Docker containers **without** a pre-release tag for the Release lifecycle.

:::div{.hint}
This section is another reason we recommend deploying all tightly coupled components stored in the same source control repository within the same project. Attempting to coordinate different lifecycles and releases across multiple projects adds overhead, which increases the risk of something breaking and needing to be fixed.
:::

## Include everything required to deploy

Imagine you are working on a greenfield application for six months. It only exists in your development and testing environments, and now it's time to deploy to staging. The web admins have set up a web server running IIS for you using a base image. The DBAs have created an account for the application to use. But what about the configuration? What should the database name be?

When you set up a project's processes, work under the assumption the base applications are present (.NET Framework, IIS, SQL Server, WildFly server, Oracle Database, etc.). With that in mind, also assume that those base applications have never been configured for your application. Assume SQL Server is there, but the database has never been created. Assume IIS is there, but the web application has never been configured.

When it is time to deploy a project to an environment for the first time, you should only need to verify the servers are there and hit the deploy button. The project deployment process will take care of the rest. As a bonus, if a new server is added, you can deploy to that new server without worrying about the configuration.

## Take advantage of run conditions

Almost everyone is familiar with environment [run conditions](/docs/projects/steps/conditions).
For instance, run a step in production only. Alternatively, don't run this step in development or testing. However, there are other [run conditions](/docs/projects/steps/conditions/#run-condition):

* Only running when the previous step was successful.
* Only running on failure.
* Always running.
* Only running when the value of a variable equals true.

Those additional conditions are advantageous. You can configure a step to send a Slack notification when a failure occurs. You could set a manual intervention to happen only if you are deploying during business hours. You can also configure steps to run in parallel with one another. These conditions give you a greater degree of control over your deployments.

## Automate every component's deployment {#automate-every-components-deployment}

A typical scenario we see is that application deployments are automated, but the database deployment is manual. This means a DBA must run the scripts on the night of deployment to production. After they finish, the automated process can be kicked off. Because this is manual, there's a good chance one or more of the scripts were not included in the deployments to dev, testing, or staging. Without prior testing, the likelihood of success decreases and the deployment time increases. Essentially, this great automated process takes a few minutes to finish, but it depends on a manual process that takes anywhere from ten minutes to an hour to complete.

Every component of the application needs to be automated, even the database. Octopus Deploy integrates with many [database deployment](/docs/deployments/databases) tools to help with this sort of automation.

## Conclusion

Just like setting up environments, projects form another critical element in Octopus Deploy. Getting them modeled right is very important in helping your Octopus Deploy instance scale. We've talked to customers who have projects with 200+ steps deploying 80+ packages, and each deployment takes well over an hour.
That is very prone to error and doesn't scale all that well. Hopefully, with these suggestions, you can avoid a similar setup!

# Conditions

Source: https://octopus.com/docs/projects/steps/conditions.md

For each [step](/docs/projects/steps/) that you define in your [deployment processes](/docs/projects/deployment-process), you can set conditions for greater control over the step's execution. You can set conditions to:

- Run the step on specific environments or skip specific environments.
- Specify which channels the step should run on.
- Limit when the step runs based on the status of a previous step.
- Run steps in parallel with a previous step.
- Specify whether the step runs before or after package acquisition.
- Make the step a required step that cannot be skipped.
- Retry a step upon failure.
- Cancel a step if a specified duration elapses.

:::figure
![Conditions](/docs/img/projects/steps/conditions/images/conditions.png)
:::

Some of these options will only appear if they're available. For instance, the [channels](/docs/releases/channels) option is only visible if you have created one or more channels.

## Environments

You can choose which [environments](/docs/infrastructure/environments) steps apply to:

- Run for all applicable [lifecycle](/docs/releases/lifecycles) environments (default).
- Run only for specific environments.
- Skip specific environments.

By default, steps will run on all environments specified in the lifecycle for the project, but you can choose environments to run on or skip.

## Channels

If you have created one or more [channels](/docs/releases/channels), you can specify whether a step runs only when deploying a release through specific channels (e.g., a Script step that only runs for deployments through certain channels to configure extra telemetry).

## Run condition

Run condition lets you specify that a step should run:

- Only if the previous step succeeded.
- Only if the previous step failed.
- Always run.
- When a variable expression evaluates to true.

### Variable expressions

You can use the following expression to run a step only when the deployment is successful and a variable evaluates to true:

```
#{unless Octopus.Deployment.Error}#{Variable}#{/unless}
```

You can achieve the opposite effect by swapping `unless` with `if`:

```
#{if Octopus.Deployment.Error}#{Variable}#{/if}
```

It's also possible to check the status of specific [steps and actions](/docs/projects/variables/system-variables/#tracking-deployment-status).

### Machine-level variable expressions

You can use a step's machine-level output variables to achieve machine-level variable expressions. For example, assume you're setting an output variable in a script step (Step01) that runs on several machines (Web01 and Web02) as follows:

```
# Step01
$machineToSucceed = "Web01"
$shouldCurrentMachineSucceed = $OctopusParameters["Octopus.Machine.Name"] -eq $machineToSucceed
Set-OctopusVariable -name "ShouldRun" -value "$shouldCurrentMachineSucceed"
```

That can be used in a variable expression on a subsequent step:

```
#{if Octopus.Action[Step01].Output[Web01].ShouldRun == "True"}True#{/if}
```

The currently-running machine can be substituted in this expression:

```
#{if Octopus.Action[Step01].Output[#{Octopus.Machine.Name}].ShouldRun == "True"}True#{/if}
```

This will evaluate to `True` on Web01 and `False` on Web02. Machine-level variable expressions are also supported in [rolling deployments](/docs/deployments/patterns/rolling-deployments-with-octopus) using child steps.

### Variable filters in run conditions

It's possible to use [variable filters](/docs/projects/variables/variable-filters) to help create both complex run conditions and variable expressions, but there are limitations to be aware of.

:::div{.warning}
Using variable filters *inline* in the two [conditional statements](/docs/projects/variables/variable-substitutions/#conditionals) `if` and `unless` is **not supported**.
:::

If you wanted a variable run condition to run a step *only* when the release has a pre-release tag matching `my-branch`, you might be tempted to use the `VersionPreReleasePrefix` [extraction filter](/docs/projects/variables/variable-filters/#extraction-filters) to write a condition like this:

```
#{if Octopus.Release.Number | VersionPreReleasePrefix == "my-branch"}true#{/if}
```

However, the evaluation of the statement would always return `False`, as the syntax is not supported. Instead, you need to create a variable that includes the variable filter you want to use. For this example, let's assume it's named `PreReleaseBranch` with the value:

```
#{Octopus.Release.Number | VersionPreReleasePrefix}
```

Once you have created your variable, you can use it in your run condition like this:

```
#{if PreReleaseBranch == "my-branch"}True#{/if}
```

## Start trigger

If you have more than one step in your deployment process, the Start Trigger option lets you choose between:

- Running steps in parallel.
- Waiting for the previous step to complete, then starting.

When you review a process with two steps that run in parallel, you'll notice two arrows linking the steps that run in parallel.

### Maximum parallelism

There is no limit on the number of steps that can run in parallel. The `Octopus.Action.MaxParallelism` variable specifies the number of machines on which the action will concurrently execute; this is the same variable used for [rolling deployments](https://octopus.com/docs/deployments/patterns/rolling-deployments-with-octopus).

### Steps in parallel on the same deployment target

For safety reasons, by default, Octopus runs only one step at a time on a single deployment target. If you want to run multiple steps on a deployment target in parallel, [you'll need to enable that behavior](/docs/administration/managing-infrastructure/run-multiple-processes-on-a-target-simultaneously).
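The `Octopus.Action.MaxParallelism` behavior described above - running a step across many targets, but on at most N at a time - is essentially a bounded worker pool. A rough sketch of the idea (the target names and window size are made up; this is not Octopus's actual scheduler):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: run a "step" against many targets, at most
# `max_parallelism` at a time, the way Octopus.Action.MaxParallelism
# caps concurrency for a step across targets.
def run_step_on_targets(targets, step, max_parallelism=3):
    with ThreadPoolExecutor(max_workers=max_parallelism) as pool:
        # map() preserves target order while limiting concurrency.
        return list(pool.map(step, targets))

results = run_step_on_targets(
    [f"web-{n:02d}" for n in range(1, 11)],
    lambda target: f"deployed to {target}",
)
```

The trade-off is the one discussed on the performance page: a larger window finishes sooner but puts more simultaneous load on the server and targets.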
### Steps which depend on each other

Be careful not to run steps that depend on each other in parallel. If **Step2** depends on the success of **Step1**, they shouldn't run in parallel; instead, run them one after the other, with **Step2** running only if **Step1** was successful.

### Other ways to improve deployment time

We have written a comprehensive guide on [deployment performance](/docs/projects/deployment-process/performance) which covers many other aspects that affect your deployment time in addition to running steps in parallel.

## Package requirement

The package requirement condition allows you to specify when package acquisition should occur. By default, a deployment will acquire packages immediately before the first step that uses a package. This option can be used to explicitly indicate whether a step should run before or after package acquisition. There are three options to choose from:

- Let Octopus Decide (default): Packages may be acquired before or after this step runs. Octopus will determine the best time.
- After package acquisition: Packages will be acquired before this step runs.
- Before package acquisition: Packages will be acquired after this step runs.

This option is hidden when it does not make sense, for example, when a script step is configured to run after a package step (packages must have been acquired by that point).

## Required

By default, deployment steps can be skipped when creating a deployment. Marking a step as Required prevents the step from being skipped.

## Retries and Execution Timeouts

:::div{.warning}
This functionality is available on all steps except the following:

- `Deploy a Release`
- `Health Check`
- `Manual Intervention Required`
- `Send an Email`
:::

:::figure
![Retry-and-timeout](/docs/img/projects/steps/conditions/images/retry-timeout.png)
:::

### Retries

Enabling Retries gives you the ability to automatically retry a step if it fails, with up to three attempts.
This feature is particularly useful when dealing with steps that commonly fail due to temporary or transient errors during deployment. When a step fails, the server will wait for a "backoff interval" to pass before trying again. By default, the backoff interval is 15 seconds; however, this value can be adjusted by updating the value in the Backoff Interval field. This field accepts a wait time in seconds.

You can also [recover from communication errors with a Tentacle](/docs/infrastructure/deployment-targets/machine-policies/#recover-from-communication-errors) to reduce deployment failures that occur when Tentacle is on an unstable network connection.

### Timeouts

When the configured Execution Timeout period has elapsed, the action will be cancelled. It's important to note that Execution Timeouts encompass all processes involved in step execution. This includes connecting to a target, bootstrapper scripts, execution container start-up, and package cache clean-ups. We recommend setting a slightly longer timeout than expected; in most cases, an additional minute should account for this.

We recommend using both features on steps that are likely to experience transient errors to increase your chances of a successful deployment.

# IIS websites and application pools

Source: https://octopus.com/docs/projects/steps/configuration-features/iis-website-and-application-pool.md

The IIS Web Site and Application Pool feature is one of the [configuration features](/docs/projects/steps/configuration-features/) you can enable as you define the [steps](/docs/projects/steps/) in your [deployment process](/docs/projects/deployment-process). The **IIS Web Site and Application Pool** feature is available on **deploy a package** steps; however, there is also a **Deploy to IIS** step which offers the same features and is designed specifically for IIS deployments. See [IIS Websites and Applications](/docs/deployments/windows/iis-websites-and-application-pools) for more details.
## Learn more - Generate an Octopus guide for [IIS and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=IIS). # System variables Source: https://octopus.com/docs/projects/variables/system-variables.md This page lists built-in [variables](/docs/projects/variables/) provided by Octopus that can be used in your deployment [custom scripts](/docs/deployments/custom-scripts). Most of the variables listed here are available in modern versions of Octopus and Calamari. However, some are only available from a specific version. See [Older versions](#older-versions) for more detail on when variables became available. :::div{.warning} **All variables are strings** Note that when evaluating values, **all Octopus variables are strings** even if they look like numbers or other data types. ::: ## Release {#release} Release-level variables are drawn from the project and release being created. `Octopus.Release.Id` The ID of the release. Example: *releases-123* `Octopus.Release.Number` The version number of the release. Example: *1.2.3* `Octopus.Release.Notes` Release notes associated with the release, in Markdown format. Example: *Fixes bugs 1, 2 & 3* `Octopus.Release.Created` The date and time at which the release was created. Example: *Tuesday 10th September 1:23 PM* `Octopus.Release.CustomFields[_name_]` The value of a custom field set on a release. Example: *4587* for a custom field `Pull Request Number` Example: *TST-123* for a custom field `Jira Ticket Number` ### Release package build information {#release-package-build-information} `Octopus.Release.Package` Packages, including changes, associated with the release. See below. This is a collection. `Octopus.Release.Builds` Build and version control details associated with the release. This is a collection. 
:::div{.hint} The `Octopus.Release.Package` and `Octopus.Release.Builds` variables: - will only be populated if [build information](/docs/packaging-applications/build-servers/build-information) has been pushed from the build server. - are only available to the project [release notes](/docs/releases/release-notes); they are not accessible from the project deployment steps. ::: #### Octopus.Release.Package details The `Octopus.Release.Package` variable is a collection of `Package` objects based on the following structures: ```csharp public class Package { public string PackageId { get; set; } public string Version { get; set; } public WorkItemLink[] WorkItems { get; set; } public Commit[] Commits { get; set; } } public class WorkItemLink { public string Id { get; set; } public string LinkUrl { get; set; } public string Description { get; set; } } public class Commit { public string CommitId { get; set; } public string LinkUrl { get; set; } public string Comment { get; set; } } ``` The packages in a release are available as a collection which can be [iterated over](/docs/projects/variables/variable-substitutions/#repetition). e.g. ```text #{each package in Octopus.Release.Package} This release contains #{package.PackageId} #{package.Version} #{/each} ``` A particular package can be selected by indexing on the package ID: ```text #{Octopus.Release.Package[Acme.Web].Version} ``` The variables available for packages are: | Name | Example | | ----------- | --------------------------------- | | `PackageId` | `#{package.PackageId}` | | `Version` | `#{package.Version}` | | `Commits` | This is a collection. See below. | | `WorkItems` | This is a collection. See below. | On each package, the commits associated with that package are available as a collection which can be iterated over.
e.g.: ```text #{each package in Octopus.Release.Package} #{each commit in package.Commits} - [#{commit.CommitId}](#{commit.LinkUrl}) - #{commit.Comment} #{/each} #{/each} ``` A particular commit can be selected by indexing on the commit ID (when using git the commit ID is the commit hash): ```text package.Commits[685afd4161d085e6e5f56a66e72e2298e402b114].Comment ``` The variables available for commits are: | Name | Example | | ---------- | -------------------- | | `CommitId` | `#{commit.CommitId}` | | `LinkUrl` | `#{commit.LinkUrl}` | | `Comment` | `#{commit.Comment}` | If the Octopus instance has one or more of the [Issue Tracker integrations](/docs/releases/issue-tracking) enabled, the commit messages will be parsed for issues. Any issues found will be displayed with the build information, and also available as variables: ```text #{each issue in package.WorkItems} - [#{issue.Id}](#{issue.LinkUrl}) #{/each} ``` A particular issue can be selected by indexing on the ID: ```text package.WorkItems[4465].LinkUrl ``` The variables available for issues are: | Name | Example | | --------- | ------------------ | | `Id` | `#{issue.Id}` | | `LinkUrl` | `#{issue.LinkUrl}` | There is also a distinct list of issues across all packages available in: ```text #{each workItem in Octopus.Release.WorkItems} - [#{workItem.Id}](#{workItem.LinkUrl}) - #{workItem.Description} #{/each} ``` #### Octopus.Release.Builds details The `Octopus.Release.Builds` variable is a collection of Build objects based on the following structures: ```csharp public class Build { public BuildPackage[] Packages { get; set; } public string BuildUrl { get; set; } public string Branch { get; set; } public string BuildEnvironment { get; set; } public string BuildNumber { get; set; } public string VcsRoot { get; set; } public string VcsType { get; set; } public string VcsCommitNumber { get; set; } public string VcsCommitUrl { get; set; } } public class BuildPackage { public string PackageId { get; set; } public 
string Version { get; set; } } ``` The builds in a release are available as a collection which can be [iterated over](/docs/projects/variables/variable-substitutions/#repetition). e.g. ```text #{each build in Octopus.Release.Builds} This release contains resources contributed by the build #{build.BuildUrl} #{/each} ``` Builds have a zero-based integer index, meaning the first build can be selected at index 0: ```text Octopus.Release.Builds[0].BuildUrl ``` The variables available for builds are: `Packages` A JSON array with the packages created by a build. Example: `#{build.Packages}` `BuildUrl` A link to the CI build. Example: `#{build.BuildUrl}` `Branch` The VCS branch associated with the build. Example: `#{build.Branch}` `BuildEnvironment` The CI server that executed the build. Example: `#{build.BuildEnvironment}` `BuildNumber` The build number associated with the build. Example: `#{build.BuildNumber}` `VcsRoot` A link to the VCS repository associated with the build. Example: `#{build.VcsRoot}` `VcsType` The type of VCS associated with the build (e.g. git). Example: `#{build.VcsType}` `VcsCommitNumber` The VCS commit ID associated with the build. Example: `#{build.VcsCommitNumber}` `VcsCommitUrl` A link to the commit associated with the build. Example: `#{build.VcsCommitUrl}` The variables available for build packages are: `PackageId` The ID of the package created by the build. Example: `#{build.Packages[0].PackageId}` `Version` The version of the package created by the build. Example: `#{build.Packages[0].Version}` ### Release Branch information {#release-branch-information} For projects that have [version control](/docs/projects/version-control) enabled, information about the branch and commit from which the release was created is also available. `Octopus.Release.Git.BranchName` The branch name. Example: *features/some-new-feature* `Octopus.Release.Git.CommitHash` The commit hash. 
Example: *0c708fdec272bc4446c6cabea4f0022c2b616eba* `Octopus.Release.Git.Ref` The git reference. Example: *Version 1* ## Deployment Deployment-level variables are drawn from the project and release being deployed. `Octopus.Acquire.MaxParallelism` This variable limits the maximum number of packages that can be concurrently deployed to multiple targets. Default: *10* Example: *2* `Octopus.Acquire.DeltaCompressionEnabled` Toggle whether delta compression is enabled when sending packages to targets. Example: *true* `Octopus.Deployment.Comments` User-provided comments on the deployment. Example: *Signed off by Alice* `Octopus.Deployment.Created` The date and time at which the deployment was created. Example: *Tuesday 10th September 1:23 PM* `Octopus.Deployment.CreatedBy.DisplayName` The full name of the user who initiated the deployment. Example: *Alice King* `Octopus.Deployment.CreatedBy.EmailAddress` The email address of the user who initiated the deployment. Example: *[alice@example.com](mailto:alice@example.com)* `Octopus.Deployment.CreatedBy.Id` The ID of the user who initiated the deployment. Example: *users-123* `Octopus.Deployment.CreatedBy.Username` The username of the user who initiated the deployment. Example: *alice* `Octopus.Deployment.Error` This variable outputs the error/exit code for a failed deployment. [See here](/docs/projects/variables/system-variables). Example: *Script returned exit code 123* `Octopus.Deployment.ErrorDetail` The error/exit code for the deployment along with the Octopus stack trace. [See here](/docs/projects/variables/system-variables). Example: *System.IO.FileNotFoundException: file C:\Missing.txt does not exist (at...)* `Octopus.Deployment.ForcePackageDownload` If true, the package will be freshly downloaded from the feed/repository regardless of whether it is already present on the endpoint *(Boolean)*. Example: *False* `Octopus.Deployment.Id` The ID of the deployment.
Example: *deployments-123* `Octopus.Deployment.Name` The name of the deployment. Example: *Deploy to Production* `Octopus.Deployment.PreviousSuccessful.Id` The ID of the previous successful deployment of this project in the target environment. Example: *deployments-122* `Octopus.Deployment.Machines` IDs of machines being targeted by the deployment. Example: *machines-123,machines-124* `Octopus.Deployment.SpecificMachines` Specific machines being targeted by the deployment, if any *(List)*. Example: *machines-123,machines-124* `Octopus.Deployment.ExcludedMachines` IDs of machines that have been excluded from the deployment (generally for being unavailable). Example: *machines-123,machines-124* `Octopus.Deployment.Tenant.Id` The ID of the Tenant being deployed for. If the deployment is untenanted then this variable will not be present. Example: *Tenants-123* `Octopus.Deployment.Tenant.Name` The name of the Tenant being deployed for. If the deployment is untenanted then this variable will not be present. Example: *Acme Corp* `Octopus.Deployment.Tenant.Tags` Comma-delimited list of tags that belong to the Tenant being deployed for. If the deployment is untenanted then this variable will not be present. Example: *Tenant type/External, Upgrade ring/Early adopter* `Octopus.Deployment.Trigger.Id` The ID of the Trigger that created the deployment. It is possible for a deployment to be triggered due to multiple triggers. In this case, the variable will contain the ID of *one* of the triggers. Example: *ProjectTriggers-522* `Octopus.Deployment.Trigger.Name` The name of the Trigger that created the deployment. It is possible for a deployment to be triggered due to multiple triggers. In this case, the variable will contain the name of *one* of the triggers. Example: *Nightly Deploy to Dev* `Octopus.Deployment.WorkerLeaseCap` This is an opt-in variable to help distribute multiple steps referencing the same package (including container) across a worker pool.
By setting this, a worker will be reused for steps up to the cap, after which another worker will be selected and reused in the same way. If all workers have reached the cap, additional steps will be spread out evenly. By default, this behavior is disabled, and the same worker will be reused for all steps referencing the same package. Opt in by setting the variable to a number higher than 0. Example: `1` - achieves a similar effect to round-robin. Example: `5` - a balance between reducing package transfer and distributing load. Note: This value applies to both deployment processes and runbooks, as long as it's scoped to the particular scenario. `Octopus.Endpoint.\_type\_.\_property\_` Properties describing the endpoint being deployed. Example: *ftp.example.com* `Octopus.Environment.Id` The ID of the environment. Example: *environments-123* `Octopus.Environment.MachinesInRole[\_role\_]` Lists the machines with a specified target tag being deployed to. Example: *machines-123,machines-124* `Octopus.Environment.Name` The name of the environment. Example: *Production* `Octopus.Environment.SortOrder` The ordering applied to the environment when it is displayed on the dashboard and elsewhere. Example: *3* `Octopus.Machine.Id` The ID of the machine. Example: *machines-123* `Octopus.Machine.Name` The name that was used to register the machine in Octopus. Not the same as *Hostname*. Example: *WEBSVR01* `Octopus.Machine.Roles` The target tags associated with the machine *(List)*. Example: *web-server,frontend* `Octopus.Machine.Hostname` The host part of the URI that was used to register the machine; this could be an IP address or a hostname, depending on what was supplied. Only set for Listening Tentacles. Example: Database01, Database01.local, 192.168.200.100 `Octopus.Project.Id` The ID of the project. Example: *projects-123* `Octopus.Project.Name` The name of the project. Example: *OctoFx* `Octopus.ProjectGroup.Id` The ID of the project group.
Example: *project-groups-123* `Octopus.ProjectGroup.Name` The name of the project group. Example: *Public Web Properties* `Octopus.Release.Channel.Name` The Channel name associated with the release. Example: *2.x Feature Branch* `Octopus.Release.Notes` Release notes associated with the release, in Markdown format. Example: *Fixes bugs 1, 2 & 3* `Octopus.Release.Number` The version number of the release. Example: *1.2.3* `Octopus.Release.Id` The ID of the release. Example: *releases-123* `Octopus.Release.Previous.Id` The ID of the last release of the project. Example: *releases-122* `Octopus.Release.Previous.Number` The version number of the last release of the project. Example: *1.2.2* `Octopus.Release.PreviousForEnvironment.Id` The ID of the last release of the project to the current environment. Example: *releases-122* `Octopus.Release.PreviousForEnvironment.Number` The version number of the last release of the project to the current environment. Example: *1.2.2* `Octopus.Release.CurrentForEnvironment.Id` The ID of the release of the last successful deployment to the current environment. Example: *releases-122* `Octopus.Release.CurrentForEnvironment.Number` The version number of the release of the last successful deployment to the current environment. Example: *1.2.2* `Octopus.Release.Created` The date and time at which the release was created. Example: *Tuesday 10th September 1:23 PM* `Octopus.Space.Id` The ID of the Space. Example: *Spaces-1* `Octopus.Space.Name` The name of the Space. Example: *Dev Space* `Octopus.Task.Argument[_name_]` Argument values provided when creating the task. Example: *deployments-123* `Octopus.Task.Id` The ID of the task. Example: *server-tasks-123* `Octopus.Task.Name` The name of the task. Example: *Deploy release 1.2.3 to Production* `Octopus.Task.QueueTime` The date and time the task should be queued for execution.
Example: *Tuesday 10th September 1:30 PM* `Octopus.Task.QueueTimeExpiry` The date and time before which the task must start. Example: *Tuesday 10th September 2:30 PM* `Octopus.Tentacle.CurrentDeployment.PackageFilePath` The path to the package file being deployed. Example: *C:\Octopus\Tentacle\Packages\OctoFx.1.2.3.nupkg* `Octopus.Tentacle.CurrentDeployment.TargetedRoles` The intersection of the target tags targeted by the step, and those associated with the machine. Example: *web-server* `Octopus.Tentacle.PreviousInstallation.CustomInstallationDirectory` The directory into which the previous version of the package was deployed. Example: *C:\InetPub\WWWRoot\OctoFx* `Octopus.Tentacle.PreviousInstallation.OriginalInstalledPath` The directory into which the previous version of the package was extracted. Example: *C:\Octopus\Tentacle\Apps\Production\OctoFx\1.2.2* `Octopus.Tentacle.PreviousInstallation.PackageFilePath` The path to the package file previously deployed. Example: *C:\Octopus\Tentacle\Packages\OctoFx.1.2.2.nupkg* `Octopus.Tentacle.PreviousInstallation.PackageVersion` The previous version of the package that was deployed to the Tentacle. Example: *1.2.3* `Octopus.Web.ProjectLink` A path relative to the Octopus Server URL at which the project can be viewed. Example: */app/projects/projects-123* `Octopus.Web.ReleaseLink` A path relative to the Octopus Server URL at which the release can be viewed. Example: */app/releases/releases-123* `Octopus.Web.DeploymentLink` A path relative to the Octopus Server URL at which the deployment can be viewed. Example: */app/deployment/deployments-123* ### Deployment changes {#deployment-changes} `Octopus.Deployment.Changes` A JSON array of `ReleaseChanges` objects. These can be iterated over and the properties accessed using regular Octopus variable expressions (see below). This will be JSON (see below). 
`Octopus.Deployment.WorkItems` The distinct list of issues across all [changes in the deployment](/docs/releases/deployment-changes/). This is a JSON array of `WorkItemLink` objects, defined below. This data will only be available where [build information](/docs/packaging-applications/build-servers/build-information/) has been pushed and an [issue tracker integration](/docs/releases/issue-tracking) is enabled. This will be JSON (see below). `Octopus.Deployment.PackageBuildInformation` The distinct list of package [build information](/docs/packaging-applications/build-servers/build-information/) across all [changes in the deployment](/docs/releases/deployment-changes/). This is a JSON array of `ReleasePackageVersionBuildInformation` objects, defined below. This data will only be available where [build information](/docs/packaging-applications/build-servers/build-information) has been pushed. This will be JSON (see below). The JSON structure contained in the `Octopus.Deployment.Changes` variable is an array of `ReleaseChanges` objects matching the following C# classes: ```csharp public class ReleaseChanges { public string Version { get; set; } public string ReleaseNotes { get; set; } public ReleasePackageVersionBuildInformation[] BuildInformation { get; set; } public WorkItemLink[] WorkItems { get; set; } public CommitDetails[] Commits { get; set; } } public class ReleasePackageVersionBuildInformation { public string PackageId { get; set; } public string Version { get; set; } public string BuildEnvironment { get; set; } public string BuildNumber { get; set; } public string BuildUrl { get; set; } public string Branch { get; set; } public string VcsType { get; set; } public string VcsRoot { get; set; } public string VcsCommitNumber { get; set; } public string VcsCommitUrl { get; set; } public WorkItemLink[] WorkItems { get; set; } public CommitDetails[] Commits { get; set; } } public class WorkItemLink { public string Id { get; set; } public string LinkUrl {
get; set; } public string Source { get; set; } public string Description { get; set; } } public class CommitDetails { public string Id { get; set; } public string LinkUrl { get; set; } public string Comment { get; set; } } ``` There is an entry per release and it includes the release notes (**in Markdown format**) and the build information for each of the packages in that release. **Example:** The following iterates the changes in the deployment, printing the release version and the issues contained. ```text #{each change in Octopus.Deployment.Changes} #{change.Version} #{each issue in change.WorkItems} #{issue.Id} - #{issue.LinkUrl} #{/each} #{/each} ``` ### Deployment changes templates {#deployment-changes-templates} `Octopus.Deployment.ChangesMarkdown` The output of applying the project's deployment changes template. This will be Markdown. `Octopus.Deployment.Targets` The distinct targets being deployed to. This provides a dictionary of objects with ID and Name properties, keyed on ID. This is a distinct list across all steps in the deployment process. ## Action {#action} Action-level variables are available during execution of an action. Indexer notation such as `Octopus.Action[Website].TargetRoles` can be used to refer to values for different actions. `Octopus.Action.Container.Image` The name of the container image being deployed. Example: *OctoFx-RateService* `Octopus.Action.Id` The ID of the action. Example: *85287bef-fe6c-4eb7-beef-74f5e5a6b5b0* `Octopus.Action.IsSkipped` Whether or not the action has been skipped in the current deployment *(Boolean)*. Note: This value can be True or null (indicated by an empty string). Example: *True* `Octopus.Action.Manual.Instructions` The instructions provided for a manual step. Example: *Don't break anything* `Octopus.Action.Manual.ResponsibleTeamIds` The teams responsible for completing a manual step *(List)*.
Example: *teams-123,teams-124* `Octopus.Action.MaxParallelism` The maximum number of deployment targets on which the action will concurrently execute, and the maximum number of steps which will run in parallel. This value can be set in a project variable to change the default for the project. Additionally, you can scope a value to specific actions to control concurrency across your deployment targets. This is the same variable which is set when configuring a [rolling deployment](/docs/deployments/patterns/rolling-deployments-with-octopus). *(Number - Default: 10)*. **Note:** Some built-in steps have their own concurrency limit and will ignore this value if set. Example: *5* `Octopus.Action.Name` The name of the action. Example: *Website* `Octopus.Action.Number` The sequence number of the action in the deployment process *(Number)*. Example: *5* `Octopus.Action.Package.CustomInstallationDirectory` If set, a specific directory to which the package will be copied after extraction. Example: *C:\InetPub\WWWRoot\OctoFx* `Octopus.Action.Package.CustomInstallationDirectoryShouldBePurgedBeforeDeployment` If true, all files in the `Octopus.Action.Package.CustomInstallationDirectory` will be deleted before deployment *(Boolean)*. Example: *False* `Octopus.Action.Package.DownloadOnTentacle` If true, the package will be downloaded by the Tentacle, rather than pushed by the Octopus Server *(Boolean)*. Example: *False* `Octopus.Action.Package.TreatConfigTransformationWarningsAsErrors` If true, any warnings in .NET configuration transformations will be treated as errors and will fail the deployment *(Boolean)*. Example: *True* `Octopus.Action.Package.IgnoreConfigTransformationErrors` If true, any errors in .NET configuration transformations will be treated as informational rather than errors that will fail the deployment *(Boolean)*.
Example: *False* `Octopus.Action.Package.IgnoreVariableReplacementErrors` If true, any errors in variable replacement will be treated as a warning rather than an error that will fail the deployment *(Boolean)*. Example: *False* `Octopus.Action.Package.InstallationDirectoryPath` The directory where the package was installed. It is not available prior to package extraction. Example: *C:\InetPub\WWWRoot\OctoFx* `Octopus.Action.Package.FeedId` The ID of the feed from which the package being deployed was pulled. Example: *feeds-123* `Octopus.Action.Package.PackageId` The ID of the package being deployed. Example: *OctoFx.RateService* `Octopus.Action.Package.PackageVersion` The version of the package being deployed. Example: *1.2.3* `Octopus.Action.Package.SkipIfAlreadyInstalled` If true, and the version of the package being deployed is already present on the machine, its re-deployment will be skipped (use with caution) *(Boolean)*. Example: *False* `Octopus.Action.Script.ScriptBody` The script being run in a script step. Example: *Write-Host 'Hello!'* `Octopus.Action.Script.Syntax` The syntax of the script being run in a script step. Example: *PowerShell* `Octopus.Action.Script.CSharp.NuGetSource` Overrides the NuGet source used by the dotnet executor when running C# script steps. `Octopus.Action.SkipRemainingConventions` If set by the user, completes processing of the action without running further conventions/scripts *(Boolean)*. This should be set as an [output variable](/docs/projects/variables/output-variables). e.g.
`Set-OctopusVariable -name 'Octopus.Action.SkipRemainingConventions' -value 'True'` Example: *True* `Octopus.Action.TargetRoles` Machine target tags targeted by the action *(List)*. Example: *web-server,frontend* `Octopus.Action.Template.Id` If the action is based on a step template, the ID of the template. Example: *action-templates-123* `Octopus.Action.Template.Version` If the action is based on a step template, the version of the template in use *(Number)*. Example: *123* `Octopus.Action.Status.Error` If the action failed because of an error, a description of the error. Example: *The server could not be contacted* `Octopus.Action.Status.ErrorDetail` If the action failed because of an error, a full description of the error. Example: *System.Net.SocketException: The server ...* `Octopus.Action.SubstituteInFiles.EnableNoMatchWarning` Controls whether a warning is displayed in the Task log when no files are found matching one or more of the glob patterns in Substitute Variables in Files. Example: *False* ### Reference package variables {#reference-package-variables} When [referencing packages](/docs/deployments/custom-scripts/run-a-script-step/#referencing-packages) in custom scripts, they can contribute variables that can be used just like any other variable. The variables are available **per package**. Assuming a referenced package named `Acme`: `Octopus.Action.Package[Acme].PackageId` The package ID. Example: *Acme* `Octopus.Action.Package[Acme].FeedId` The feed ID. Example: *feeds-123* `Octopus.Action.Package[Acme].PackageVersion` The version of the package included in the release. Example: *1.4.0* `Octopus.Action.Package[Acme].OriginalPath` The location of the zip file before any actions are taken. Example: *C:\Octopus\Packages\Spaces-1\feeds-builtin\Acme\Acme.1.4.0.zip* `Octopus.Action.Package[Acme].ExtractedPath` The absolute path to the extracted directory (if the package is configured to be extracted). 
Example: *C:\Octopus\Work\20210821060923-7117-31\Acme* `Octopus.Action.Package[Acme].PackageFilePath` The absolute path to the package file (if the package has been configured to not be extracted). Example: *C:\Octopus\Work\20210821060923-7117-31\Acme.zip* `Octopus.Action.Package[Acme].PackageFileName` The name of the package file (if the package has been configured to not be extracted). Example: *Acme.zip* #### Docker image package variables {#docker-image-package-variables} In a scenario where your package reference is a Docker image, some additional variables will be contributed. Assuming a package-reference named `Acme`: `Octopus.Action.Package[Acme].Image` The fully qualified image name. Example: *index.docker.io/Acme:1.4.0* `Octopus.Action.Package[Acme].Registry` The URI of the registry of the feed from which the image was acquired. Example: *index.docker.io* `Octopus.Action.Package[Acme].Version` The version of the image included in the release. Example: *1.4.0* `Octopus.Action.Package[Acme].Feed.UserName` The username of the feed from which the image was acquired (if the feed is configured to use credentials). Example: *Alice* `Octopus.Action.Package[Acme].Feed.Password` The password of the feed from which the image was acquired (if the feed is configured to use credentials). Example: *Password01!* ## Azure `Octopus.Action.Azure.CertificateThumbprint` The thumbprint of the X509 certificate used to authenticate with the Azure Subscription targeted by this action. Example: *86B5C8E5553981FED961769B2DA3028C619596AC* `Octopus.Action.Azure.PackageExtractionPath` If set by the user, the temporary path to extract Azure packages into during deployment. Example: Z:\Temp\packages\ `Octopus.Action.Azure.SubscriptionId` The Azure Subscription ID being targeted by this action. Example: *8affaa7d-3d74-427c-93c5-2d7f6a16e754* `Octopus.Action.Azure.ResourceGroupDeploymentName` Override the auto-generated resource group deployment name when deploying a resource group.
Example: my-resource-group-deployment-name ## Azure Cloud Service `Octopus.Action.Azure.CloudServiceConfigurationFileRelativePath` If set by the user, the relative path to the \*.cscfg file, with a fallback to ServiceConfiguration.{Environment}.cscfg or ServiceConfiguration.Cloud.cscfg. Example: *ServiceConfiguration.Custom.cscfg* `Octopus.Action.Azure.CloudServiceName` The name of the Cloud Service being targeted by this action. Example: *my-cloud-service-web* `Octopus.Action.Azure.CloudServicePackageExtractionDisabled` Octopus will not unpack the \*.cspkg file if this variable is set to True; instead, the \*.cspkg file will be pushed to Azure as-is. Example: True `Octopus.Action.Azure.CloudServicePackagePath` The path of the \*.cspkg file used by this action. Example: *Z:\Temp\packages\my-cloud-service-web.cspkg* `Octopus.Action.Azure.LogExtractedCspkg` If true, the contents of the extracted \*.cspkg will be written to the log to help diagnose deployment issues *(Boolean)*. Example: *True* `Octopus.Action.Azure.Slot` The slot of the Cloud Service being targeted by this action. Example: *Staging* or *Production* `Octopus.Action.Azure.StorageAccountName` The name of the Azure Storage Account to which \*.cspkg files will be uploaded for deployment to the Cloud Service. Example: *my-storage-account* `Octopus.Action.Azure.SwapIfPossible` If true, the action will attempt to perform a VIP swap instead of deploying directly into the targeted Slot. Example: *True* `Octopus.Action.Azure.UploadedPackageUri` The Storage URI of the \*.cspkg file that will be deployed to the Cloud Service. Example: `https://my-storage-account/container/my-cloudservice.web.cspkg` `Octopus.Action.Azure.UseCurrentInstanceCount` If true, the action will maintain the number of Instances in the Cloud Service rather than reverting to what is defined in the \*.cspkg file.
Example: *True* `Octopus.Action.Azure.DeploymentLabel` If set, the custom deployment label will be used for the Azure cloud service deployment. Example: my custom label for build 3.x.x ## Azure Web Apps `Octopus.Action.Azure.WebAppName` The name of the Web App being targeted by this deployment. Example: *my-web-app* `Octopus.Action.Azure.DeploymentSlot` The name of the Web App slot being targeted by this deployment. Example: *staging* `Octopus.Action.Azure.ResourceGroupName` The name of the resource group being targeted by this deployment. Example: MyResourceGroup `Octopus.Action.Azure.RemoveAdditionalFiles` When *True* instructs Web Deploy to delete files from the destination that aren't in the source package. Example: *True* `Octopus.Action.Azure.PreserveAppData` When *True* instructs Web Deploy to skip Delete operations in the **App\_Data** directory. Example: *True* `Octopus.Action.Azure.AppOffline` When *True* instructs Web Deploy to safely bring down the app domain by adding a blank **app\_offline.html** file in the site root. Example: *True* ## Output Output variables are collected during execution of a step and made available to subsequent steps using notation such as `Octopus.Action[Website].Output[WEBSVR01].Package.InstallationDirectoryPath` to refer to values based on the action and machine that produced them. See also [Output variables](/docs/projects/variables/output-variables). `Octopus.Action[_name_].Output.\_property\_` The results of calling `Set-OctopusVariable` during an action are exposed for use in other actions using this pattern. Example: *Octopus.Action[Website].Output.WarmUpResponseTime* `Octopus.Action[_name_].Output.Manual.Notes` Notes provided by the user who completed a manual step. Example: *Signed off by Alice* `Octopus.Action[_name_].Output.Package.InstallationDirectoryPath` The directory where the package was installed. 
Example: *C:\Octopus\Tentacle\Apps\Production\MyApp\1.2.3* `Octopus.Action[_name_].Output.Manual.ResponsibleUser.DisplayName` The full name of the user who completed the manual step. Example: *Alice King* `Octopus.Action[_name_].Output.Manual.ResponsibleUser.EmailAddress` The email address of the user who completed the manual step. Example: *[alice@example.com](mailto:alice@example.com)* `Octopus.Action[_name_].Output.Manual.ResponsibleUser.Id` The ID of the user who completed the manual step. Example: *users-123* `Octopus.Action[_name_].Output.Manual.ResponsibleUser.Username` The username of the user who completed the manual step. Example: *alice* `Octopus.Action[_name_].Output.OctopusAzureCloudServiceDeploymentID` The ID of the completed Azure Cloud Service deployment. Example: *c9f52da2b00a4313b3b64bb2ad0f409f* `Octopus.Action[_name_].Output.OctopusAzureCloudServiceDeploymentUrl` The URL of the completed Azure Cloud Service deployment. Example: `http://c9f52da2b00a4313b3b64bb2ad0f409f.cloudapp.net/` ## Step Step-level variables are available during execution of a step. Indexer notation such as `Octopus.Step[Website].Number` can be used to refer to values for different steps. `Octopus.Step.Id` The ID of the step. Example: *80b3ad09-eedf-40d6-9b66-cf97f5c0ffee* `Octopus.Step.Name` The name of the step. Example: *Website* `Octopus.Step.Number` The number of the step *(Number)*. Example: *2* `Octopus.Step.Status.Code` A code describing the current status of the step. Example: *Succeeded* `Octopus.Step.Status.Error` If the step failed because of an error, a description of the error. Example: *The server could not be contacted* `Octopus.Step.Status.ErrorDetail` If the step failed because of an error, a full description of the error. Example: *System.Net.SocketException: The server could not be contacted (at ...)* ## Agent Agent-level variables describe the deployment agent or Tentacle on which the deployment is executing.
`Octopus.Tentacle.Agent.ApplicationDirectoryPath` The directory under which the agent installs packages. Example: *C:\Octopus\Tentacle\Apps* `Octopus.Tentacle.Agent.InstanceName` The instance name that the agent runs under. Example: *Tentacle* `Octopus.Tentacle.Agent.ProgramDirectoryPath` The directory containing the agent's own executables. Example: *C:\Program Files\Octopus Deploy\Tentacle* `Octopus.Agent.ProgramDirectoryPath` The directory containing either the server's or Tentacle's executables, depending on which the step is being executed on. Example: *C:\Program Files\Octopus Deploy\Octopus* ## Worker Pool When a step is run on a worker, the following variables are available: `Octopus.WorkerPool.Id` The Id of the pool. Example: WorkerPools-1 `Octopus.WorkerPool.Name` The name of the pool. Example: Default Worker Pool ## Server Server-level variables describe the Octopus Server on which the deployment is running. `Octopus.Web.BaseUrl` The default URL at which the server API can be accessed. Note that this is based on the server's ListenPrefixes and works in simple configuration scenarios. If you have a load balancer or reverse proxy, this value will likely not be suitable for referring to the server from a client perspective, e.g. in email templates. Example: *[https://my-octopus](https://my-octopus)* `Octopus.Web.ServerUri` The default URL at which the server portal can be accessed, as configured in the **Configuration ➜ Nodes** settings. Example: *[https://my-octopus](https://my-octopus)* ## Tracking deployment status {#tracking-deployment-status} During deployment, Octopus provides variables describing the status of each step. Where `S` is the step name, Octopus will set:

```powershell
Octopus.Step[S].Status.Code
Octopus.Step[S].Status.Error
Octopus.Step[S].Status.ErrorDetail
```

Status codes include `Pending`, `Skipped`, `Abandoned`, `Canceled`, `Running`, `Succeeded` and `Failed`.
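As a sketch, these status variables can be interpolated with Octopus's variable substitution syntax, for example in a notification step later in the process (`Website` is a hypothetical step name):

```
#{if Octopus.Step[Website].Status.Code == "Failed"}
Step failed: #{Octopus.Step[Website].Status.Error}
#{/if}
```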
For an action `A`:

```powershell
Octopus.Action[A].IsSkipped
```

For the deployment as a whole:

```powershell
Octopus.Deployment.Error
Octopus.Deployment.ErrorDetail
```

:::div{.hint} **Error detail returned** `Octopus.Deployment.Error` and `Octopus.Deployment.ErrorDetail` will only display the exit code and Octopus stack trace for the error. As we cannot parse the deployment log, we can only extract the exit/error codes. It cannot show detailed information on what caused the error. For full information on what happened when a deployment fails, you will need to reference the logs. ::: ## Runbook `Octopus.Runbook.Id` The ID of the runbook. Example: *Runbooks-123* `Octopus.Runbook.Name` The name of the runbook. Example: *Restore Database* `Octopus.RunbookRun.Created` The date and time at which the runbook was run. Example: *Friday, March 13, 2020 6:23:38 AM* `Octopus.RunbookRun.CreatedUtc` The date and time at which the runbook was run, in UTC format. Example: *3/13/20 6:23:38 AM +00:00* `Octopus.RunbookRun.Git.BranchName` The branch name if the runbook run was created from a branch. Example: *branch-abc* `Octopus.RunbookRun.Git.CommitHash` The commit hash used when creating this run on a version controlled runbook. Example: *14677f79e59df2a55e3904a7020fd14e96b8a1e9* `Octopus.RunbookRun.Git.Ref` The full git ref used when creating this run on a version controlled runbook. Example: *refs/heads/branch-abc* `Octopus.RunbookRun.Git.TagName` The tag name if the runbook run was created for a tag. Example: *v1.0.234* `Octopus.RunbookRun.Id` The ID of the run. Example: *RunbookRuns-123* `Octopus.RunbookRun.Name` The name of the run. Example: *Run on Production* `Octopus.RunbookSnapshot.Id` The ID of the snapshot being run. Example: *RunbookSnapshots-123* `Octopus.RunbookSnapshot.Name` The name of the snapshot. Example: *Snapshot EXAMPLE3* `Octopus.RunbookSnapshot.Notes` Notes associated with the snapshot, in Markdown format.
Example: *Restores the database* `Octopus.Web.RunbookSnapshotLink` A path relative to the Octopus Server URL at which the runbook snapshot can be viewed. Example: */app/snapshots/runbookSnapshots-123* `Octopus.Web.RunbookRunLink` A path relative to the Octopus Server URL at which the runbook run can be viewed. Example: */app/runs/runbookRuns-123* ## User-modifiable settings {#user-modifiable-settings} The following variables can be defined as variables in your project to modify the way Octopus behaves. `Octopus.Acquire.MaxParallelism` Maximum number of NuGet packages that should be downloaded at once when acquiring packages. Example: 3 `Octopus.Action.MaxParallelism` The maximum number of deployment targets on which the action will concurrently execute, and the maximum number of steps which will run in parallel. This value can be set in a project variable to change the default for the project. Additionally, you can scope a value to specific actions to control concurrency across your deployment targets. This is the same variable which is set when configuring a [rolling deployment](/docs/deployments/patterns/rolling-deployments-with-octopus). *(Number - Default: 10)*. Example: *5* `OctopusPrintVariables` Set to "True" to tell Tentacle to print the value of all variables passed to it. We recommend only using this setting for non-production environments. Example: True `OctopusPrintEvaluatedVariables` Set to "True" to tell Tentacle to print the value of all variables passed to it after evaluating them. We recommend only using this setting for non-production environments. Example: True `OctopusSkipFreeDiskSpaceCheck` Set to "True" to skip the check for available free disk space when deploying packages. Example: True `OctopusFreeDiskSpaceOverrideInMegaBytes` The amount (in megabytes) of available free disk space we should check for (overriding the default 500MB), failing the deployment if not enough free disk space is available. 
Example: 100 `OctopusShouldFailDeploymentOnSubstitutionFails` If set to "True", the deployment will fail if any variable substitution fails. This variable was added in Octopus 2025.1.0. Example: True `Octopus.Action.PowerShell.CustomPowerShellVersion` If specified, Windows PowerShell scripts will be invoked using `PowerShell.exe -version {Version}` where {Version} is the value you specified. Accepted values are *2.0*, *3.0*, *4.0*, and *5.0*.
PowerShell Core scripts will be invoked using the installed version of PowerShell Core which matches the specified value. The value must match one of the directories contained within `%PROGRAMFILES%\PowerShell`. Example values include *6* and *7-preview*. Example: 2.0 `OctopusDeleteScriptsOnCleanup` For packaged scripts, set to "False" to keep the PreDeploy/Deploy/PostDeploy scripts in the target directory (i.e. don't clean up). Example: False `Octopus.Action.Script.SuppressEnvironmentLogging` Suppresses the environment logging that occurs from scripts (e.g. PowerShell or Bash script environment variable logging). This only suppresses script logging and does not suppress the Octopus or Calamari environment logging. Example: True `Octopus.Action.PowerShell.ExecuteWithoutProfile` Set to `true` to not run the Tentacle service account's PowerShell profile script when running PowerShell script steps. Example: True `OctopusSuppressDuplicateVariableWarning` Set to `true` to have the duplicate variable message logged as verbose instead of warning. **Do this if you are aware of the duplication and it isn't causing any issues in your deployment**. Example: True `Octopus.Action.Package.RunScripts` Set to `false` to prevent scripts inside packages from executing. Example: False `Octopus.Calamari.CopyWorkingDirectoryIncludingKeyTo` Set to a file path and the Calamari working directory will be copied to the configured location. **Copied files include the one-time key to decrypt sensitive variables.** [More details](/docs/support/copy-working-directory). Example: `c:\temp\octopus-debug` `Octopus.Deployment.WorkerLeaseCap` This is an opt-in variable to help distribute multiple steps referencing the same package (including container) across a worker pool. By setting this, a worker will be reused for steps up to the cap, after which another worker will be selected and reused in the same way. If all workers have reached the cap, additional steps will be spread out evenly.
By default, this behavior is disabled, and the same worker will be reused for all steps referencing the same package. Opt in by setting the variable to a number higher than 0. Example: `1` - achieves a similar effect to round-robin. Example: `5` - a balance between reducing package transfer and distributing load. Note: This value applies to both deployment processes and runbooks, as long as it's scoped to the particular scenario. `Octopus.Task.ConcurrencyTag` Octopus will run one task at a time for a given concurrency tag. Set this variable to change which tasks share a tag, running them in parallel instead of serially, or serially instead of in parallel. For example, tenanted deployments run in parallel by default; removing the tenant from the concurrency tag will run them serially: `#{Octopus.Project.Id}/#{Octopus.Environment.Id}` Example: `#{Octopus.Deployment.Tenant.Id}/#{Octopus.Project.Id}/#{Octopus.Environment.Id}` ### Kubernetes `Octopus.Action.Kubernetes.LogCliOutputAsInfo` By default, successful output from Kubernetes CLI tools (`kubectl`, `helm`, `aws`, `az`, `gcloud`, etc.) is logged at the Verbose level, which is only visible when the task log level is set to Verbose. Set to `True` to promote this output to the Info level so it appears in the standard task log. This is useful when debugging deployments to see the full output of these tools without needing to switch the log level to Verbose for the entire deployment. Example: True ## Older versions {#older-versions} - `Octopus.Release.Git.BranchName`, `Octopus.Release.Git.CommitHash` and `Octopus.Release.Git.Ref` are available from Octopus Deploy **2021.3** onwards. - `Octopus.Web.ServerUri` is available from Octopus Deploy **2019.4.0** onwards. - `Octopus.Deployment.Tenant.Id`, `Octopus.Deployment.Tenant.Name` and `Octopus.Deployment.Tenant.Tags` are available from Octopus Deploy **3.4** onwards.
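The user-modifiable settings above are ordinary project variables, so in a Git project they can also be committed to *variables.ocl*. A minimal sketch (the values and the environment scoping shown here are illustrative, not recommendations):

```hcl
variable "Octopus.Action.MaxParallelism" {
    value "5" {}
}

variable "OctopusPrintVariables" {
    # Recommended for non-production environments only
    value "True" {
        environment = ["development"]
    }
}
```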
## Learn more - [Variable blog posts](https://octopus.com/blog/tag/variables/1) # Configuration as Code reference Source: https://octopus.com/docs/projects/version-control/config-as-code-reference.md The configuration as code feature enables you to save some project-level settings as files in a Git repository instead of SQL Server. The files are written in the OCL (Octopus Configuration Language) format. Storing resources as files lets you leverage version control features such as branching, pull requests, and reverting changes. In addition, you can save both your source code and how you deploy your code in the same Git repository. This page is a reference for how the config-as-code feature works. ## Octopus project level only The config-as-code feature stores Octopus Project resources in Git instead of SQL Server. ### Project resources version controlled Currently, the Project level resources saved to Git are: - Deployment Process - Deployment Settings - Release Versioning - Release Notes Template - Deployment Targets Required - Transient Deployment Targets - Deployment Changes Template - Default Failure Mode - Runbook Process - Runbook Settings - Multi-tenancy Mode - Run Retention Policy - Connectivity Policy - Variables (excluding Sensitive variables) :::div{.hint} Sensitive variables are still stored in the database. Regardless of the current branch, you will always see the same set of sensitive variables. ::: ### Project resources saved to SQL Server Currently, the Project level resources saved to SQL Server when version control is enabled are: - Channels - Triggers - Releases - Deployments - Sensitive Variables - General Settings - Project Name - Enabled / Disabled - Logo - Description - Project Group :::div{.hint} Sensitive Variables are planned for future releases of config-as-code. ::: ### Resources **not** version controlled by config-as-code The config-as-code feature manages project-level resources.
However, it is worth explicitly mentioning some things that are **not included**: - Infrastructure - Environments - Deployment Targets - Workers - Worker Pools - Machine Policies - Machine Proxies - Accounts - Tenants - Library - Certificates - External Feeds - Lifecycles - Packages - Build Information - Script Modules - Step Templates - Variable Sets - Server Configuration - Feature Flags - License - Node Settings (Task Cap and Server Uri) - Issue Tracker Settings - External Auth Provider Settings - SMTP - Spaces - Teams (both membership and role assignment) - Users - User Roles Currently, there are no plans to include these resources in the config-as-code feature. Several of the resources above can be put into version control using the [Octopus Terraform Provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest/docs). :::div{.hint} Resources managed by the Octopus Terraform Provider will have their state managed by Terraform. Resources managed by the Octopus config-as-code feature will have their state managed by Octopus Deploy. The two are not the same and shouldn't be treated as such. ::: ## Git configuration options Project version control settings can be accessed by clicking on the **Settings ➜ Version Control** link on the project navigation menu. ### Git repository The *Git Repository* field should contain the URL for the repository you wish the Octopus configuration to be persisted to, e.g. `https://github.com/OctopusSamples/OctoFX.git` :::div{.hint} Different VCS providers require different URL formats for the Git repository: some (e.g. GitLab) require the URL to include `.git` at the end, while others (e.g. Azure DevOps) do not support this. GitHub supports either format. The best option to get the correct URL is to go to the repository in your provider and copy the URL used for cloning the repository from there. ::: The repository must be initialized (i.e. contain at least one branch) prior to saving.
Octopus will convert the existing items in the project to OCL (Octopus Configuration Language) and save it to that repository when you click save. If the repository isn't initialized, that will fail. ### Authentication The config-as-code feature is designed to work with *any* Git repository. When configuring a project to be version-controlled, you can optionally provide credentials for authentication. :::div{.hint} Do not use credentials from a personal account. Select a shared or service account. When Octopus Deploy saves to your Git repo, you will typically see the message `[User Name] authored and [Service Account] committed on [Date].` ::: For the Password field, we recommend using a personal access token. We also recommend that you follow the principle of least privilege when selecting scopes or permissions to grant this personal access token. Git providers allow you to create an access token in different ways. The recommended *scope* for each provider is listed in brackets. - [GitHub - Creating a fine-grained personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#creating-a-fine-grained-personal-access-token); (Permission - `Contents - Read and Write`) - [GitHub - Creating a personal access token (Classic)](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#creating-a-personal-access-token-classic); (Scope - `repo`) - [Azure DevOps](https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate); (Scope - `vso.code_full`) - [GitLab](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html); (Scope - `write_repository`) - [BitBucket Server](https://confluence.atlassian.com/bitbucketserver063/personal-access-tokens-972354166.html); (Permission - `Project admin`) - [BitBucket Cloud - Use App 
Passwords](https://support.atlassian.com/bitbucket-cloud/docs/app-passwords/); (Permission - `Repositories - Read & Write`) :::div{.hint} Some VCS providers require that you use only a username and personal access token for authentication, not an email address (i.e. BitBucket). ::: #### BitBucket repository access tokens BitBucket's repository access tokens allow you to create repository-specific access tokens. For these to work with your Git repositories in Octopus, you must set the username to `x-token-auth`, and the password to the token. :::figure ![Screenshot of Octopus Version Control Settings page with Authentication section expanded. Username/password auth method is selected, the Username input field is highlighted with a bold red box, and contains the value x-token-auth](/docs/img/projects/version-control/octopus-bitbucket-repository-access-tokens.png) ::: ### File storage *Git File Storage Directory* specifies the path within the repository where the Octopus configuration will be stored. The default directory is `.octopus`, but that can be changed. If only a single Octopus project will be stored in the repo, we recommend putting the configuration directly under the `.octopus` directory. :::div{.hint} If multiple projects will be persisted to the repository, adding the project name to the path is the recommended convention, e.g. `./octopus/acme` ::: We recommend storing projects alongside the application code. While it is possible to store all your deployment projects in a single central repository with folders for each project, it will be challenging to manage as you add more projects. For example, if you have multiple component projects, one for Web UI, another for Web API, etc., but the source code is in one repository, then store all the component projects in that repository. If you move the application code later, you can also [move the deployment configuration](/docs/projects/version-control/moving-version-control) to keep it with the application. 
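With the multi-project convention described above, a repository holding two component projects might be laid out like this (a sketch; `web-ui` and `web-api` are hypothetical project names):

```
.octopus/
  web-ui/
    deployment_process.ocl
    deployment_settings.ocl
    variables.ocl
  web-api/
    deployment_process.ocl
    deployment_settings.ocl
    variables.ocl
```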
### Branch settings #### Default branch name The *Default Branch Name* is the branch on which the Octopus configuration will be written. It is also the default branch that will be used in various situations, for example: - When users view the project's deployment process for the first time in the Octopus UI, this is the initially selected branch - When creating releases, this will be the default branch selected - When running Runbooks, variable values will be pulled from this branch For existing initialized repositories, the default branch must exist. If the repository is new and uninitialized, Octopus will create the default branch automatically. :::div{.hint} When snapshotting a Runbook in a Git project that is not yet using config-as-code Runbooks, the variables will always be taken from the default branch. ::: #### Initial commit branch If the default branch is protected in your repository, select the *Is the default branch protected?* checkbox. This will allow you to use a different *Initial Commit Branch*. If this branch does not exist, Octopus will create the branch automatically. The Octopus configurations will be written to the initial commit branch instead of the default branch. You will need to merge the changes from this branch into the default branch outside of Octopus. #### Protected branches pattern You can also nominate protected branches for your Project. This will prevent users from making direct commits to the nominated branches from the Octopus UI and encourage them to create a new branch instead. To nominate protected branches, type in the name or a wildcard pattern in the Protected Branches Pattern field under Branch Settings. This will apply to all existing and future branches. ## Git repository cache Octopus Server caches Git repositories locally in the `Git` subdirectory of the server home directory. This cache has a default size limit of **20 GB**. 
When exceeded, Git operations such as creating releases, deploying, or converting projects to version control will fail with: > Unable to perform this operation as the local git cache is using more space than is available as set by the configured limit of 20 GB. Please use a smaller repository or remove any not in use. To change the limit, set the `OCTOPUS__Git__CacheFullThresholdMb` system environment variable to the desired value in **megabytes** (e.g. `30720` for 30 GB), then **restart the Octopus Server service**. Setting the value to `0` disables the limit. ## OCL files After successfully configuring a project to be version controlled, the specified Git repository will be populated with a set of Octopus Configuration Language (OCL) files. These files are created in the directory you define during setup, e.g. `.octopus/acme`. Currently, Octopus creates the following files and folders: - deployment_process.ocl - deployment_settings.ocl - variables.ocl - schema_version.ocl - runbooks/ The runbooks/ directory will contain runbook-name.ocl files for any published runbooks. The *deployment_process.ocl* file contains the configuration for your project's steps. Below is an example *deployment_process.ocl* for a project containing a single *Deploy a Package* step.
```hcl
step "deploy-a-package" {
    name = "Deploy a Package"

    properties = {
        Octopus.Action.TargetRoles = "web"
    }

    action {
        action_type = "Octopus.TentaclePackage"

        properties = {
            Octopus.Action.EnabledFeatures = ",Octopus.Features.ConfigurationTransforms,Octopus.Features.ConfigurationVariables"
            Octopus.Action.Package.AdditionalXmlConfigurationTransforms = "Web.Release.config => Web.config"
            Octopus.Action.Package.AutomaticallyRunConfigurationTransformationFiles = "True"
            Octopus.Action.Package.AutomaticallyUpdateAppSettingsAndConnectionStrings = "True"
            Octopus.Action.Package.DownloadOnTentacle = "False"
            Octopus.Action.Package.FeedId = "octopus-server-built-in"
            Octopus.Action.Package.PackageId = "webConfig"
        }

        worker_pool_variable = ""

        packages {
            acquisition_location = "Server"
            feed = "octopus-server-built-in"
            package_id = "webConfig"

            properties = {
                SelectionMode = "immediate"
            }
        }
    }
}
```

The *deployment_settings.ocl* file contains the configuration for the deployment settings associated with the deployment process. If using the default deployment process settings, the .ocl file will be mostly empty.

```hcl
connectivity_policy {
}

versioning_strategy {
}
```

However, configuring the settings via Octopus will populate the fields with their properties and values.
```hcl
default_guided_failure_mode = "On"
deployment_changes_template = <<-EOT
        #{each release in Octopus.Deployment.Changes}
        **Release #{release.Version}**
        #{release.ReleaseNotes}
        #{each workItem in release.WorkItems}
        - [#{workItem.Id}](#{workItem.LinkUrl}) - #{workItem.Description}
        #{/each}
        #{/each}
    EOT
release_notes_template = <<-EOT
        #{each workItem in Octopus.Release.WorkItems}
        - [#{workItem.Id}](#{workItem.LinkUrl}) - #{workItem.Description}
        #{/each}
    EOT

connectivity_policy {
    allow_deployments_to_no_targets = true
    exclude_unhealthy_targets = true
    skip_machine_behavior = "SkipUnavailableMachines"
    target_roles = ["Web"]
}

versioning_strategy {
    donor_package {
        step = "deploy-a-package"
    }
}
```

The *variables.ocl* file contains all non-sensitive variables for the project.

```hcl
variable "DatabaseName" {
    value "AU-BNE-TST-001" {
        environment = ["test"]
    }

    value "AU-BNE-DEV-001" {
        environment = ["development"]
    }

    value "AU-BNE-001" {
        environment = ["production"]
    }
}

variable "DeploymentPool" {
    type = "WorkerPool"

    value "non-production-pool" {}

    value "production-pool" {
        environment = ["production"]
    }
}
```

:::div{.hint} In Git projects, [Octopus will continue to apply variable permissions based on scopes](/docs/security/users-and-teams/security-and-un-scoped-variables) when interacting through the API and Portal. As these variables are written to a single text file, any user with access to the repository will have full access to all variables (regardless of scoping). ::: ## Slugs in OCL The following resources will be referenced via their slug: - Account - Channel - Deployment Action - Deployment Step - Deployment Target - Environment - Feed - Lifecycle - Team - Worker Pool All other resources will be referenced from OCL via their ID. We plan on growing this list to include more resources in the future as we introduce slugs into more places throughout Octopus.
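As a sketch of what slug references look like in practice, an action in *deployment_process.ocl* might point at shared resources like this (the slugs shown are hypothetical; attribute names follow the earlier *deployment_process.ocl* example, and the `environments` scoping attribute is an assumption):

```hcl
action {
    action_type = "Octopus.TentaclePackage"

    # "production" is the Environment slug, not an ID such as Environments-42
    environments = ["production"]

    packages {
        # "octopus-server-built-in" is the Feed slug, not an ID such as Feeds-1
        feed = "octopus-server-built-in"
        package_id = "webConfig"
    }
}
```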
## Items of note When designing the config-as-code feature, we made several decisions to keep an appropriate balance of usability and functionality. There are a few limitations and items of note you should be aware of with config-as-code. - The Octopus Terraform Provider and OCL are not a 1:1 match. You cannot copy resources between the two and expect everything to work. We want to narrow the gap as much as possible, but as of right now, a gap exists. - Octopus currently only supports connecting to Git repositories over HTTPS, not SSH. - Shared resources (environments, external feeds, channels, etc.) are referenced by their slug from OCL. The API, however, will still use IDs. - Shared resources referenced in OCL that no longer exist in Octopus Server will result in an error when loading through the portal or API. The error message should indicate which reference is no longer valid and must be updated or removed before the file can be loaded again. - Shared resources must exist before loading an OCL file into Octopus Deploy. This means that if you copy the OCL files from one Git repo to another and point a new project at those files, any shared resource must exist before creating that project. This only applies when projects are in different spaces or on different instances. If the resources do not exist, an error message will appear. - Pointing multiple projects to the same folder in the same Git repo is unsupported. Please see our [unsupported config as code scenarios](/docs/projects/version-control/unsupported-config-as-code-scenarios) for more information. - Converting a project to be version-controlled is a one-way process. At this time, you cannot convert back. ## Older versions - Prior to version 2022.3.4517, Git projects would reference shared resources using their name in OCL. This had a side effect of causing API responses for Git projects to contain names instead of IDs.
From version 2022.3.4517 onwards, a handful of resources are referenced from OCL by their slug. IDs will be used in API responses instead of names. # Jira issue tracking Source: https://octopus.com/docs/releases/issue-tracking/jira.md Octopus integrates with Jira issues. The integration includes the ability to: - Automatically add links to Jira issues in your Octopus releases and deployments. - View release and deployment details from Jira issues (Jira Cloud only). ## How Jira integration works {#how-jira-integration-works} :::figure ![Octopus Jira integration - how it works diagram](/docs/img/releases/issue-tracking/images/octo-jira-how-it-works.png) ::: 1. When you commit code, add a commit message containing one or more [Jira issue references](https://confluence.atlassian.com/adminjiracloud/integrating-with-development-tools-776636216.html). 2. The Octopus Deploy [plugin](/docs/packaging-applications/build-servers) for your build server [pushes the commits to Octopus](/docs/packaging-applications/build-servers/build-information/#passing-build-information-to-octopus). These are associated with a package ID and version (The package can be in the built-in Octopus repository or an external repository). 3. The Jira issue-tracker extension in Octopus parses the commit messages and recognizes the issue references. 4. When creating the release which contains the package version, the issues are associated with the release. These are available for use in [release notes](/docs/packaging-applications/build-servers/build-information/#build-info-in-release-notes), and will be visible on [deployments](/docs/releases/deployment-changes). 5. As the release is deployed to each environment, Octopus notifies Jira to update the issue. :::div{.hint} From 2025.3 the Jira issue-tracker extension in Octopus will parse both commit messages and branch names for Jira issue references. 
::: :::figure ![Octopus release with Jira issues](/docs/img/releases/issue-tracking/images/octo-jira-release-details.png) ::: :::figure ![Octopus deployment with generated release notes](/docs/img/releases/issue-tracking/images/octo-jira-release-notes.png) ::: ### Availability {#availability} The ability to push the build information to Octopus, which is required for Jira integration, is currently only available in the official Octopus plugins: - [JetBrains TeamCity](https://plugins.jetbrains.com/plugin/9038-octopus-deploy-integration) - [Atlassian Bamboo](https://marketplace.atlassian.com/apps/1217235/octopus-deploy-bamboo-add-on?hosting=server&tab=overview) - [Azure DevOps](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) - [Jenkins Octopus Deploy Plugin](https://plugins.jenkins.io/octopusdeploy/) - [GitHub Actions](https://github.com/marketplace/actions/push-build-information-to-octopus-deploy) ### Jira Cloud only The ability to update Jira issues with deployment information (i.e. step 5 above) is only available for Jira Cloud. This is a Jira limitation; the [deployment module](https://developer.atlassian.com/cloud/jira/software/modules/deployment/) is not available for Jira Server. ## Configuring Jira integration The following steps explain how to integrate Octopus with Jira. 1. [Configure your build server to push build information to Octopus.](#configure-your-build-server) This is required to allow Octopus to know which issues are associated with a release. 2. [Configure the Jira connection in Octopus Deploy.](#connect-octopus-to-jira) ## Configure your build server to push build information to Octopus {#configure-your-build-server} To integrate with Jira issues, Octopus needs to understand which issues are associated with a [release](/docs/releases). Octopus does this by inspecting commit messages and branch names associated with any packages contained in the release. To supply the commit messages: 1.
Install one of our official [build server plugins](#availability) with support for our build information step. 2. Update your build process to add and configure the [Octopus Build Information step](/docs/packaging-applications/build-servers/build-information/#build-information-step). ## Connect Octopus to Jira {#connect-octopus-to-jira} This section describes how to configure Octopus Deploy to connect to Jira. Any Octopus instance, self-hosted or cloud-hosted, can be configured to use the Jira integration. The only network connectivity requirements are that your Octopus Server and your browser can connect to the Jira instance. Jira will never actively attempt to connect to Octopus. The process is slightly different depending on whether you are connecting to [Jira Cloud](#connecting-jira-cloud-and-octopus) or [Jira Server](#connecting-jira-server-and-octopus). ### Connecting Jira Cloud and Octopus Deploy {#connecting-jira-cloud-and-octopus} If you are using Jira Cloud, you can use the Octopus Deploy plugin for Jira, available from the [Atlassian Marketplace](https://marketplace.atlassian.com/apps/1220376/octopus-deploy-for-jira), to enable teams to view release and deployment details from Octopus directly in Jira issues. This section and the following steps describe how to configure the plugin. This process is for Jira Cloud; if you are using Jira Server, see [Connecting Jira Server and Octopus Deploy](#connecting-jira-server-and-octopus). :::figure ![Jira Issue with deployments](/docs/img/releases/issue-tracking/images/jira-issue-with-deployments.png) ::: 1. Install the Octopus Deploy plugin in your Jira Cloud instance. From the Atlassian Marketplace, add the [Octopus Deploy for Jira](https://marketplace.atlassian.com/apps/1220376/octopus-deploy-for-jira) app and click 'Get Started' to configure it. Alternatively, the app is also available in Jira by navigating to **Jira Settings ➜ Find new apps**.
:::div{.warning} **Safari and third-party cookies in an `iframe`** Please note: The Octopus Deploy for Jira plugin uses cookies, and Safari, by default, discards cookies set in an `iframe` unless the host that's serving the `iframe` has set a cookie before, outside the `iframe`. In our case, this does not happen. As a result, attempting to configure our plugin using Safari will fail with a blank screen and an HTTP 400 error in the network tab. Due to this limitation, we recommend using a different browser that allows third-party cookies in an `iframe`. ::: Note: Keep this configuration page open while you complete the next steps, as you need to copy values between Octopus and Jira. 2. Configure the Jira extension in Octopus Deploy. In the Octopus Web Portal, navigate to **Configuration ➜ Settings ➜ Jira Integration** and copy the following values from the Jira App configuration page: - **Jira Base URL**. This tells Octopus where your Jira instance is located and enables Octopus to render the links back to Jira issues. e.g., `https://your-jira-instance.atlassian.net`. - **Jira Connect App Password**. Ensure the **Is Enabled** property is set. 3. In Octopus Deploy, configure the Release Note Options. - **Jira username/password**: Set these values to allow Octopus to connect to Jira and retrieve the Jira issue (work item) title when viewing packages or creating releases. If these are not provided, work items will not be displayed when viewing packages or creating releases. :::div{.warning} **Jira Cloud only supports API tokens** Please note: Jira Cloud only supports an **API Token** for authentication. An API token should be entered, rather than an actual password. You can create one from an Atlassian account in the **Security** area. ::: :::div{.warning} **Scoped (granular) API tokens are not supported** Octopus only supports classic (unscoped) Jira API tokens.
Jira Cloud scoped API tokens use a different API base URL (`https://api.atlassian.com/ex/jira//rest/api`) and authentication method that Octopus does not currently handle. When creating your API token, ensure you create a classic API token without granular scopes. ::: - **Release Note Prefix *(optional)***: If specified, Octopus will look for a comment that starts with the given prefix text and use whatever text appears after the prefix as the release note, which will be available in the [build information](/docs/packaging-applications/build-servers/build-information) as the issue's description. If no comment is found with the prefix, then Octopus will default back to using the title for that issue. For example, a prefix of `Release note:` can be used to identify a customer-friendly issue title vs. a technical feature or bug fix title. 4. Ensure the Octopus Server URL is set. If you are using Octopus Cloud, this value is automatically set for you. If you are not using Octopus Cloud, navigate to the **Configuration ➜ Nodes** page and ensure you have set the Server URI field to your Octopus Server's base URL. e.g., `https://my-company.octopus.app` or `https://my-company-internal-name` Note: Octopus passes this value to Jira so it can build hyperlinks back to the deployments from the Jira UI. It never actually tries to connect to this URL itself. 5. Configure the Octopus plugin in Jira. Navigate to the **Configuration ➜ Settings ➜ Jira Integration** page in Octopus, copy the **Octopus InstallationID**, and add it to the Jira App configuration. 6. In Octopus Deploy, update your environment settings. Navigate to **Infrastructure ➜ Environments** to map your Octopus environments to Jira environment types. This is required so Jira can understand Octopus environments and track issue progress. Note: Jira environment types are a fixed list that cannot be edited.
When configured, this integration will provide Jira with updates about the progress of Jira issues (work items) through the pipeline. ### Connecting Jira Server and Octopus Deploy {#connecting-jira-server-and-octopus} This process is for Jira Server; if you are using Jira Cloud, see [Connecting Jira Cloud and Octopus Deploy](#connecting-jira-cloud-and-octopus). 1. Configure the Jira extension in Octopus Deploy. In the Octopus Web Portal, navigate to **Configuration ➜ Settings ➜ Jira Integration** and enter the following values for your Jira instance: - **Jira Base URL**. This tells Octopus where your Jira instance is located and enables Octopus to render the links back to Jira issues. e.g., `https://your-internal-jira-instance/` Ensure the **Is Enabled** property is set. 2. In Octopus Deploy, configure the Release Note Options. - **Jira username/password**: Set these values to allow Octopus to connect to Jira and retrieve the Jira issue (work item) title when viewing packages or creating releases. Note that if these credentials are not provided, work items will not be displayed when viewing packages or creating releases. :::div{.warning} **Jira Server does not support API tokens** Please note: Jira Server does not support API tokens, so a username and password must be entered. ::: - **Release Note Prefix *(optional)***: If specified, Octopus will look for a comment that starts with the given prefix text and use whatever text appears after the prefix as the release note, which will be available in the [build information](/docs/packaging-applications/build-servers/build-information) as the issue's description. If no comment is found with the prefix, then Octopus will default back to using the title for that issue. For example, a prefix of `Release note:` can be used to identify a customer-friendly issue title vs. a technical feature or bug fix title.
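The **Release Note Prefix** behavior described above is plain prefix matching on Jira comment text. As a rough, hypothetical sketch of the rule (the prefix and comment values below are illustrative only, not anything Octopus ships):

```shell
# Hypothetical illustration of the Release Note Prefix rule:
# whatever follows the configured prefix becomes the release note.
prefix="Release note:"
comment="Release note: Customers can now pay with saved cards"

# Strip the prefix and any whitespace after it
printf '%s\n' "$comment" | sed -n "s/^$prefix[[:space:]]*//p"
# → Customers can now pay with saved cards
```

If no comment matches the prefix, Octopus falls back to the issue title, as noted above.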
When configured, this integration will retrieve Jira issue details and add details to your releases and deployments and generate release notes automatically. ### Test the integration You can verify a connection can be made successfully between the Octopus Server and your Jira Cloud/Server instance. The **Connect App** `Test` button (found under `Jira Connect App Password`) checks the connectivity for pushing deployment data to your Jira Cloud instance. ![Connect App Test button](/docs/img/releases/issue-tracking/images/jiraconnectapp_testbutton.png) :::div{.hint} For this connectivity test to succeed the Octopus Server must be able to connect to both your Jira Cloud instance's URL, and to [https://jiraconnectapp.octopus.com](https://jiraconnectapp.octopus.com), which hosts our Jira plugin. ::: The **Release Notes** `Test` button (found under `Jira Password`) checks the connectivity to your Jira Cloud/Server instance for retrieving work item information. ![Release Notes Test button](/docs/img/releases/issue-tracking/images/jirareleasenotes_testbutton.png) :::div{.hint} For this connectivity test to succeed the Octopus Server must be able to connect to your Jira Cloud/Server instance's URL. ::: ### Deployments When the Jira Integration is enabled and configured with Connect App settings, you will see blocks similar to the following appear in the log during your deployments. These show the state updates Octopus is sending through to Jira, and if you expand them the details include information about the Jira issues for traceability. :::figure ![Deployment task log](/docs/img/releases/issue-tracking/images/deploy-task-log-green.png) ::: :::div{.hint} You must [configure your build server](#configure-your-build-server) to push commit information to Octopus. Without this, Octopus will not attempt to update Jira issues. ::: The following illustrates Octopus attempting to send an *in_progress*, and then a *successful*, state change to Jira. 
In this example, Octopus was unable to connect to Jira or send the state change; however, this does not impact the Octopus deployment itself, and the deployment will still be considered successful. :::figure ![Deployment task log with warnings](/docs/img/releases/issue-tracking/images/deploy-task-log.png) ::: ## Troubleshooting {#troubleshooting} If you're running into issues with the Jira Integration, it may be one of the common problems covered here. If it's still not working quite right, [we are here to help!](https://octopus.com/support) ### Issues after upgrading the Jira Plugin {#troubleshooting-jira-plugin-upgrades} :::div{.warning} **Change of functionality resulting in data loss** Please note: The reinstallation of the plugin has worked in the past to restore functionality of the integration. From April 2022, performing the below step will remove all historical deployment information from Jira Cloud. We are waiting on information from Atlassian to confirm whether this is a permanent or temporary change. Please contact [support](https://octopus.com/support) for the latest information regarding this issue. ::: If you find a previously working Jira integration has stopped working after upgrading the Jira plugin, you may need to uninstall and reinstall the Jira plugin from the Atlassian marketplace. During configuration of the reinstalled Jira plugin, you will be provided with a new Jira Connect Password, which will need to be entered into the Jira Settings page on your Octopus Server. ### Map Jira environments to Octopus environments {#troubleshooting-map-your-environments} If your deployments aren't being displayed in Jira, double-check that your Octopus environments are correctly mapped to your Jira environments. Navigate to **Infrastructure ➜ Environments**, and next to each environment click on the overflow menu (`...`) and click **Edit**.
From here, you can map each Octopus environment to your corresponding Jira environment. ### Ensure casing on issue/work item IDs match {#troubleshooting-check-case-on-ids} The commits that are pushed to Octopus as build information need to use the exact same case as the issue/work item found in Jira. For example, if the work item in Jira is `OBJ-123`, but your commit message or branch name includes the work item as `obj-123` (notice the lower-case value), you will need to correct the case in your commits. This will allow the deployment status update to appear in Jira successfully. ### Push build information before creating a release {#troubleshooting-push-build-info-first} If you push build information to Octopus after a release is created, the build information won't be included in the release. This is because the information is included in the release snapshot. To ensure your release contains any build information, push the build information *before* you create a release. If you have a [built-in package repository trigger](/docs/projects/project-triggers/built-in-package-repository-triggers) (formerly Automatic release creation) enabled for a specific package step, you will need to push build information *before* you push the configured package to the built-in repository. ### Check the entire package ID {#troubleshooting-check-the-entire-package-id} If you find your work items or other build information aren't showing up in your releases, make sure the package ID as shown in the release exactly matches the one found in the **Library ➜ Build Information** section. Some package ID values, particularly those found in external feeds, must include the repository. For example, if you were pushing build information for the Docker image `octopusdeploy/worker-tools`, the value for the package ID needs to include the repository name of `octopusdeploy/` as well as the name of the Docker image, not just `worker-tools`.
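Both of the troubleshooting checks above come down to exact, case-sensitive string matching. A small, hypothetical illustration of why `obj-123` fails where `OBJ-123` succeeds (the regular expression below is an approximation of a Jira-style issue key, not Octopus's actual matcher):

```shell
# Only the correctly-cased key matches a Jira-style key pattern;
# the lower-case variant is silently ignored.
printf 'OBJ-123\nobj-123\n' | grep -E '^[A-Z][A-Z0-9]+-[0-9]+$'
# prints only: OBJ-123
```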
### Check the package ID is not dynamically generated {#troubleshooting-check-dynamic-package-id} Build information and work items may not appear in a release or deployment if you [dynamically select a package ID at deployment time](/docs/deployments/packages/dynamically-selecting-packages). See the [dynamic package tradeoffs](/docs/deployments/packages/dynamically-selecting-packages#dynamic-packages-and-issue-trackers) section for more information. ## Learn more - [Jira blog posts](https://octopus.com/blog/tag/jira/1) # Config as Code runbooks Source: https://octopus.com/docs/runbooks/config-as-code-runbooks.md Config as Code (or CaC) Runbooks stores your runbook process as code in your project repository. This means that you can now use version control to track changes to your runbook processes alongside changes to your project code. You may already be using CaC to store your Octopus deployment process, deployment settings and non-sensitive variables. Adding CaC Runbooks completes that picture. The configuration for your runbooks, as well as your deployment process, settings and variables, are stored as OCL files which you can edit in Octopus or directly in your IDE. Using CaC Runbooks is currently optional. You can also choose to keep using Runbooks as you always have on your existing version controlled projects and on projects without version control. ## CaC Runbooks on a new version controlled project If you're [creating a new version controlled project](/docs/projects/version-control/converting#creating-a-new-version-controlled-project) or [adding version control to an existing project](/docs/projects/version-control/converting#configuring-an-existing-project-to-use-git), CaC Runbooks will be automatically enabled on your project. ## CaC Runbooks on an existing version controlled project :::div{.info} Converting a project to use CaC Runbooks is a one-way change. Once you convert a project, you **cannot** convert it back. 
Please make sure you want to do this, and consider cloning your project to test how it works, so that you know what to expect before converting important projects. ::: You can migrate an existing version controlled project to use CaC Runbooks by clicking on the 'Store Runbooks in Git' banner at the top of the **Runbooks** page of your project and following the prompts. Once that's done, you should see a branch selector at the top of the **Runbooks** page, and a new 'runbooks/' directory in your repository alongside your existing OCL files. (See the '.octopus/' directory of your project repository.) ### Troubleshooting **Slug-related errors during migration** Published runbook snapshots must have unique step slugs. Step slugs were added to Octopus in 2022. If you published snapshots before this feature was added, those snapshots may contain empty or duplicate slugs. To identify and fix these issues: 1. Use these scripts to check for problematic slugs in your runbooks: - [Check for blank slugs in runbook snapshots](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/REST/PowerShell/Runbooks/CheckForBlankSlugsInFrozenSnapshots.ps1) - [Check for duplicate slugs in runbook snapshots](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/REST/PowerShell/Runbooks/CheckForDuplicateSlugs.ps1) 2. For any affected runbooks, ensure all steps have unique slugs 3. Publish new snapshots of the affected runbooks 4. Retry the runbook migration ## Drafts vs branches One of the exciting things about CaC is that it allows you to edit your runbooks over as many branches as you would like, creating as many copies of each runbook as you have branches. This means that we no longer need 'draft' runbooks. When you convert your project to use CaC Runbooks, only published runbooks will be available in Octopus as CaC Runbooks. Draft runbooks will still be converted to code.
They can be found in the 'runbooks/migrated-drafts/' directory alongside the other runbook OCL files in your 'runbooks/' directory. To access your draft runbooks in Octopus, you can simply move their OCL files up to the 'runbooks/' folder. But first, it's important to consider how CaC Runbooks uses branches to handle permissions. ## Permissions by branch When converting your project to CaC, you specify a 'default' branch to contain the approved versions of your OCL files. Other branches can be thought of as containing restricted versions of your runbooks. These may be unfinished runbooks or runbooks which you want to place extra permissions around. You also have the option to specify 'protected' branches. Protected branches cannot be changed from within Octopus. Consider marking as protected any branch that you would normally update through a PR review process, including your default branch. Octopus provides [two built-in roles](/docs/runbooks/runbook-permissions) to help you to manage permissions around editing and running runbooks: 'Runbook Consumer' and 'Runbook Producer'. #### Runbook Consumer: - Non-CaC Runbooks - Runbook Consumers cannot edit runbooks and can only run published runbooks. - CaC Runbooks - Runbook Consumers cannot edit runbooks and can only run runbooks from the latest commit on the default branch. #### Runbook Producer: - Non-CaC Runbooks - Runbook Producers can edit and run both draft and published runbooks. - CaC Runbooks - Runbook Producers can edit runbooks on any unprotected branches and can run runbooks from any commit on any branch. Assuming your default branch is protected, this means that your old 'published' runbooks are equivalent to the runbooks in the latest commit on your default branch, and your old 'draft' runbooks are equivalent to the runbooks on any other commit on any branch.
💡 If you are using Octopus's built-in roles, keep these permissions in mind when moving your draft runbooks out of the 'migrated-drafts/' folder, and consider storing these runbooks on a non-default branch. ## Snapshots vs commits Another exciting thing about CaC Runbooks is that every revision to your runbook process, settings and variables is captured in your commit history. This means that you can now re-run any previous version of your runbook without the need for snapshots. To re-run a previous version of a CaC Runbook, simply enter a commit hash on the branch selector at the top of the **Runbooks** page, then run from that commit. If you are using the Octopus built-in roles, this will require the Runbook Producer role. The information that was previously found on the **Snapshot** page is still available on the **Details** tab of each runbook run. 💡 Rather than setting package versions in your runbook snapshots, you can now specify fixed package versions inside your runbook steps and store these in CaC alongside the rest of your runbook process. ## Scheduled triggers [Runbook triggers](/docs/runbooks/scheduled-runbook-trigger) will always run CaC Runbooks from the latest commit on your default branch, just as non-CaC runbook triggers will only run published runbooks. :::div{.hint} If you have steps that use packages in your runbook process, we only support getting the latest non-prerelease versions. To use prerelease packages, you would need to hard-code the version on individual steps. ::: ## Custom automated scripts If you use automated scripts that run runbooks via the Octopus Server API and you convert your runbooks to Config as Code, the URL for the runbook will change to include a branch reference (e.g. `refs/heads/main`). As a result, you need to update your scripts to include the branch reference where the runbook is stored.
- [PowerShell example](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/REST/PowerShell/Runbooks/RunConfigAsCodeRunbook.ps1) ## Deleting required resources Once your Runbooks are version controlled, it's up to you to take care to avoid deleting any Octopus resources required by your Runbooks. See our [core design decisions](/docs/projects/version-control/unsupported-config-as-code-scenarios#core-design-decision) for more information. # Change AWS load balancer target group Source: https://octopus.com/docs/runbooks/runbook-examples/aws/change-load-balancer-group.md AWS [Elastic Load Balancing (ELB)](https://aws.amazon.com/elasticloadbalancing/) offers the ability to load balance traffic across AWS and on-premises resources using the same load balancer. Using a runbook, Octopus makes it easy to provide an automated method for modifying an AWS Elastic load balancer. This is particularly useful if you are deploying using the [blue-green](/docs/deployments/patterns/blue-green-deployments-with-octopus) deployment pattern, as you can change the load balancer automatically to direct traffic to a different set of servers when you switch to your new active environment. In this example, we'll swap out servers that are being used in an AWS Elastic load-balancer by modifying the configured listener to forward traffic to a new target group. ## Create the runbook :::div{.hint} This example assumes that you already have an ELB configured with a [listener](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html), and at least one [target group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html). These resources will be needed for the runbook. ::: 1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 1. Give the runbook a name and click **SAVE**. 1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 1. 
Add a **Run an AWS CLI script** step, and give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. Choose whether to use the bundled **AWS Tools**, or the ones pre-installed on the worker. 1. Choose the **AWS Account** to use. 1. In the **Amazon Web Services Account** section select the variable that references the **AWS Account** or choose to execute using a service role assigned to the EC2 instance. If you don't have an **AWS Account Variable** yet, check our [documentation on how to create one](/docs/projects/variables/aws-account-variables). :::figure ![AWS Account](/docs/img/runbooks/runbook-examples/aws/images/step-aws-account.png) ::: The supplied account can optionally be used to assume a different AWS service role. This can be used to run the AWS commands with a role that limits the services that can be affected. :::figure ![AWS Role](/docs/img/runbooks/runbook-examples/aws/images/step-aws-role.png) ::: :::div{.hint} If you select **Yes** to **Execute using the AWS service role for an EC2 instance**, you do not need an AWS account or account variable. Instead, the AWS service role for the EC2 instance executing the deployment will be used. See the [AWS documentation](https://oc.to/AwsDocsRolesTermsAndConcepts) for more information on service roles. ::: 9. In the **Inline source code** section, add the following code as a **PowerShell** script: ```powershell $listenerArn = $OctopusParameters["Project.AWS.ALB.ListenerArn"] $targetGroup = $OctopusParameters["Project.AWS.ALB.TargetArn"] Write-Host "Modifying AWS ELB listener: $listenerArn to forward to targetGroup: $targetGroup" aws elbv2 modify-listener --listener-arn $listenerArn --default-actions Type=forward,TargetGroupArn=$targetGroup ``` The script will modify the ELB listener specified in the `Project.AWS.ALB.ListenerArn` variable to forward traffic to the target group specified in the `Project.AWS.ALB.TargetArn` variable. 
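For clarity, the `--default-actions` value the script passes to the AWS CLI is just a `Key=Value` shorthand string. A hedged sketch of how it is assembled (the target group ARN below is a made-up placeholder; in the runbook the value comes from the `Project.AWS.ALB.TargetArn` variable):

```shell
# Hypothetical sketch: building the --default-actions argument.
# The ARN is a placeholder for illustration only.
targetGroup="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/green/abc123"
defaultActions="Type=forward,TargetGroupArn=$targetGroup"
echo "$defaultActions"

# The step then runs, roughly:
# aws elbv2 modify-listener --listener-arn "$listenerArn" --default-actions "$defaultActions"
```

After the swap, `aws elbv2 describe-listeners --listener-arns <listener-arn>` can be used to confirm that the listener's default action now points at the new target group.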
Configure any other settings for the step and click **Save**, and in just a few steps, we've created a runbook to automate the modification of an AWS Elastic load balancer to change its target group. ## Samples We have a [Pattern - Blue-Green](https://oc.to/PatternBlueGreenSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this runbook example named `Change Production Group` in the `Random Quotes Java` project. # Provision an Azure App Service Source: https://octopus.com/docs/runbooks/runbook-examples/azure/provision-app-service.md One of the most convenient aspects of Platform as a Service (PaaS) is the ability to spin up and tear down resources quickly. This ability can be used for a number of different reasons: feature branching, testing, cost savings, etc. You can use runbooks in Octopus to spin up resources in Azure. To provision an Azure App Service, there are a couple of things that need to be in place: - Resource group - App Service Plan :::div{.hint} We recommend grouping the resources for testing a feature branch into their own Azure Resource Group. Doing this makes it easier to make sure you destroy all the resources you created by simply deleting the resource group itself. ::: ## Create the runbook :::div{.hint} A quick way to create the App Service Plan is to use the Azure Portal UI to begin the creation process, export the App Service Plan as an Azure Resource Manager (ARM) template, and use that as a basis to start from. ::: 1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 2. Give the runbook a name and click **SAVE**. 3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 4. Add a **Run an Azure script** step. 5.
Create an Azure Resource Group using the following code: ```powershell $resourceGroupName = $OctopusParameters["Azure.ResourceGroup.Name"] $resourceGroupLocation = $OctopusParameters["Azure.Location.Abbr"] if ((az group exists --name $resourceGroupName) -eq $false) { Write-Output "Creating resource group $resourceGroupName in $resourceGroupLocation" az group create --location $resourceGroupLocation --name $resourceGroupName --tags "Space=#{Octopus.Space.Name}" "Environment=Space" } ``` 6. Add a **Deploy an Azure Resource Manager Template** step. 7. Add the template code (example below): ``` { "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "name": { "type": "string" }, "location": { "type": "string" }, "sku": { "type": "string" }, "skucode": { "type": "string" }, "workerSize": { "type": "string" }, "workerSizeId": { "type": "string" }, "numberOfWorkers": { "type": "string" } }, "resources": [ { "apiVersion": "2018-11-01", "name": "[parameters('name')]", "type": "Microsoft.Web/serverfarms", "location": "[parameters('location')]", "kind": "", "tags": {}, "properties": { "name": "[parameters('name')]", "workerSize": "[parameters('workerSize')]", "workerSizeId": "[parameters('workerSizeId')]", "numberOfWorkers": "[parameters('numberOfWorkers')]", "reserved": false }, "sku": { "Tier": "[parameters('sku')]", "Name": "[parameters('skuCode')]" } } ] } ``` Fill in the parameters from the template: | Parameter | Description | Example | | ------------- | ------------- | ------------- | | name | Name of the App Service Plan | ASP-#{Octopus.Space.Name} | | location | The region the service plan will be in | centralus | | sku | The SKU name for your plan | Free | | skucode | The SKU code for the plan | F1 | | workerSize | Scaling worker size | 0 | | workerSizeId | Scaling worker size Id | 0 | | numberOfWorkers | Number of workers | 1 | With the Resource Group and App Service Plan created, 
you can create an Azure Web App target. 8. Add a **Deploy an Azure Resource Manager Template** step. 9. Add the template code (example below): ``` { "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "name": { "type": "string" }, "location": { "type": "string" }, "hostingPlanName": { "type": "string" }, "serverFarmResourceGroup": { "type": "string" }, "alwaysOn": { "type": "bool" }, "currentStack": { "type": "string" }, "phpVersion": { "type": "string" }, "errorLink": { "type": "string" } }, "resources": [ { "apiVersion": "2018-11-01", "name": "[parameters('name')]", "type": "Microsoft.Web/sites", "location": "[parameters('location')]", "tags": {}, "dependsOn": [], "properties": { "name": "[parameters('name')]", "siteConfig": { "appSettings": [ { "name": "ANCM_ADDITIONAL_ERROR_PAGE_LINK", "value": "[parameters('errorLink')]" } ], "metadata": [ { "name": "CURRENT_STACK", "value": "[parameters('currentStack')]" } ], "phpVersion": "[parameters('phpVersion')]", "alwaysOn": "[parameters('alwaysOn')]" }, "serverFarmId": "[concat('/subscriptions/', subscription().subscriptionId,'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]", "clientAffinityEnabled": true } } ] } ``` Fill in the parameters from the template: | Parameter | Description | Example | | ------------- | ------------- | ------------- | | name | Name of the web app | OctPetShop-Web | | location | Region of the web app | centralus | | hostingPlanName | Name of the hosting plan to use | ASP-#{Octopus.Space.Name} (name from above) | | serverFarmResourceGroup | Name of the resource group to use | #{Azure.ResourceGroup.Name} | | alwaysOn | Whether you want to configure Always On | false | | currentStack | Name of the stack to use | dotnetcore | | phpVersion | Version of PHP | OFF | | errorLink | Uri of the error link | 
https://s-octopetshop.scm.azurewebsites.net/detectors?type=tools&name=eventviewer | 10. Add a **Run a script** step to register the Azure Web App as a target: ```powershell # Define parameters $baseUrl = $OctopusParameters['Global.Base.Url'] $apiKey = $OctopusParameters['Global.Api.Key'] $spaceId = $OctopusParameters['Octopus.Space.Id'] $spaceName = $OctopusParameters['Octopus.Space.Name'] $environmentName = $OctopusParameters['Octopus.Environment.Name'] $environmentId = $OctopusParameters['Octopus.Environment.Id'] $azureAccount = $OctopusParameters['Azure.Account.Name'] $name = $OctopusParameters['Project.WebApp.Name'] $resourceGroupName = $OctopusParameters['Azure.ResourceGroup.Name'] # Get default machine policy $machinePolicy = (Invoke-RestMethod -Method Get -Uri "$baseUrl/api/$spaceId/machinepolicies/all" -Headers @{"X-Octopus-ApiKey"="$apiKey"}) | Where-Object {$_.Name -eq "Default Machine Policy"} # Build JSON payload $jsonPayload = @{ Id = $null MachinePolicyId = $machinePolicy.Id Name = $name IsDisabled = $false HealthStatus = "Unknown" HasLatestCalamari = $true StatusSummary = $null IsInProcess = $true EndPoint = @{ Id = $null CommunicationStyle = "AzureWebApp" Links = $null AccountId = $azureAccount ResourceGroupName = $resourceGroupName WebAppName = $name } Links = $null TenantedDeploymentParticipation = "Untenanted" Roles = @( "OctoPetShop-Web" ) EnvironmentIds = @( $environmentId ) TenantIds = @() TenantTags = @() } # Register the target to Octopus Deploy Invoke-RestMethod -Method Post -Uri "$baseUrl/api/$spaceId/machines" -Headers @{"X-Octopus-ApiKey"="$apiKey"} -Body ($jsonPayload | ConvertTo-Json -Depth 10) ``` 11. 
Add another **Run a script** step to force a health check: ```powershell # Define parameters $baseUrl = $OctopusParameters['Global.Base.Url'] $apiKey = $OctopusParameters['Global.Api.Key'] $spaceId = $OctopusParameters['Octopus.Space.Id'] $spaceName = $OctopusParameters['Octopus.Space.Name'] $environmentName = $OctopusParameters['Octopus.Environment.Name'] $name = $OctopusParameters['Project.WebApp.Name'] # Get the deployment target registered in the previous step (the /machines/all endpoint returns a plain array) $machine = (Invoke-RestMethod -Method Get -Uri "$baseUrl/api/$spaceId/machines/all" -Headers @{"X-Octopus-ApiKey"="$apiKey"}) | Where-Object {$_.Name -eq "$name"} # Build payload $jsonPayload = @{ Name = "Health" Description = "Check $spaceName-$environmentName health" Arguments = @{ Timeout = "00:05:00" MachineIds = @( $machine.Id ) OnlyTestConnection = "false" } SpaceId = "$spaceId" } # Execute health check Invoke-RestMethod -Method Post -Uri "$baseUrl/api/tasks" -Body ($jsonPayload | ConvertTo-Json -Depth 10) -Headers @{"X-Octopus-ApiKey"="$apiKey"} ``` Forcing the health check like this will allow you to immediately deploy to your target if it is included in your process. ## Samples We have a [Target - Hybrid](https://oc.to/TargetHybridSampleSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `Space Infrastructure` project. # Restore SQL database Source: https://octopus.com/docs/runbooks/runbook-examples/databases/restore-mssql-database.md Restoring databases is a common practice in most organizations. Using a Runbook in Octopus can make this process simple, allowing you to restore backups ad-hoc or according to a [scheduled trigger](/docs/runbooks/scheduled-runbook-trigger).
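Besides the Web UI and scheduled triggers, a restore runbook like the one described in this section can also be started ad-hoc from a terminal with the Octopus CLI's `run-runbook` command. A minimal sketch — the server URL, API key, and project/runbook/environment names below are placeholders, not values from this example:

```
octo run-runbook --project "OctoFX" --runbook "Restore SQL database" --environment "Test" --server https://your-octopus-url --apiKey API-YOURKEY
```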
## Permissions In this example, you will restore a Microsoft SQL Server database using a step template from our [community library](/docs/projects/community-step-templates) called [SQL - Restore Database](https://library.octopus.com/step-templates/469b6d9d-761a-4f94-9745-20e9c2f93841/actiontemplate-sql-restore-database). This template supports both: - SQL authentication. - Integrated authentication. Here, we'll use SQL authentication and provide both a SQL username and password. It's important to check that you have the correct permissions to perform the restore. You can find more information about this in the [permissions documentation](/docs/deployments/databases/sql-server/permissions). ## Create the runbook 1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 2. Give the runbook a name and click **SAVE**. 3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 4. Add a new step template from the community library called **SQL - Restore Database**. 5. Fill out all the parameters in the step. It's best practice to use [variables](/docs/projects/variables) rather than entering the values directly in the step parameters. | Parameter | Description | Example | | ------------- | ------------- | ------------- | | Server | Name of the database server | SQLserver1 | | Database | Name of the database to restore | MyDatabase | | Backup Directory | Location where the backup file resides | `\\mybackupserver\backupfolder` | | SQL login | Name of the SQL Account to use (leave blank for Integrated Authentication) | MySqlLogin | | SQL password | Password for the SQL Account | MyPassword | | Compression Option | Use compression for this backup | Enabled | | Devices | The number of backup devices to use for the backup | 1 | | Backup file suffix | Specify a suffix to add to the backup file names.
If left blank, the current date, in the format given by the DateFormat parameter, is used | ProdRestore | | Separator | Separator used between database name and suffix | _ | | Date Format | Date format to use if backup is suffixed with a date stamp (e.g. yyyy-MM-dd) | yyyy-MM-dd | :::div{.warning} Use variables where possible so you can assign scopes to values. This will ensure credentials and database connections are correct for the environment you're deploying to. ::: After adding all required parameters, click **Save**, and you have a basic runbook to restore your SQL database. You can also add steps that give your runbooks another layer of protection, such as a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) step for business approvals. ## Samples We have a [Target - Windows](https://oc.to/TargetWindowsSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `OctoFX` project. ## Learn more - [SQL Backup - Community Step template](https://library.octopus.com/step-templates/34b4fa10-329f-4c50-ab7c-d6b047264b83/actiontemplate-sql-backup-database) # Emergency operations Source: https://octopus.com/docs/runbooks/runbook-examples/emergency.md Power outages, natural disasters, and human error can have crippling impacts on your online business. Octopus Deploy runbooks offer an automated solution to perform emergency operations. Emergency operations runbooks can help with: - Failing over DNS to a DR site. - Restoring a database. - Switching application slots for an Azure Web Application. # Automatically failover DNS with monitoring Source: https://octopus.com/docs/runbooks/runbook-examples/emergency/monitor-failover-dns.md Runbooks can be executed on a recurring schedule called a [trigger](/docs/runbooks/scheduled-runbook-trigger).
Using this feature, you can have a runbook execute periodically to ensure that your application is up and running, then automatically fail over if it is not. The following example tests the URL of an application; if the expected `200` code is not returned, the runbook will automatically start the DR web and SQL Server in Azure, then update the DNS to point to the DR site. ## Create the runbook 1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 2. Give the runbook a name and click **SAVE**. 3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. Add the following steps: - Community Step Template [HTTP - Test URL](https://library.octopus.com/step-templates/f5cebc0a-cc16-4876-9f72-bfbd513e6fdd/actiontemplate-http-test-url). - `Run an Azure Script` to start the web server; configured to run only on failure: ```powershell $name = $OctopusParameters["OctoFX.Azure.WebApp.Name"] $resourceGroup = $OctopusParameters["OctoFX.Azure.Resource.Group"] Write-Highlight "Starting $name" Start-AzureRmVM -ResourceGroupName $resourceGroup -Name $name ``` - `Run an Azure Script` to start the database server; configured to run only on failure and in parallel with the web server step: ```powershell $name = $OctopusParameters["OctoFX.Azure.SQL.Name"] $resourceGroup = $OctopusParameters["OctoFX.Azure.Resource.Group"] Start-AzureRmVM -ResourceGroupName $resourceGroup -Name $name ``` - `Run an Azure Script` to update the DNS entry; configured to run only on failure: ```powershell $resourceGroup = $OctopusParameters["OctoFX.Azure.Resource.Group"] $zoneName = $OctopusParameters["OctoFX.DNS.Name"] $ipAddressDR = $OctopusParameters["OctoFX.DR.IP.Address"] $ipAddressProd = $OctopusParameters["OctoFX.Production.IP.Address"] # Point the www record at the DR site, then remove the production address az network dns record-set a add-record --resource-group $resourceGroup --zone-name $zoneName --record-set-name www --ipv4-address $ipAddressDR az network dns record-set a remove-record --resource-group $resourceGroup --zone-name $zoneName --record-set-name www --ipv4-address $ipAddressProd ``` :::figure ![](/docs/img/runbooks/runbook-examples/emergency/octopus-runbook-app-monitoring.png) ::: ## Create the trigger 1. To create a trigger, navigate to **Project ➜ Operations ➜ Triggers ➜ Add Scheduled Trigger**. 2. Give the trigger a name and a description. 3. Fill in the **Trigger Action** section: - Runbook: Select the runbook to execute. - Target environments: Select the environment(s) this runbook will execute against. 4. Fill in the **Trigger Schedule** section: - Schedule: Daily | Days per month | Cron expression. 5. Scheduled Timezone: - Select timezone: Select the timezone to use when evaluating when to run. :::figure ![](/docs/img/runbooks/runbook-examples/emergency/octopus-runbook-trigger.png) ::: ## Samples We have a [Target - Windows](https://oc.to/TargetWindowsSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `OctoFX` project. # IIS Runbooks Source: https://octopus.com/docs/runbooks/runbook-examples/routine/iis-runbooks.md For many organizations, IIS remains an essential piece of software for running their web applications. Managing IIS can often be challenging in an environment where you have a large estate of machines and need to carefully control who can access those machines. You can create a runbook to execute as part of a routine operations task to manage your IIS websites without ever needing someone to log in.
This next section shows how you can create runbooks to complete the following tasks as part of your routine operations: - [Install IIS runbook](#install-iis-runbook) - [Additional IIS features](#install-additional-features) - [Start application pool runbook](#start-app-pool) - [Stop application pool runbook](#stop-app-pool) - [Restart application pool runbook](#restart-app-pool) - [Restart website runbook](#restart-website) - [Delete website runbook](#delete-website) - [Optional Approvals](#optional-approvals) - [Harden IIS](#harden-iis) - [Create the runbook](#create-the-runbook) - [Samples](#samples) - [Learn more](#learn-more) ## Install IIS runbook \{#install-iis-runbook} To create a runbook to install IIS: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a name and click **SAVE**. 1. Click **DEFINE YOUR RUNBOOK PROCESS**, and then click **ADD STEP**. 1. Click **Script**, and then select the **Run a Script** step. 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. In the **Inline source code** section, add the following code as a **PowerShell** script: ```ps # Check whether the Web-Server feature is already present before installing if ((Get-WindowsFeature Web-Server).InstallState -eq "Installed") { Write-Host "IIS is installed" } else { Write-Host "IIS is not installed, proceeding with install" Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServerRole Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServer } ``` The script checks to see if IIS is already installed by inspecting the `InstallState` for the `Web-Server` feature. If it's already installed, the IIS installation is skipped. :::div{.hint} **Execution Policy:** It's possible you may need to set the [Execution policy](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy) to an appropriate value (as part of the script) in order for it to run successfully.
::: ### Additional IIS features \{#install-additional-features} There are over 25 additional IIS features you could choose to install as part of your runbook. To list all IIS Windows features, run the following PowerShell: ```ps Get-WindowsOptionalFeature -Online | where FeatureName -like 'IIS-*' ``` The following code installs all additional features found from the previous `Get-WindowsOptionalFeature` command using the [Enable-WindowsOptionalFeature](https://docs.microsoft.com/en-us/powershell/module/dism/enable-windowsoptionalfeature?view=win10-ps) PowerShell cmdlet: ```ps Enable-WindowsOptionalFeature -Online -FeatureName IIS-CommonHttpFeatures Enable-WindowsOptionalFeature -Online -FeatureName IIS-HttpErrors Enable-WindowsOptionalFeature -Online -FeatureName IIS-HttpRedirect Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment Enable-WindowsOptionalFeature -online -FeatureName NetFx4Extended-ASPNET45 Enable-WindowsOptionalFeature -Online -FeatureName IIS-NetFxExtensibility45 Enable-WindowsOptionalFeature -Online -FeatureName IIS-HealthAndDiagnostics Enable-WindowsOptionalFeature -Online -FeatureName IIS-HttpLogging Enable-WindowsOptionalFeature -Online -FeatureName IIS-LoggingLibraries Enable-WindowsOptionalFeature -Online -FeatureName IIS-RequestMonitor Enable-WindowsOptionalFeature -Online -FeatureName IIS-HttpTracing Enable-WindowsOptionalFeature -Online -FeatureName IIS-Security Enable-WindowsOptionalFeature -Online -FeatureName IIS-RequestFiltering Enable-WindowsOptionalFeature -Online -FeatureName IIS-Performance Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServerManagementTools Enable-WindowsOptionalFeature -Online -FeatureName IIS-IIS6ManagementCompatibility Enable-WindowsOptionalFeature -Online -FeatureName IIS-Metabase Enable-WindowsOptionalFeature -Online -FeatureName IIS-ManagementConsole Enable-WindowsOptionalFeature -Online -FeatureName IIS-BasicAuthentication Enable-WindowsOptionalFeature -Online 
-FeatureName IIS-WindowsAuthentication Enable-WindowsOptionalFeature -Online -FeatureName IIS-StaticContent Enable-WindowsOptionalFeature -Online -FeatureName IIS-DefaultDocument Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebSockets Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationInit Enable-WindowsOptionalFeature -Online -FeatureName IIS-ISAPIExtensions Enable-WindowsOptionalFeature -Online -FeatureName IIS-ISAPIFilter Enable-WindowsOptionalFeature -Online -FeatureName IIS-HttpCompressionStatic Enable-WindowsOptionalFeature -Online -FeatureName IIS-ASPNET45 ``` ## Start application pool runbook \{#start-app-pool} To create a runbook to start your IIS application pool: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a name and click **SAVE**. 1. Add the community step template called [IIS AppPool - Start](https://library.octopus.com/step-templates/9db77671-0fe3-4aef-a014-551bf1e5e7ab/actiontemplate-iis-apppool-start), and give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. Fill out the only required parameter: **Application Pool name**. :::div{.hint} We recommend using [variables](/docs/projects/variables) where appropriate, rather than entering values directly in the step parameters. ::: Optionally, configure any [conditions](/docs/projects/steps/conditions) for the step, click **Save**, and you have a runbook step to start an IIS Application Pool. :::figure ![Runbook IIS maintenance Start App-Pool](/docs/img/runbooks/runbook-examples/routine/images/iis-maintenance-start-app-pool.png) ::: ## Stop application pool runbook \{#stop-app-pool} To create a runbook to stop your application pool: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a Name and click **SAVE**. 1. 
Add the community step template called [IIS AppPool - Stop](https://library.octopus.com/step-templates/3aaf34a5-90eb-4ea1-95db-15ec93c1e54d/actiontemplate-iis-apppool-stop), and give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. Fill out all required parameters in the step, using [variables](/docs/projects/variables) where appropriate: | Parameter | Description | Example | | ------------- | ------------- | ------------- | | Application Pool Name | The name of the application pool in IIS. | AppPool-01 | | Status check interval | The delay in milliseconds between each attempt to query the application pool to see if it has stopped. | 500 | | Status check retries | The number of retries before an error is thrown. | 10 | Configure any other settings for the step and click **Save**, and you have a runbook step to stop an IIS Application Pool. :::figure ![Runbook IIS maintenance Stop App-Pool](/docs/img/runbooks/runbook-examples/routine/images/iis-maintenance-stop-app-pool.png) ::: ## Restart application pool runbook \{#restart-app-pool} To create a runbook to restart your IIS application pool: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a name and click **SAVE**. 1. Add the community step template called [IIS AppPool - Restart](https://library.octopus.com/step-templates/de4a85ca-38cc-4a30-8244-64612e3a7921/actiontemplate-iis-apppool-restart), and give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. Fill out the only required parameter: **Application pool name**, using a [variable](/docs/projects/variables) if appropriate. Configure any other settings for the step and click **Save**, and you have a runbook step to restart an IIS Application Pool.
:::figure ![Runbook IIS maintenance Restart App-Pool](/docs/img/runbooks/runbook-examples/routine/images/iis-maintenance-restart-app-pool.png) ::: ## Restart website runbook \{#restart-website} To create a runbook to restart your IIS websites: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a name and click **SAVE**. 1. Add the community step template called [IIS Website - Restart](https://library.octopus.com/step-templates/6a17bd83-ef96-4c22-b212-91a89ca92fe6/actiontemplate-iis-website-restart), and give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. Fill out the only required parameter: **Website name**, using a [variable](/docs/projects/variables) if appropriate. Configure any other settings for the step and click **Save**, and you have a runbook step to restart an IIS website. :::figure ![Runbook IIS maintenance Restart Website](/docs/img/runbooks/runbook-examples/routine/images/iis-maintenance-restart-website.png) ::: ## Delete website runbook \{#delete-website} To create a runbook to delete your IIS websites: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a name and click **SAVE**. 1. Add the community step template called [IIS Website - Delete](https://library.octopus.com/step-templates/a032159b-0742-4982-95f4-59877a31fba3/actiontemplate-iis-website-delete), and give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. Fill out the only required parameter: **Website name**, using a [variable](/docs/projects/variables) if appropriate. Configure any other settings for the step and click **Save**, and you have a runbook step to delete an IIS website. 
:::figure ![Runbook IIS maintenance Delete Website](/docs/img/runbooks/runbook-examples/routine/images/iis-maintenance-delete-website.png) ::: ## Optional Approvals You can also add steps to your runbook for another layer of protection, such as a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) step for business approvals. ## Harden IIS Your publicly available servers need to be as secure as you can make them. Hackers are constantly finding new exploits, so maintaining your security posture is a must. With Octopus Deploy runbooks, you can define a single process that can harden your IIS installations according to [NIST guidelines](https://nvd.nist.gov/ncp/checklist/759) at the click of a button. :::div{.warning} Every installation is different and the examples provided here are only intended to demonstrate functionality. Ensure you are complying with your company's security policies when you configure any infrastructure and that your specific implementation matches your needs. ::: ### Create the runbook To create a runbook to harden your IIS server: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a name and click **SAVE**. 1. Add a **Run a Script** step and paste in the following example PowerShell: :::div{.warning} The following script makes a number of registry changes and alterations to ciphers, key hashes, key exchange algorithms, and cipher suite ordering. Be sure to review the changes that it will implement before proceeding. ::: ```powershell function Set-IISSecurity { $appcmd = $($env:windir + "\system32\inetsrv\appcmd.exe") #remove IIS server information #http://stackoverflow.com/questions/1178831/remove-server-response-header-iis7/12615970#12615970 Write-Output 'Removing IIS and ASP.NET Server identification...'
Write-Output '--------------------------------------------------------------------------------' & $appcmd set config -section:system.webServer/rewrite/outboundRules /+"[name='Remove_RESPONSE_Server']" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/outboundRules "/[name='Remove_RESPONSE_Server'].patternSyntax:`"Wildcard`"" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/outboundRules "/[name='Remove_RESPONSE_Server'].match.serverVariable:RESPONSE_Server" "/[name='Remove_RESPONSE_Server'].match.pattern:`"*`"" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/outboundRules "/[name='Remove_RESPONSE_Server'].action.type:`"Rewrite`"" "/[name='Remove_RESPONSE_Server'].action.value:`" `"" /commit:apphost & $appcmd set config /section:httpProtocol "/-customHeaders.[name='X-Powered-By']" #Enable HTTPS only redirect and add HSTS header #https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#IIS #Set HTTPS Only redirect Write-Output 'Setting HTTPS Only' Write-Output '--------------------------------------------------------------------------------' & $appcmd set config -section:system.webServer/rewrite/rules /+"[name='HTTPS_301_Redirect',stopProcessing='False']" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/rules "/[name='HTTPS_301_Redirect',stopProcessing='False'].match.url:`"(.*)`"" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/rules "/+[name='HTTPS_301_Redirect',stopProcessing='False'].conditions.[input='{HTTPS}',pattern='off']" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/rules "/[name='HTTPS_301_Redirect',stopProcessing='False'].action.type:`"Redirect`"" "/[name='HTTPS_301_Redirect',stopProcessing='False'].action.url:`"https://{HTTP_HOST}{REQUEST_URI}`"" /commit:apphost #HSTS header Write-Output 'Configuring HSTS header...' 
Write-Output '--------------------------------------------------------------------------------' #precondition for HSTS header & $appcmd set config -section:system.webServer/rewrite/outboundRules /+"preConditions.[name='USING_HTTPS']" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/outboundRules /"+preConditions.[name='USING_HTTPS'].[input='{HTTPS}',pattern='on']" /commit:apphost #set header & $appcmd set config -section:system.webServer/rewrite/outboundRules /+"[name='Add_HSTS_Header',preCondition='USING_HTTPS']" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/outboundRules "/[name='Add_HSTS_Header'].patternSyntax:`"Wildcard`"" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/outboundRules "/[name='Add_HSTS_Header',preCondition='USING_HTTPS'].match.serverVariable:`"RESPONSE_Strict-Transport-Security`"" "/[name='Add_HSTS_Header',preCondition='USING_HTTPS'].match.pattern:`"*`"" /commit:apphost & $appcmd set config -section:system.webServer/rewrite/outboundRules "/[name='Add_HSTS_Header',preCondition='USING_HTTPS'].action.type:`"Rewrite`"" "/[name='Add_HSTS_Header',preCondition='USING_HTTPS'].action.value:`"max-age=31536000`"" /commit:apphost #prevent frame jacking #https://support.microsoft.com/en-us/kb/2694329 & $appcmd set config -section:httpProtocol "/+customHeaders.[name='X-Frame-Options',value='SAMEORIGIN']" #Improve SSL ciphers, add PFS, disable SSLv3 #https://www.hass.de/content/setup-your-iis-ssl-perfect-forward-secrecy-and-tls-12 Write-Output 'Configuring IIS with SSL/TLS Deployment Best Practices...' 
Write-Output '--------------------------------------------------------------------------------' # Disable Multi-Protocol Unified Hello New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\Multi-Protocol Unified Hello\Server' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\Multi-Protocol Unified Hello\Server' -name Enabled -value 0 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\Multi-Protocol Unified Hello\Server' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null Write-Output 'Multi-Protocol Unified Hello has been disabled.' # Disable PCT 1.0 New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server' -name Enabled -value 0 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null Write-Output 'PCT 1.0 has been disabled.' # Disable SSL 2.0 (PCI Compliance) New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server' -name Enabled -value 0 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null Write-Output 'SSL 2.0 has been disabled.' # NOTE: If you disable SSL 3.0 then you may lock out some people still using # Windows XP with IE6/7.
Without SSL 3.0 enabled, there is no protocol available # for these people to fall back. Safer shopping certifications may require that # you disable SSLv3. # # Disable SSL 3.0 (PCI Compliance) and enable "Poodle" protection New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server' -name Enabled -value 0 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null Write-Output 'SSL 3.0 has been disabled.' # Add and Enable TLS 1.0 for client and server SCHANNEL communications New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server' -name 'Enabled' -value 1 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null Write-Output 'TLS 1.0 has been enabled.' 
# Add and Enable TLS 1.1 for client and server SCHANNEL communications New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Force | Out-Null New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -name 'Enabled' -value 1 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -name 'Enabled' -value 1 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null Write-Output 'TLS 1.1 has been enabled.' 
# Add and Enable TLS 1.2 for client and server SCHANNEL communications New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Force | Out-Null New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -name 'Enabled' -value 1 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'Enabled' -value 1 -PropertyType 'DWord' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null Write-Output 'TLS 1.2 has been enabled.' # Re-create the ciphers key. New-Item 'HKLM:SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers' -Force | Out-Null # Disable insecure/weak ciphers. $insecureCiphers = @( 'DES 56/56', 'NULL', 'RC2 128/128', 'RC2 40/128', 'RC2 56/128', 'RC4 40/128', 'RC4 56/128', 'RC4 64/128', 'RC4 128/128' ) Foreach ($insecureCipher in $insecureCiphers) { $key = (Get-Item HKLM:\).OpenSubKey('SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers', $true).CreateSubKey($insecureCipher) $key.SetValue('Enabled', 0, 'DWord') $key.close() Write-Output "Weak cipher $insecureCipher has been disabled." } # Enable new secure ciphers. # - RC4: It is recommended to disable RC4, but you may lock out WinXP/IE8 if you enforce this. This is a requirement for FIPS 140-2. 
$secureCiphers = @( 'AES 128/128', 'AES 256/256' ) Foreach ($secureCipher in $secureCiphers) { $key = (Get-Item HKLM:\).OpenSubKey('SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers', $true).CreateSubKey($secureCipher) New-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\$secureCipher" -name 'Enabled' -value '0xffffffff' -PropertyType 'DWord' -Force | Out-Null $key.close() Write-Output "Strong cipher $secureCipher has been enabled." } # Set hashes configuration. New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\MD5' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\MD5' -name Enabled -value 0 -PropertyType 'DWord' -Force | Out-Null New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\SHA' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\SHA' -name Enabled -value '0xffffffff' -PropertyType 'DWord' -Force | Out-Null # Set KeyExchangeAlgorithms configuration. New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\KeyExchangeAlgorithms\Diffie-Hellman' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\KeyExchangeAlgorithms\Diffie-Hellman' -name Enabled -value '0xffffffff' -PropertyType 'DWord' -Force | Out-Null New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\KeyExchangeAlgorithms\PKCS' -Force | Out-Null New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\KeyExchangeAlgorithms\PKCS' -name Enabled -value '0xffffffff' -PropertyType 'DWord' -Force | Out-Null # Set cipher suites order as secure as possible (Enables Perfect Forward Secrecy). 
$cipherSuitesOrder = @(
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P521',
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384',
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256',
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521',
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384',
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256',
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P521',
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521',
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P384',
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256',
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384',
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256',
    'TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P521',
    'TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P384',
    'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P521',
    'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P384',
    'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256',
    'TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P521',
    'TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P384',
    'TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P521',
    'TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P384',
    'TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256',
    'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P521',
    'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P384',
    'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P256',
    'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P521',
    'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P384',
    'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256',
    'TLS_DHE_DSS_WITH_AES_256_CBC_SHA256',
    'TLS_DHE_DSS_WITH_AES_256_CBC_SHA',
    'TLS_DHE_DSS_WITH_AES_128_CBC_SHA256',
    'TLS_DHE_DSS_WITH_AES_128_CBC_SHA',
    'TLS_RSA_WITH_AES_256_CBC_SHA256',
    'TLS_RSA_WITH_AES_256_CBC_SHA',
    'TLS_RSA_WITH_AES_128_CBC_SHA256',
    'TLS_RSA_WITH_AES_128_CBC_SHA'
)
$cipherSuitesAsString = [string]::join(',', $cipherSuitesOrder)
New-ItemProperty -path 'HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002' -name 'Functions' -value $cipherSuitesAsString -PropertyType 'String' -Force | Out-Null
Write-Output '--------------------------------------------------------------------------------'
Write-Output 'NOTE: After the system has been rebooted you can verify your server'
Write-Output '      configuration at https://www.ssllabs.com/ssltest/'
Write-Output "--------------------------------------------------------------------------------`n"
Write-Host -ForegroundColor Red 'A computer restart is required to apply settings. Restart computer now?'
Restart-Computer -Force -Confirm
}
```

After your IIS server has rebooted, your installation will be hardened against common attacks.

## Samples

We have a [Target - Windows](https://oc.to/TargetWindowsSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `OctoFX` project.

## Learn more

- Generate an Octopus guide for [IIS and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=IIS).
- [PowerShell and IIS: 20 practical examples blog post](https://octopus.com/blog/iis-powershell).

# Provision AWS resources with Terraform

Source: https://octopus.com/docs/runbooks/runbook-examples/terraform/provision-aws-with-terraform.md

AWS CloudFormation is a great tool for provisioning resources; however, it doesn't keep track of state. With runbooks, you can use Terraform to provision resources on AWS as well as keep them in the desired state. The following example will use Terraform to dynamically create worker machines based on auto-scaling rules. Instead of defining the Terraform template directly in the step template, this example will make use of a package. The package will consist of the following files:

- autoscaling.tf
- autoscaling-policy.tf
- backend.tf
- installTentacle.sh
- provider.tf
- securitygroup.tf
- vars.tf
- vpc.tf

The different AWS resource types have been separated into their respective files to make them easier to maintain.

## autoscaling.tf

This file contains the definitions for creating the auto-scaling configuration in AWS:
```hcl
resource "aws_launch_configuration" "dynamic-linux-worker-launch-config" {
  name_prefix     = "dynamic-linux-worker-launch-config"
  image_id        = "${var.LINUX_AMIS}"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.allow-octopus-server.id}"]

  # script to run when created
  user_data = "${file("installTentacle.sh")}"
}

resource "aws_launch_configuration" "dynamic-windows-worker-launch-config" {
  name_prefix     = "dynamic-windows-worker-launch-config"
  image_id        = "${var.WINDOWS_AMIS}"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.allow-octopus-server.id}"]

  user_data = <<-EOT
  EOT
}

resource "aws_autoscaling_group" "dynamic-linux-worker-autoscaling" {
  name                      = "dynamic-linux-worker-autoscaling"
  vpc_zone_identifier       = ["${aws_subnet.worker-public-1.id}", "${aws_subnet.worker-public-2.id}", "${aws_subnet.worker-public-3.id}"]
  launch_configuration      = "${aws_launch_configuration.dynamic-linux-worker-launch-config.name}"
  min_size                  = 2
  max_size                  = 3
  health_check_grace_period = 300
  health_check_type         = "EC2"
  force_delete              = true

  tag {
    key                 = "Name"
    value               = "Octopus Deploy Linux Worker"
    propagate_at_launch = true
  }
}

resource "aws_autoscaling_group" "dynamic-windows-worker-autoscaling" {
  name                      = "dynamic-windows-worker-autoscaling"
  vpc_zone_identifier       = ["${aws_subnet.worker-public-1.id}", "${aws_subnet.worker-public-2.id}", "${aws_subnet.worker-public-3.id}"]
  launch_configuration      = "${aws_launch_configuration.dynamic-windows-worker-launch-config.name}"
  min_size                  = 2
  max_size                  = 3
  health_check_grace_period = 300
  health_check_type         = "EC2"
  force_delete              = true

  tag {
    key                 = "Name"
    value               = "Octopus Deploy Windows Worker"
    propagate_at_launch = true
  }
}
```

## autoscaling-policy.tf

This file contains the policy definition that goes with the auto-scaling definition:

```hcl
# scale up alarm
resource "aws_autoscaling_policy" "linux-worker-cpu-policy" {
  name                   = "linux-worker-cpu-policy"
  autoscaling_group_name = "${aws_autoscaling_group.dynamic-linux-worker-autoscaling.name}"
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = "1"
  cooldown               = "300"
  policy_type            = "SimpleScaling"
}

resource "aws_cloudwatch_metric_alarm" "linux-worker-cpu-alarm" {
  alarm_name          = "linux-worker-cpu-alarm"
  alarm_description   = "linux-worker-cpu-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "30"

  dimensions = {
    "AutoScalingGroupName" = "${aws_autoscaling_group.dynamic-linux-worker-autoscaling.name}"
  }

  actions_enabled = true
  alarm_actions   = ["${aws_autoscaling_policy.linux-worker-cpu-policy.arn}"]
}

# scale down alarm
resource "aws_autoscaling_policy" "linux-worker-cpu-policy-scale-down" {
  name                   = "linux-worker-cpu-policy-scale-down"
  autoscaling_group_name = "${aws_autoscaling_group.dynamic-linux-worker-autoscaling.name}"
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = "-1"
  cooldown               = "300"
  policy_type            = "SimpleScaling"
}

resource "aws_cloudwatch_metric_alarm" "linux-worker-cpu-alarm-scale-down" {
  alarm_name          = "linux-worker-cpu-alarm-scale-down"
  alarm_description   = "linux-worker-cpu-alarm-scale-down"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "5"

  dimensions = {
    "AutoScalingGroupName" = "${aws_autoscaling_group.dynamic-linux-worker-autoscaling.name}"
  }

  actions_enabled = true
  alarm_actions   = ["${aws_autoscaling_policy.linux-worker-cpu-policy-scale-down.arn}"]
}

resource "aws_autoscaling_policy" "windows-worker-cpu-policy" {
  name                   = "windows-worker-cpu-policy"
  autoscaling_group_name = "${aws_autoscaling_group.dynamic-windows-worker-autoscaling.name}"
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = "1"
  cooldown               = "300"
  policy_type            = "SimpleScaling"
}

resource "aws_cloudwatch_metric_alarm" "windows-worker-cpu-alarm" {
  alarm_name          = "windows-worker-cpu-alarm"
  alarm_description   = "windows-worker-cpu-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "30"

  dimensions = {
    "AutoScalingGroupName" = "${aws_autoscaling_group.dynamic-windows-worker-autoscaling.name}"
  }

  actions_enabled = true
  alarm_actions   = ["${aws_autoscaling_policy.windows-worker-cpu-policy.arn}"]
}

# scale down alarm
resource "aws_autoscaling_policy" "windows-worker-cpu-policy-scale-down" {
  name                   = "windows-worker-cpu-policy-scale-down"
  autoscaling_group_name = "${aws_autoscaling_group.dynamic-windows-worker-autoscaling.name}"
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = "-1"
  cooldown               = "300"
  policy_type            = "SimpleScaling"
}

resource "aws_cloudwatch_metric_alarm" "windows-worker-cpu-alarm-scale-down" {
  alarm_name          = "windows-worker-cpu-alarm-scale-down"
  alarm_description   = "windows-worker-cpu-alarm-scale-down"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "5"

  dimensions = {
    "AutoScalingGroupName" = "${aws_autoscaling_group.dynamic-windows-worker-autoscaling.name}"
  }

  actions_enabled = true
  alarm_actions   = ["${aws_autoscaling_policy.windows-worker-cpu-policy-scale-down.arn}"]
}
```

## backend.tf

It is important to note that, due to retention policy settings, the folder in which the package is extracted and run may not persist. For this reason, we recommend you store the state information in another location, such as AWS S3:

```hcl
terraform {
  backend "s3" {
    bucket = "#{Project.AWS.S3.Bucket}"
    key    = "#{Project.AWS.S3.Key}"
    region = "#{Project.AWS.Region}"
  }
}
```

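Octopus replaces the `#{...}` Octostache expressions before Terraform runs, so the backend block above reaches Terraform with concrete values. As a simplified illustration of that substitution step (this is not the real Octostache engine, which also supports filters and conditionals, and the variable values here are hypothetical):

```python
import re

def substitute_octostache(text: str, variables: dict) -> str:
    """Very simplified sketch of Octostache-style #{Variable} substitution.

    Only plain #{Name} tokens are handled; unknown tokens are left as-is.
    """
    return re.sub(r"#\{([^}]+)\}",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  text)

backend_tf = """
terraform {
  backend "s3" {
    bucket = "#{Project.AWS.S3.Bucket}"
    key    = "#{Project.AWS.S3.Key}"
    region = "#{Project.AWS.Region}"
  }
}
"""

# Hypothetical project variable values, for illustration only.
resolved = substitute_octostache(backend_tf, {
    "Project.AWS.S3.Bucket": "my-terraform-state",
    "Project.AWS.S3.Key": "workers/terraform.tfstate",
    "Project.AWS.Region": "us-east-1",
})
print(resolved)
```

The key point is that the `.tf` files in the package can reference Octopus project variables directly, and the values are resolved per deployment before Terraform ever sees them.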
## installTentacle.sh

This contains a bash script to automatically install the Octopus Deploy Tentacle on the Linux EC2 instance being created:

```bash
#!/bin/bash
serverUrl="#{Project.Octopus.Server.Url}"
serverCommsPort="#{Project.Octopus.Server.PollingPort}"
apiKey="#{Project.Octopus.Server.ApiKey}"
name=$HOSTNAME
configFilePath="/etc/octopus/default/tentacle-default.config"
applicationPath="/home/Octopus/Applications/"
workerPool="#{Project.Octopus.Server.WorkerPool}"
machinePolicy="#{Project.Octopus.Server.MachinePolicy}"
space="#{Project.Octopus.Server.Space}"

sudo apt update && sudo apt install -y --no-install-recommends gnupg curl ca-certificates apt-transport-https && \
sudo install -m 0755 -d /etc/apt/keyrings && \
curl -fsSL https://apt.octopus.com/public.key | sudo gpg --dearmor -o /etc/apt/keyrings/octopus.gpg && \
sudo chmod a+r /etc/apt/keyrings/octopus.gpg && \
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/octopus.gpg] https://apt.octopus.com/ \
stable main" | \
sudo tee /etc/apt/sources.list.d/octopus.list > /dev/null && \
sudo apt update && sudo apt install -y tentacle

# for legacy Ubuntu/Debian (< 18.04) use
# sudo apt update && sudo apt install -y --no-install-recommends gnupg curl ca-certificates apt-transport-https && \
# curl -sSfL https://apt.octopus.com/public.key | sudo apt-key add - && \
# sudo sh -c "echo deb https://apt.octopus.com/ stable main > /etc/apt/sources.list.d/octopus.com.list" && \
# sudo apt update && sudo apt install -y tentacle

sudo /opt/octopus/tentacle/Tentacle create-instance --config "$configFilePath" --instance "$name"
sudo /opt/octopus/tentacle/Tentacle new-certificate --if-blank
sudo /opt/octopus/tentacle/Tentacle configure --noListen True --reset-trust --app "$applicationPath"
echo "Registering the worker $name with server $serverUrl"
sudo /opt/octopus/tentacle/Tentacle register-worker --server "$serverUrl" --apiKey "$apiKey" --name "$name" --comms-style "TentacleActive" --server-comms-port $serverCommsPort --workerPool "$workerPool" --policy "$machinePolicy" --space "$space"
sudo /opt/octopus/tentacle/Tentacle service --install --start
```

## provider.tf

Contains the provider information for Terraform:

```hcl
provider "aws" {
  region = "${var.AWS_REGION}"
}
```

## securitygroup.tf

This contains the security group information for AWS:

```hcl
resource "aws_security_group" "allow-octopus-server" {
  vpc_id      = "${aws_vpc.worker_vpc.id}"
  name        = "allow-octopus-server"
  description = "Security group that allows traffic to the worker from the Octopus Server"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 10933
    to_port     = 10933
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow-octopus-server"
  }
}
```

## vars.tf

This contains the variables that are referenced in other files. Note the use of Octostache (Octopus variable syntax) to make use of Octopus variables:

```hcl
variable "AWS_REGION" {
  default = "#{Project.AWS.Region}"
}
variable "LINUX_AMIS" {
  default = "ami-084a6c14d8630bb68"
}
variable "WINDOWS_AMIS" {
  default = "ami-087ee25b86edaf4b1"
}
variable "PATH_TO_PRIVATE_KEY" {
  default = "my_key"
}
variable "PATH_TO_PUBLIC_KEY" {
  default = "my_key.pub"
}
variable "INSTANCE_USERNAME" {
  default = "ubuntu"
}
```

## vpc.tf

This contains the definition of the VPC and other network resources that other AWS resources will use:

```hcl
# Internet VPC
resource "aws_vpc" "worker_vpc" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"
  enable_classiclink   = "false"

  tags = {
    Name = "worker_vpc"
  }
}

# Subnets
resource "aws_subnet" "worker-public-1" {
  vpc_id                  = "${aws_vpc.worker_vpc.id}"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "${var.AWS_REGION}a"

  tags = {
    Name = "worker-public-1"
  }
}

resource "aws_subnet" "worker-public-2" {
  vpc_id                  = "${aws_vpc.worker_vpc.id}"
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "${var.AWS_REGION}b"

  tags = {
    Name = "worker-public-2"
  }
}

resource "aws_subnet" "worker-public-3" {
  vpc_id                  = "${aws_vpc.worker_vpc.id}"
  cidr_block              = "10.0.3.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "${var.AWS_REGION}c"

  tags = {
    Name = "worker-public-3"
  }
}

resource "aws_subnet" "worker-private-1" {
  vpc_id                  = "${aws_vpc.worker_vpc.id}"
  cidr_block              = "10.0.4.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "${var.AWS_REGION}a"

  tags = {
    Name = "worker-private-1"
  }
}

resource "aws_subnet" "worker-private-2" {
  vpc_id                  = "${aws_vpc.worker_vpc.id}"
  cidr_block              = "10.0.5.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "${var.AWS_REGION}b"

  tags = {
    Name = "worker-private-2"
  }
}

resource "aws_subnet" "worker-private-3" {
  vpc_id                  = "${aws_vpc.worker_vpc.id}"
  cidr_block              = "10.0.6.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "${var.AWS_REGION}c"

  tags = {
    Name = "worker-private-3"
  }
}

# Internet GW
resource "aws_internet_gateway" "worker-gw" {
  vpc_id = "${aws_vpc.worker_vpc.id}"

  tags = {
    Name = "worker"
  }
}

# route tables
resource "aws_route_table" "worker-public" {
  vpc_id = "${aws_vpc.worker_vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.worker-gw.id}"
  }

  tags = {
    Name = "worker-public-1"
  }
}

# route associations public
resource "aws_route_table_association" "worker-public-1-a" {
  subnet_id      = "${aws_subnet.worker-public-1.id}"
  route_table_id = "${aws_route_table.worker-public.id}"
}

resource "aws_route_table_association" "worker-public-2-a" {
  subnet_id      = "${aws_subnet.worker-public-2.id}"
  route_table_id = "${aws_route_table.worker-public.id}"
}

resource "aws_route_table_association" "worker-public-3-a" {
  subnet_id      = "${aws_subnet.worker-public-3.id}"
  route_table_id = "${aws_route_table.worker-public.id}"
}
```

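The files above are bundled into a package that Octopus can consume, for example a zip pushed to the built-in package feed. As a minimal sketch using Python's standard library (the package ID `TerraformAWSWorkers` and version are hypothetical; in a real build you would run this in the directory containing the actual files):

```python
import zipfile
from pathlib import Path

# The Terraform files and bootstrap script described in this example.
files = [
    "autoscaling.tf", "autoscaling-policy.tf", "backend.tf",
    "installTentacle.sh", "provider.tf", "securitygroup.tf",
    "vars.tf", "vpc.tf",
]

# Create empty placeholders so the sketch runs anywhere; a real build
# would package the files shown earlier in this example.
for name in files:
    if not Path(name).exists():
        Path(name).touch()

# Octopus expects the version in the file name: <PackageId>.<Version>.zip
package_name = "TerraformAWSWorkers.1.0.0.zip"

with zipfile.ZipFile(package_name, "w", zipfile.ZIP_DEFLATED) as pkg:
    for name in files:
        pkg.write(name)  # files sit at the root of the package
```

The **Apply a Terraform template** step in the runbook below then extracts this package and runs Terraform against its contents.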
## Create the runbook

1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**.
1. Give the runbook a name and click **SAVE**.
1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**.
1. Add an **Apply a Terraform template** step.
1. Fill in the template properties:
   - Template Source: File inside a package
   - Package: Choose the package which contains the files above

With a single step in a runbook, you can create all the resources you need with Terraform.

# Runbooks permissions

Source: https://octopus.com/docs/runbooks/runbook-permissions.md

Permissions are available to help you manage access to runbooks. These include:

| Permission | Description |
| ------------- | ------------- |
| RunbookView | You can view all things runbooks-related (from the runbooks themselves, to their process, runs and snapshots). |
| RunbookEdit | You can edit all things runbooks-related. |
| RunbookRunView | You can view runbook runs. |
| RunbookRunDelete | You can delete runbook runs. |
| RunbookRunCreate | You can create runbook runs (the equivalent of `DeploymentCreate` in the deployment world). |

You can limit your team's ability to create runbooks by disabling these permissions. There are roles we include out-of-the-box to encapsulate these permissions:

| Role | Description |
| ------------- | ------------- |
| Runbook producer | Runbook producers can view, edit and execute runbooks. This is useful for authors of runbooks, who need to edit, iterate-on, publish and execute their runbooks. |
| Runbook consumer | Runbook consumers can view and execute runbooks. This is useful for users who are not authoring runbooks but need to view and run them. |

## Working with Runbooks via the Octopus API

Octopus Deploy is built API-first, which means everything you can do through the Octopus UI can be done with the API.
In the API, we model the runbook and its process the same way, starting at the project:

- Project
  - Runbooks _(a project can have many runbooks, with RunbookView/RunbookEdit permissions.)_
    - RunbookProcess _(a runbook has one process / collection of steps, with ProcessEdit permissions.)_
    - RunbookSnapshots _(a runbook can have many snapshots, each with a unique name, with RunbookEdit permissions.)_
      - RunbookRuns _(a runbook snapshot will then be run/executed against an environment, with RunbookRunCreate permissions.)_

We have provided lots of helpful functions for building your runbook process in the [.NET SDK](/docs/octopus-rest-api/octopus.client), or you can use the raw HTTP API if that suits your needs better. Learn about using the [Octopus REST API](/docs/octopus-rest-api).

:::div{.success}
Record the HTTP requests made by the Octopus UI to see how we build your runbook processes using the Octopus API. You can do this in the Chrome developer tools, or using a tool like Fiddler.
:::

# Specify a custom container for AD authentication

Source: https://octopus.com/docs/security/authentication/active-directory/custom-containers-for-ad-authentication.md

In **Octopus 2.5.11** and newer, you can specify a custom container to use for AD authentication. This feature addresses the issue of authenticating with Active Directory where the Users container is not in the default location and permissions prevent queries as a result. Specifying the container will result in the container being used as the root of the context. The container is the distinguished name of a container object. All queries are performed under this root, which can be useful in a more restricted environment.
**Configure container example**

```powershell
Octopus.Server.exe service --stop
Octopus.Server.exe configure --activeDirectoryContainer "CN=Users,DC=GPN,DC=COM"
Octopus.Server.exe service --start
```

# LDAP Authentication

Source: https://octopus.com/docs/security/authentication/ldap.md

:::div{.hint}
LDAP authentication can only be configured for Octopus Server and the Octopus Linux Container, not for [Octopus Cloud](/docs/octopus-cloud/). See our [authentication provider compatibility](/docs/security/authentication/auth-provider-compatibility) section for further information.
:::

Octopus provides an LDAP authentication provider allowing you to use an existing LDAP server to authenticate with Octopus. From **Octopus 2021.2**, the LDAP authentication provider is available out-of-the-box as one of [a number of custom Server extensions](/docs/administration/server-extensibility/customizing-an-octopus-deploy-server-extension) provided as part of the Octopus Deploy installation. It is an open-source project, and the source code is available on [GitHub](https://github.com/OctopusDeploy/LdapAuthenticationProvider). This guide will walk you through how to configure the LDAP authentication provider in Octopus Deploy. This example will enable Octopus Deploy to authenticate to the domain `devopswalker.local`.

## LDAP background

LDAP, or Lightweight Directory Access Protocol, is an open, vendor-neutral, industry-standard protocol for interacting with directory servers. It is easy to confuse LDAP with a directory server such as Active Directory. LDAP itself is not a directory server; it is the protocol used to communicate with a directory server, just as `http` is the protocol for web servers and `wss` is the protocol for communicating with web servers via sockets. The default configuration for Active Directory enables LDAP support. As LDAP is a protocol, not a directory server, it has these advantages:

1.
You can leverage non-Microsoft directory servers such as OpenLDAP along with proprietary directory servers such as Active Directory.
2. Docker containers, such as the Octopus Linux Container, can use LDAP to authenticate to Active Directory or OpenLDAP servers without having to worry about attaching a computer to a domain.
3. It is easier to fine-tune the lookup filters and attributes to match your requirements.

## Secure your LDAP server

By default, LDAP traffic is not encrypted. By monitoring network traffic, an eavesdropper could learn your LDAP password. Before configuring the LDAP provider in Octopus Deploy, please consult the vendor documentation for your directory server for communicating over SSL or TLS. Securing an LDAP server is outside the scope of this guide. The rest of this guide assumes you have worked with your system administrators on securing your LDAP server.

## Understanding DNs

In LDAP, a DN, or a [distinguished name](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol#Directory_structure), uniquely identifies an entry and its position in a directory information tree. Think of it as a path to a file on a file system. The domain in this example is `devopswalker.local`. Translating that to a DN LDAP can understand gives `dc=devopswalker,dc=local`. All users are stored in the directory `users`. The DN for that is `cn=users,dc=devopswalker,dc=local`. A user account with the name `Bob Walker` has the DN `cn=Bob Walker,cn=users,dc=devopswalker,dc=local`.

## What you will need

Before configuring the LDAP provider, you will need the following:

- The fully qualified domain name, or FQDN, of the server to query. In this example, it will be `DC01.devopswalker.local`.
- The port number and security protocol to use. This example will use the standard secure LDAP port 636 for the domain controller and SSL.
- The username and password of a service account that can perform user and group lookups.
In this example, it will use the DN `cn=Octopus Service,cn=users,dc=devopswalker,dc=local`.
- The root DN you wish to use for users and groups. This example will be `cn=users,dc=devopswalker,dc=local`, as both users and groups are in the same directory on the example server.

:::div{.hint}
This example uses a straightforward Active Directory configuration. Your DN and FQDN might be much more complex. Please consult your system administrator for all the required configuration values.
:::

## Getting permissions

If you are installing a clean instance of Octopus Deploy, you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command:

```powershell
Octopus.Server.exe admin --username USERNAME --email EMAIL
```

The most important part of this command is the email, as usernames are not necessarily included in the claims from the external providers. When the user logs in, the matching logic must be able to align their user record based on the email from the external provider or they will not be granted permissions.

### Octopus user accounts are still required

Even if you are using an external identity provider, Octopus still requires a [user account](/docs/security/users-and-teams/) so you can assign those people to Octopus teams and subsequently grant permissions to Octopus resources. Octopus will automatically create a [user account](/docs/security/users-and-teams) based on the profile information returned from the LDAP lookup.

**How Octopus matches external identities to user accounts**

You can configure the attributes used to match external identities to user accounts. By default, Octopus will use `sAMAccountName` for the unique account name and `displayName` for the display name.
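The domain-to-DN translation described under Understanding DNs is mechanical: each DNS label becomes a `dc=` component. A tiny illustrative sketch (this helper is purely hypothetical, not part of Octopus or the LDAP provider):

```python
def domain_to_dn(domain: str) -> str:
    """Translate a DNS domain (e.g. 'devopswalker.local') into an LDAP base DN."""
    # Each dot-separated label maps to one dc= component, in order.
    return ",".join(f"dc={label}" for label in domain.split("."))

print(domain_to_dn("devopswalker.local"))  # dc=devopswalker,dc=local
```

Prefixing `cn=` components (e.g. `cn=users`) then narrows the DN down to a specific container or entry, as in the examples above.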
:::div{.success}
**Already have Octopus user accounts?**

If you already have Octopus user accounts and you want to enable external authentication, simply make sure the Email Address matches in both Octopus and the external identity provider. This means your existing users will be able to sign in using an external identity provider and still belong to the same teams in Octopus.
:::

## Configuring LDAP authentication provider

Navigate to **Configuration ➜ Settings ➜ LDAP**. Enter values in the following fields:

- **Server**: Enter the FQDN of your server.
- **Port**: Change the port (if your secure port is different from the default).
- **Security Protocol**: Change to SSL or StartTLS.
- **Username**: Enter the username that will be used to perform the user lookups. It can either be `[username]@[domain name]` or the user's DN. In this example it will be `cn=Octopus Service,cn=users,dc=devopswalker,dc=local`.
- **User base DN**: Enter the base DN for your users, which in the example is `cn=users,dc=devopswalker,dc=local`.
- **Group base DN**: Enter the base DN for your groups, which in the example is also `cn=users,dc=devopswalker,dc=local`.
- **Is Enabled**: Check the check box to enable the feature.

:::div{.hint}
The root DN `cn=users,dc=devopswalker,dc=local` was selected because that is the directory for both users and groups in the example Active Directory server.
:::

:::figure
![basic configuration for LDAP authentication provider](/docs/img/security/authentication/ldap/images/ldap-auth-provider-configuration.png)
:::

## Testing the LDAP authentication provider

After configuring the LDAP authentication provider, you will want to test it. There are two easy tests you can perform without logging out/logging in as a different user:

- External User Lookup
- External Group Lookup

For the external user lookup, navigate to **Configuration ➜ Users** and select a user account. Once that screen is loaded, expand the LDAP section under logins and click the `ADD LOGIN` button.
If everything is working correctly, then you will see a modal window similar to this. :::figure ![successful user lookup](/docs/img/security/authentication/ldap/images/successful-user-lookup.png) ::: If the LDAP authentication provider or LDAP server is not configured properly, you will encounter an error similar to this. :::figure ![failed user lookup](/docs/img/security/authentication/ldap/images/failed-user-lookup.png) ::: The error `Unable to connect to the LDAP server. Please see your administrator if this re-occurs. Error Code 49 Invalid Credentials` is an LDAP lookup error caused by bad credentials. That is easy to debug, but there might be a specific reason why that is failing. You can find the specific error type code by looking at your Octopus server logs. :::figure ![data error code](/docs/img/security/authentication/ldap/images/ldap-error-data.png) ::: The external group lookup is the same as the external user lookup. Except, go to **Configuration ➜ Teams** and select a team. Then click the button `ADD LDAP GROUP` and perform a search. If everything is configured correctly, then you will see this message: :::figure ![external group lookup successful](/docs/img/security/authentication/ldap/images/external-group-success.png) ::: If the lookup fails, then perform the same troubleshooting you did for the user lookup. ## Signing in After the above tests are successful, it is time to try the next test, logging into Octopus using the LDAP authentication provider. We recommend creating a test account. For this example, the test account `Professor Octopus`, was created and added to the `Developers` group. Signing in with the username `professor.octopus` worked as expected. As stated earlier, the default configuration is to match on the `sAMAccountName` attribute. Assuming the username and password were successful, the new user was created and assigned to the appropriate team. 
:::figure ![Successful sign in](/docs/img/security/authentication/ldap/images/new-user-created.png) ::: ## Changing the user filter By default, Octopus matches on the `sAMAccountName` attribute. In our testing, it proved to be more reliable than other options. With that default, signing in using the UPN such as `professor.octopus@devopswalker.local`, will give you this error: :::figure ![UPN Error](/docs/img/security/authentication/ldap/images/failed-sign-in.png) ::: You might have a company policy (or personal preference) to use the UPN. If so, change the User Filter to be `(&(objectClass=person)(userPrincipalName=*))`. :::figure ![Updated User Filter](/docs/img/security/authentication/ldap/images/updated-ldap-user-filter.png) ::: That is because with Active Directory, the email address is stored on the user principal, not the user id. :::figure ![user principal vs user id](/docs/img/security/authentication/ldap/images/user-id-vs-principal.png) ::: ## Troubleshooting If you encounter errors configuring the LDAP authentication provider, you can do the following steps to troubleshoot any problems. ### Take Octopus out of the equation The first recommendation is to use an LDAP lookup tool, such as [ldp.exe](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/cc771022(v=ws.11)) for Windows (or [ldapsearch](https://wiki.debian.org/LDAP/LDAPUtils) for Linux), to connect to your directory server over LDAP. Run that tool from the same server hosting Octopus Deploy. If that tool cannot connect, then chances are there is a firewall or some other configuration issue you'll need to fix. ### Review the logs You can find all the LDAP failures in the Octopus logs on the Octopus Server. Lookup the error codes and data codes to see what the specific error is. You can look up that specific error code using your search engine of choice to find the specific error message and a more detailed description. 
![data error code](/docs/img/security/authentication/ldap/images/ldap-error-data.png) # Exposing Octopus Source: https://octopus.com/docs/security/exposing-octopus.md Your entire Octopus installation and all targets you deploy to could be contained safely within your corporate network. This is nice from a security perspective, however you may want your team to access Octopus from outside your corporate network, or you may need to deploy to servers outside your corporate network. This section will help you plan your Octopus installation and help you understand the security implications of different network topologies. ## Security and encryption We take security very seriously at Octopus Deploy and have gone to great lengths to protect your privacy and security. Learn more about [how Octopus handles security and encryption of your data](/docs/security/data-encryption). Learn more about [how Octopus communicates with Tentacle](/docs/security/octopus-tentacle-communication). We undertake routine penetration testing and security audits. These reports are available on request from our [Trust Center](https://trust.octopus.com/). ## Where to host your Octopus Server The Octopus Server is the central component of your Octopus installation. It hosts the Octopus HTTP API and the Octopus Web Portal, and is the central communication hub for deploying your applications. It needs direct access to your [SQL Server Database](/docs/administration/data) and a file store, which can be on a local disk, or a network file share. You should host your Octopus Server in the best location based on your scenario. As a general rule of thumb, you should host your Octopus Server where it has the best access to the machines where you deploy your applications, and to the users who design and perform deployments. You can choose to expose your Octopus Server to the public Internet, or you can constrain access to your corporate network. 
## Inbound requests The Octopus Server hosts an HTTP API and the Octopus Web Portal which you can configure to use standard TCP ports (80/443) or non-standard ports. Your Octopus Server can also be configured to accept inbound requests from Polling Tentacles over a custom TCP port, or using WebSockets. The only inbound requests to your Octopus Server should be ones authorized by you. It could be your users, or Polling Tentacles, or services you've configured to leverage the Octopus API. ### Octopus HTTP API and Web Portal If you do not want to expose your Octopus Server to the public Internet, but want to provide remote access to users or other services, we recommend using a VPN. This will allow your remote workers to access your Octopus Server without exposing it to the public Internet directly. However, you may want to provide access for your users, or external services which leverage Octopus, and using a VPN is impractical. If you decide to expose the HTTP API and Octopus Web Portal of your Octopus Server to the public Internet, here are some things you should consider: 1. Always enable HTTPS using SSL. We also recommend forcing all requests to use HTTPS, and enabling HSTS. Learn about [exposing Octopus Server over HTTPS](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https). Avoid exposing your Octopus Server via HTTP without SSL. 1. Consider how your users authenticate with your Octopus Server. You should use an authentication provider which supports multifactor authentication (MFA). Learn about [authentication providers](/docs/security/authentication). 1. Consider setting up a routine security scan of your Octopus Server using a tool of your choice. This will provide further insights into the security precautions you should take. 1. Octopus enables certain security-related HTTP headers by default, however some of them are optional. Learn about [security headers](/docs/security/http-security-headers). 
### Polling Tentacles The Octopus Server communicates with the machines involved in your deployments via Tentacle or SSH, or via some other protocol depending on your specific scenario. In most cases these are outbound requests, originating from the Octopus Server. The one exception to this is Polling Tentacles, where the Tentacle initiates a request to the Octopus Server. If you are using Polling Tentacles, you will need to open your firewall to allow Polling Tentacles to access your Octopus Server via the TCP port you've configured (default is port 10943), or via WebSockets using the HTTPS binding you have configured. Learn about [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) and [proxy server support for Polling Tentacles](/docs/infrastructure/deployment-targets/proxy-support). We generally recommend using Listening Tentacles and SSH wherever practical. If you are not using Polling Tentacles, you can keep that port closed on your firewall. ## Outbound requests The Octopus Server generally makes outbound requests according to your specific deployment scenarios, like sending instructions to a Listening Tentacle or SSH endpoint, or reaching out to an external web service. You should consider the security implications related to your Octopus Server and outbound requests to design a set of network restrictions which balance security and usability. Learn more about [outbound requests](/docs/security/outbound-requests). ### Proxy servers You can configure Octopus Server to make any outbound HTTP requests, and even Tentacle or SSH connections, via a proxy server, offering you a greater level of control over outbound requests from the Octopus Server. Learn about [proxy server support](/docs/infrastructure/deployment-targets/proxy-support).
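When troubleshooting inbound connectivity for the Polling Tentacle port discussed above, a quick TCP check from the Tentacle side can confirm whether the firewall is open. A minimal sketch, assuming the default port 10943 and a placeholder hostname:

```python
import socket

def can_reach(host: str, port: int = 10943, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is a placeholder for your Octopus Server):
# can_reach("octopus.example.com", 10943)
```

A `False` result only tells you the port is unreachable from that machine; it does not distinguish a firewall rule from the server not listening.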
# Creating the tenant tag set Source: https://octopus.com/docs/tenants/guides/tenants-sharing-machine-targets/creating-the-tenant-tag-set.md In this scenario, each tenant's application is hosted on one of three groups of infrastructure. We will define a [tenant tag set](/docs/tenants/tenant-tags) to represent each group. The tag set can be used to easily map tenants to the correct infrastructure. To create a tenant tag set, navigate to **Deploy ➜ Tenant Tag Sets ➜ Add Tag Set**. For this example we'll use the tag set name **Hosting Group** with 3 tags **Hosting Group 1**, **Hosting Group 2**, and **Hosting Group 3**. # Tenant creation Source: https://octopus.com/docs/tenants/tenant-creation.md 1. Select **Tenants** from the main navigation and click the **Add tenant** button: :::figure ![](/docs/img/shared-content/tenants/images/add-new-tenant.png) ::: 2. Select if you want to **Add blank tenant** or **Clone an existing tenant**: :::figure ![](/docs/img/shared-content/tenants/images/blank-or-clone-tenant.png) ::: 3. Enter the name you want to use for the tenant and click the **Save** button: :::figure ![](/docs/img/shared-content/tenants/images/creating-new-tenant.png) ::: Now that you've created a tenant, you can enable [tenanted deployments](/docs/tenants/tenant-creation/tenanted-deployments/) and then [connect the tenant to a project](/docs/tenants/tenant-creation/connecting-projects). :::div{.hint} It's also possible to create a tenant using the [Octopus REST API](/docs/octopus-rest-api/). Learn more in our [create a tenant](/docs/octopus-rest-api/examples/tenants/create-tenant) example. ::: ## Tenant logo \{#tenant-logo} Try adding a logo for your tenant - this will make it much easier to distinguish your tenants. You can do this by navigating to a tenant and clicking the **Settings** tab. Your tenants will likely be other businesses, and you could use their logo to help quickly identify the correct tenant.
You could consider using logos based on: - Customer logos - Data center region(s) or flags - Individual tester(s) photo/avatar # Connecting projects Source: https://octopus.com/docs/tenants/tenant-creation/connecting-projects.md By connecting tenants to projects, you can control which projects will be deployed into which environments for each tenant. 1. Navigate to your tenant. 2. Click on the **CONNECT PROJECTS** button. :::figure ![](/docs/img/tenants/tenant-creation/images/multi-tenant-connect-projects.png) ::: 3. Choose the projects you want to connect to your tenant by clicking any project in the left-hand panel of the wizard. Click the - button of a project in the right-hand panel to deselect that project. :::figure ![](/docs/img/tenants/tenant-creation/images/multi-tenant-connect-projects-dialog.png) ::: 4. Once you have selected the projects you want to connect, click **NEXT**. 5. Choose the [environments](/docs/infrastructure/environments) you want the tenant to be connected to for each project. You can select just one or two from the drop-down menu, or click **Assign all available environments** to select all available environments. :::div{.info} Not seeing the environment you want? Make sure at least one lifecycle used by your project includes that environment. ::: 6. A preview of the selected projects and environments is shown in the connection preview panel. The selected environments will be assigned to each project based on whether they are part of any lifecycle in the project. If an environment is not part of any lifecycle in the project, it will not be assigned to the project. :::figure ![](/docs/img/tenants/tenant-creation/images/multi-tenant-connect-environments.png) ::: 7. Click **CONNECT PROJECTS** You can connect each tenant to any number of projects and, for each project, any combination of environments that each project can target. This gives you the most flexibility when designing your multi-tenant deployments.
- You can offer specific projects to some tenants and not to others. - You can also provide most of your tenants with a single environment while offering specific customers extra environments. For example, you could give particular customers a test/staging/acceptance environment where they can test new releases before upgrading their production environment. ## Older versions The project connection feature was updated to allow bulk selection in Octopus Deploy **2023.4**. If you are running an older version of Octopus, the dialog will only allow selecting a single project at a time. # octopus account ssh create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-ssh-create.md Create an SSH Key Pair account in Octopus Deploy ```text Usage: octopus account ssh create [flags] Aliases: create, new Flags: -d, --description string A summary explaining the use of the account to other users. -D, --description-file file Read the description from file. -e, --environment stringArray The environments that are allowed to use this account. -n, --name string A short, memorable, unique name for this account. -p, --passphrase string The passphrase for the private key, if required. -K, --private-key string Path to the private key file portion of the key pair. -u, --username string The username to use when authenticating against the remote host. Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus account ssh create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Manually fail a task Source: https://octopus.com/docs/releases/manually-fail-a-task.md Octopus implements a queue of running background tasks. Sometimes, a task may hang, or be canceled, but never actually finish canceling. This prevents any new tasks from beginning, and new tasks will eventually appear as Timed Out. When a task is queued, you'll see a list of tasks that it is waiting on in the task summary: :::figure ![Cancel a running task](/docs/img/releases/images/cancel-tasks.png) ::: You can navigate to any of these tasks, and then click the Cancel button in the top right corner of the executing/waiting/queued task (you may need to click it twice). This will mark the blocked task as Failed and then allow your new task to proceed. # Granular Permissions Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/granular-permissions.md Kubernetes offers an RBAC system to lock down what Kubernetes objects your workloads can create and access. The Kubernetes agent supports setting a [single service account](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/permissions) for your script pods during installation, but this does not fit all use cases. If you are sharing a cluster between teams and/or environments, granular Kubernetes agent permissions can help lock down your cluster without creating a Kubernetes agent per permission set. Granular Kubernetes agent permissions works by having the cluster admin create objects on the target cluster that link Octopus scopes to Kubernetes permissions. ## How does it work? For each namespace you are deploying to in your Kubernetes cluster, you'll create a `WorkloadServiceAccount` that specifies which spaces, projects, environments and/or tenants are allowed to act under a set of permissions.
When you don't create a `WorkloadServiceAccount` with a matching scope for your deployment, the default script pod permissions configured during installation of the Kubernetes agent will be used instead. Once you've added your `WorkloadServiceAccounts`, Octopus will handle assigning permissions transparently. ## Who should use this feature Use it if you require the principle of least privilege or limited access for your developers. This feature increases friction when creating new applications, so we do not recommend it for all circumstances. ## How do I use it? Granular permissions uses a Kubernetes controller and custom resources to configure Kubernetes RBAC per deployment. ### Installing Octopus Permissions Controller Octopus Permissions Controller is a standalone component that is installed via Helm, much like the Kubernetes agent. The installation includes the controller itself and the `WorkloadServiceAccount` CRD. Only a single Octopus Permissions Controller is required per cluster. The Helm command below will install Octopus Permissions Controller. ```sh helm upgrade --install --atomic \ --create-namespace --namespace octopus-permissions-controller-system \ --reset-then-reuse-values \ octopus-permissions-controller \ oci://registry-1.docker.io/octopusdeploy/octopus-permissions-controller-chart ``` :::div{.info} **Prerequisites:** - Kubernetes agent v2.28.1+ - [Cert Manager](https://cert-manager.io) ::: ### Workload Service Accounts `WorkloadServiceAccounts` can be created as you would any other Kubernetes object. Your `WorkloadServiceAccount` should be created in the namespace you will be deploying your application resources into. :::figure ![Deployed resources](/docs/img/kubernetes/targets/kubernetes-agent/granular-permissions/deployed-resources.png) ::: `Roles` and `RoleBindings` will be created in the application namespace by Octopus Permissions Controller. A linked `ServiceAccount` will be created in the Kubernetes agent namespace.
:::figure ![Created resources](/docs/img/kubernetes/targets/kubernetes-agent/granular-permissions/created-resources.png) ::: When a deployment that matches the scope configured on the `WorkloadServiceAccount` starts, the created `ServiceAccount` will automatically be assigned. #### Creating Workload Service Accounts The `WorkloadServiceAccount` spec consists of two main parts: the scope and the permission set. ```yaml apiVersion: agent.octopus.com/v1beta1 kind: WorkloadServiceAccount metadata: name: sample-wsa namespace: your-application-namespace spec: scope: spaces: [default] projects: [guestbook] environments: [dev-a,dev-b] permissions: permissions: - verbs: ["*"] apiGroups: ["*"] resources: ["*"] roles: - apiGroup: rbac.authorization.k8s.io/v1 kind: Role name: your-existing-role ``` :::div{.info} For more examples and common scenarios, have a look at the [Octopus Permissions Controller repo](https://github.com/OctopusDeploy/octopus-permissions-controller/tree/main/examples) ::: ##### Scope Each `WorkloadServiceAccount` is assigned a scope with the following fields: - Spaces - Projects - Environments - Tenants - Steps These fields are matched against the corresponding slug within Octopus. Each field adheres to the following rules to match: - A field that is omitted entirely is treated as a wildcard; it will match any value - A field with one or more values will match exactly to one or more slugs - Each value must be a complete slug, partial matches are not supported Each `WorkloadServiceAccount` must have at least one non-empty field. You cannot have a `WorkloadServiceAccount` that matches every scope.
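The matching rules above can be sketched as follows. This is an illustrative model of the documented behavior only, not the controller's actual implementation, and the function name `scope_matches` is hypothetical:

```python
def scope_matches(scope: dict, deployment: dict) -> bool:
    """Illustrative model: does a WorkloadServiceAccount scope match a deployment?

    `scope` maps field names ("spaces", "projects", ...) to lists of slugs.
    An omitted (empty) field acts as a wildcard; values must equal slugs exactly,
    so partial matches never succeed.
    """
    if not any(scope.values()):
        # Documented rule: at least one scope field must be non-empty.
        raise ValueError("At least one scope field must be non-empty")
    for field, slugs in scope.items():
        if slugs and deployment.get(field) not in slugs:
            return False
    return True

# Scope mirroring the sample-wsa manifest above; tenants/steps act as wildcards.
wsa_scope = {"spaces": ["default"], "projects": ["guestbook"],
             "environments": ["dev-a", "dev-b"], "tenants": [], "steps": []}
```

For example, a deployment of project `guestbook` to environment `dev-a` in the `default` space matches this scope for any tenant, while a deployment to `production` does not.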
##### Permissions The permissions applied for each scope can be configured in a couple of ways: - Directly reference permissions on the `WorkloadServiceAccount` - Reference existing `Roles` or `ClusterRoles` #### Cluster Workload Service Accounts When non-namespace-scoped permissions are required, `ClusterWorkloadServiceAccounts` are available to configure your permissions. These work the same way as `WorkloadServiceAccounts`. #### Combining WSAs Not all permissions exist in a vacuum and we don't want to repeat ourselves too much when creating `WorkloadServiceAccount` definitions. To help compose your desired permissions, `WorkloadServiceAccounts` are built additively. When a workload with a particular scope matches multiple `WorkloadServiceAccount` scopes, the permissions are combined and applied to a single `ServiceAccount`. ### Running deployments With the Octopus Permissions Controller and your `WorkloadServiceAccounts` configured, running deployments is done exactly as before and it will seamlessly apply the appropriate `ServiceAccount` that best matches the scope of the deployment. If there are no `WorkloadServiceAccounts` that match the deployment's scope, the deployment will use the default permissions configured for script pods when you installed the Kubernetes agent. :::div{.info} We recommend restricting the default permissions to be completely empty so that deployments without matching scopes will fail quickly. ::: ## Octopus Permissions Controller ### How does it work under the covers? Octopus Permissions Controller is in charge of several duties: - Managing the lifecycle of `WSAs` - Creating roles, role bindings and service accounts as defined by your `WSAs` - Applying service accounts to your Kubernetes agent script pods that run your deployment workloads `ServiceAccounts` are generated ahead of time by calculating the minimum number of unique permissions combinations that are defined by your `WorkloadServiceAccounts`.
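The additive behavior described above can be modeled as a simple union of permission rules across every `WorkloadServiceAccount` whose scope matched the workload. This is a sketch under that assumption; the tuple-based rule format is illustrative and is not the CRD schema:

```python
def combine_permissions(matching_wsas: list) -> set:
    """Union the permission rules of every WSA whose scope matched the workload.

    Each rule is modeled as a (verb, api_group, resource) tuple for brevity;
    the union means permissions only ever grow as more WSAs match.
    """
    combined = set()
    for wsa in matching_wsas:
        combined |= set(wsa["rules"])
    return combined

# Two hypothetical WSAs with overlapping scopes:
wsa_read = {"rules": [("get", "apps", "deployments"), ("list", "", "pods")]}
wsa_write = {"rules": [("create", "apps", "deployments")]}
```

Because combination is purely additive, a workload matching both WSAs receives read and write access; this is why keeping each WSA's permission set minimal matters.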
At the time of creation of a new script pod, Octopus Permissions Controller acts as a mutating admission webhook controller to match the scope annotated on the script pod with a matching `ServiceAccount` (if any). For some cases (e.g. health checks), the Kubernetes agent runs a workload without a specific scope and so no `ServiceAccount` is applied to the script pod by Octopus Permissions Controller. Interested in more detail? Check out the [Octopus Permissions Controller repository](https://github.com/OctopusDeploy/octopus-permissions-controller). ### Upgrades Because this component is shared between Kubernetes agents on your cluster, we have opted to separate its upgrade cycle from any single Kubernetes agent. As we deploy Octopus Permissions Controller as a Helm chart, you can use any method you wish to install new versions. Notification of new versions will be available in the connectivity page of your Kubernetes agent, as well as a command to help upgrade your existing installation. :::figure ![Permissions controller update](/docs/img/kubernetes/targets/kubernetes-agent/granular-permissions/opc-update.png) ::: ### Installing on a cluster with existing agents Octopus Permissions Controller can be installed on a cluster with existing Kubernetes agents and it will immediately start applying permissions from matching `WorkloadServiceAccounts`. It is highly recommended that you update each of your agents' default script pod permissions to be more restrictive. If a matching `WorkloadServiceAccount` is found, it will correctly apply restricted permissions, but any misconfiguration that results in no matching `WorkloadServiceAccount` could result in your deployment having more permissive permissions than intended. For basic installations of the Kubernetes agent, this command will remove default permissions.
```sh helm upgrade --install --atomic \ --create-namespace --namespace ${agent_namespace} \ --reset-then-reuse-values \ --set scriptPods.serviceAccount.clusterRole.enabled="false" \ ${release_name} \ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` ### Removing Octopus Permissions Controller If the Octopus Permissions Controller is no longer desired, it can be removed in two steps. - Uninstall the Helm chart from your cluster. If you installed the permissions controller with the default parameters, the commands below will do this. ```sh helm uninstall --namespace octopus-permissions-controller-system octopus-permissions-controller kubectl delete namespace octopus-permissions-controller-system ``` - Update your Kubernetes agent default permissions if required. The command below will allow for unrestricted deployments. ```sh helm upgrade --install --atomic \ --create-namespace --namespace ${agent_namespace} \ --reset-then-reuse-values \ --set scriptPods.serviceAccount.clusterRole.enabled="true" \ --set scriptPods.serviceAccount.clusterRole.rules=null \ ${release_name} \ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` ## Troubleshooting ### Is Octopus Permissions Controller operational? Octopus Permissions Controller will report its status via the health check each of the Kubernetes agents on the same cluster performs. :::figure ![Permissions controller connectivity](/docs/img/kubernetes/targets/kubernetes-agent/granular-permissions/opc-connectivity.png) ::: If the permissions controller is reported as not found, try running a new health check and monitor the Octopus Permissions Controller pod logs in Kubernetes to confirm that the script pod is being discovered. ### Deployment fails during verification When using [deployment verification](/docs/kubernetes/deployment-verification) with granular permissions, your deployment may fail during the verification phase even though the resources were created successfully.
This occurs because the script pod that performs the deployment also needs to read the deployed resources to verify they reached the desired state. To resolve this issue, update your `WorkloadServiceAccount` to include read permissions: - The `get` verb for parent resources (such as Deployments) - The `list` verb for child resources (such as Pods and ReplicaSets) ### Validating assigned permissions While developing your deployment processes and configuring `WorkloadServiceAccounts`, it can be easy to accidentally create an unexpected set of permissions through multiple `WorkloadServiceAccount` interactions. If you have access to the Kubernetes cluster, we recommend directly querying the service account assigned to the script pod during your deployment. For any permissions issues, you will find the service account name output in verbose logs in the Octopus deployment task. If you do not have access to query the Kubernetes cluster, we recommend you make use of the [`kubectl auth can-i`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_auth/kubectl_auth_can-i/) command by adding a "Run a kubectl script" step within your deployment process. :::div{.hint} To list all the permissions you have within a particular namespace, add `kubectl auth can-i --list -n <namespace>` to your script. ::: # octopus account ssh list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-ssh-list.md List SSH Key Pair accounts in Octopus Deploy ```text Usage: octopus account ssh list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy.
If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account ssh list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus account token Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-token.md Manage Token accounts in Octopus Deploy ```text Usage: octopus account token [command] Available Commands: create Create a Token account help Help about any command list List Token accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus account token [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account token list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus account token create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-token-create.md Create a Token account in Octopus Deploy ```text Usage: octopus account token create [flags] Aliases: create, new Flags: -d, --description string A summary explaining the use of the account to other users. -D, --description-file file Read the description from file. -e, --environment stringArray The environments that are allowed to use this account. -n, --name string A short, memorable, unique name for this account. -t, --token string The password to use when authenticating against the remote host.
Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account token create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Runbook variables Source: https://octopus.com/docs/runbooks/runbook-variables.md [Getting Started - Variables](https://www.youtube.com/watch?v=Hd71uhcD61E) Octopus supports variables so that your deployment processes and runbooks can be parameterized. This allows your processes to work across your infrastructure without having to hard-code or manually update configuration settings that differ across environments, deployment targets, channels, or tenants. For instance, when you deploy software into your test environment, you may need to provide the connection string for the test database, and when you promote the release to production, you need to provide the connection string for the production database. By assigning the connection strings as variable values and scoping those values to the test and production environments, the same deployment process works for both environments.
When the software is deployed to test, the test database is used, and when the software is deployed to production, the production database is used: | Name | Value | Scope | | --- | --- | --- | | database | TestSQLConnectionString | Testing | | database | ProductionSQLConnectionString | Production | You can manage the variables for your projects by navigating to your project in the **Project** tab of the Octopus Web Portal and selecting **Variables**: ![Project variables](/docs/img/shared-content/concepts/images/variables.png) ## Variables in runbooks A project's variables are available to both the runbooks and the deployment process, and the process for consuming variables is the same (see [an example](/docs/projects/variables/getting-started/#example)). ### Variables specific to a runbook There are scenarios where a variable may be specific to a runbook, and you don't want it to be available to other runbooks or the project's deployment process (this situation is common for [prompted variables](#prompted-variables)). :::figure ![Scoping a variable to a process](/docs/img/runbooks/runbook-variables/process-scoped-variable.png) ::: Variables can be scoped to specific runbooks, or to the deployment process, by navigating to **Project ➜ Variables**, adding a new variable, and defining the scope. On the scope dialog, there is a **Processes** field, which, when populated, restricts the variable availability to only the selected runbooks or deployment process. ## Prompted variables in runbooks \{#prompted-variables} [Prompted variables](/docs/projects/variables/prompted-variables) can be defined for runbooks. By default, prompted variables will prompt for the value when deploying or when running a runbook. By [scoping prompted variables](#Variables-specific-to-a-runbook) to one or more processes, they can be restricted to only prompt when deploying or for specific runbooks.
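The environment-scoped lookup behind the `database` variable table earlier on this page can be sketched as follows. This is an illustrative model only; Octopus's real variable evaluation also considers targets, channels, tenants, and specificity rules:

```python
def resolve(variables: list, environment: str):
    """Return the value of the first variable whose scope includes the
    environment, falling back to an unscoped value if one exists."""
    unscoped = None
    for var in variables:
        scope = var.get("scope")
        if scope is None:
            unscoped = var["value"]
        elif environment in scope:
            return var["value"]
    return unscoped

# The two scoped values of the "database" variable from the table above:
database = [
    {"value": "TestSQLConnectionString", "scope": ["Testing"]},
    {"value": "ProductionSQLConnectionString", "scope": ["Production"]},
]
```

Resolving `database` for the Testing environment yields the test connection string, and for Production the production one, which is why a single deployment process serves both environments.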
## Runbook variables in Git projects When snapshotting a Runbook in a Git project, the variables will always be taken from the default branch. The Git reference and commit that were used to create the snapshot are shown on the Runbook snapshot page. :::figure ![Screenshot of Octopus Runbook snapshot page showing variable snapshot with reference main and commit d6cff1a](/docs/img/runbooks/runbook-variables/git-variables-runbook-snapshot.png) ::: To use a different branch to snapshot variables, you will need to change the default branch for the project. # Database backups and rollbacks Source: https://octopus.com/docs/deployments/databases/common-patterns/backups-rollbacks.md A common question we get asked is, "how does Octopus Deploy handle rollbacks?" For stateless components of your application, such as Web UIs, Web APIs, and services, rollbacks are accomplished by various means. The most straightforward approach is to deploy the previous version of those components. You can also leverage more advanced patterns such as [Blue/Green, Red/Black](/docs/deployments/patterns/blue-green-deployments-with-octopus), or [Canary deployments](/docs/deployments/patterns/canary-deployments-with-octopus). For stateful components, such as a relational database, rollbacks are much more complex. This page focuses on database rollbacks. :::div{.hint} TL;DR: For stateful components, we recommend rolling forward and/or making any changes backward compatible with previous versions of your code. The risk is much lower, and it is often quicker to fix. ::: ## Database rollback pitfalls Your application's users are the reason rollbacks are high risk. Typically, applications aren't designed with a *read-only* or *maintenance mode* that is turned on during deployments. It is common to have users attempting to use the application during a deployment or verification. *Off-Hours* deployments are done as a way to reduce the chance of that happening. There are major pitfalls with rolling back databases: 1.
Schema changes, such as adding a column, creating a table, or updating a stored procedure, along with their corresponding migration scripts, are common. Unless tested, rollback scripts will result in data loss. Thus, a backup is needed. 2. The decision to roll back will come after a successful deployment. Most, if not all, automated database deployment tools use transactions to deploy changes, and they automatically roll back that transaction on failure. A restore of the database backup is required after the successful deployment. 3. Unless programmatically locked out, users will use the application during deployment verification. After a user changes data, any database backup taken before deployment is worthless. Rolling back to a database backup will result in data loss. :::div{.warning} A database backup has a very limited useful rollback lifespan. ::: Rolling back changed data will require extensive analysis and testing. As such, there cannot be an automated rollback process. There are too many what-if scenarios, and risk increases exponentially as more records are changed. As long as the application continues to run, the data will continue to change. Any rollback scripts to move data around will have to keep hitting a moving target. :::div{.hint} Prior to upgrading the Octopus Server, we recommend putting your server into [maintenance mode](/docs/administration/managing-infrastructure/maintenance-mode). When in maintenance mode, only Octopus Administrators can kick off deployments. This allows Octopus Administrators to test the upgrade without users changing data. If anything goes wrong, a rollback can happen because only test data was changed. ::: ## Making database changes backward compatible Making database changes backward compatible is often the first step towards advanced deployment patterns such as blue/green, red/black, or canary. In a nutshell, you will have two versions of code pointing to the same database.
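In code, "two versions pointing to the same database" usually means the newer version reads from the new location and falls back to the old one, while writing to both during the transition. A minimal sketch, using in-memory dicts as stand-ins for SQL rows and a hypothetical `email` column moving from TableA to TableB:

```python
# Rows are modeled as dicts; in a real system these would be SQL reads/writes.
def read_email(table_a_row: dict, table_b_row: dict):
    """Prefer the new location (TableB); fall back to the old one (TableA)."""
    value = table_b_row.get("email")
    return value if value is not None else table_a_row.get("email")

def write_email(table_a_row: dict, table_b_row: dict, value: str) -> None:
    """During the transition, write to both tables so either code version works."""
    table_a_row["email"] = value
    table_b_row["email"] = value
```

Because the old code keeps reading TableA and the new code prefers TableB, either version can run against the same database, which is what makes rolling back the code trivial.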
Many books exist on this subject; trying to distill it all down to a single section would be impossible. Some of the more common strategies for relational databases include: 1. Following the [expand/contract or parallel](https://www.martinfowler.com/bliki/ParallelChange.html) pattern when making changes. 2. All new columns are added as nullable. 3. Stored procedures are versioned or have parameters added with default values. 4. Relying on column names instead of column order in any code when performing queries. 5. Writing the code with the assumption that any new columns will be null. For example, moving a column from TableA to TableB would involve: 1. Add a new column to TableB as nullable. 2. Update the code to first pull from TableB; if the value does not exist, then pull from TableA. 3. Update the code to save to both TableA and TableB. 4. Deploy the database changes and updated code. 5. Finish migrating all data from TableA to TableB. 6. Update the code to only save to TableB. 7. Add the suffix _ToRemove[Date] to the column in TableA. 8. Deploy the updated code and database. 9. Delete the column from TableA. 10. Deploy the updated database. As you can see, making database changes backward compatible involves a disciplined and systematic approach. The advantage of this is that you can deploy your database changes independently of your code changes. Because the database works with two (or more) versions of the code, rolling back any code is a trivial task. Some of our customers who have adopted this approach deploy their database changes several days before the code. ## Database backup use cases As stated earlier, most, if not all, rollback decisions occur _after_ the database changes have been deployed. Database tooling wraps changes in transactions that are rolled back automatically on failure. This means either all of the changes are deployed or none of them are. Although database backups have a limited lifespan for rollbacks, they can still be useful in other use cases: 1.
Back up the testing or QA database after a deployment so developers can restore it to their local instances.
2. Back up prior to deploying a significant release to a *Production-Like* environment. If a failure occurs, it will be easier to test the fix on a known state of data.
3. Periodically back up data and store it in a secure location for disaster recovery.
4. Back up a test database to spin up a new instance to test a feature branch.

## Backup recommendations

Databases often contain personally identifiable information (PII), along with credit card or health care data. It is impossible for us to be experts in every law and regulation. As such, this section only provides rules of thumb for database backups. To ensure you are in compliance with all laws and regulations, please consult legal and security experts in your jurisdiction.

1. Use a designated backup service account to perform backups, one that is different from the deployment service account.
2. At the very least, use a different backup service account per environment. Ideally, use a different account per database per environment to reduce the attack surface area.
3. Store database backups in a secure file location. Only the backup service account should have access to that file location.
4. If you are storing credentials (username/password) in Octopus Deploy, mark the values as [sensitive](/docs/projects/variables/sensitive-variables). Sensitive variables are write-only through the Octopus Deploy API. The only time they are decrypted is during a deployment.
5. If the database server supports it, use integrated security. The Tentacles will [run as a specific user account](/docs/infrastructure/deployment-targets/tentacle/windows/running-tentacle-under-a-specific-user-account).

## Leveraging runbooks for backup and restore

Runbooks were added to Octopus Deploy in version **2019.11**.
Runbooks were designed for several use cases; one of them is the backup and restore of a database. There are several advantages to using a runbook over the database server's built-in job functionality:

1. Visibility. The status of a backup and restore can be seen by anyone with an Octopus Deploy login.
2. Reduced access to the database. Fewer people need to log in to the database to check on the status of a job.
3. Auditing. Everything about the runbook, be it an update to the process or a run, is audited. No more guesswork as to who last changed a job.
4. More complex processes. A runbook contains 1 to N steps, with the ability to disable/enable steps based on environment or via a variable.
5. One process across all environments. Each environment has its own database server, each with its own set of jobs that may or may not run. The same runbook can be applied to all environments.

# Deploy to SQL Server using Redgate SQL change automation

Source: https://octopus.com/docs/deployments/databases/sql-server/redgate.md

[Redgate's SQL change automation](https://www.red-gate.com/products/sql-development/sql-change-automation/) is one of many database deployment tools Octopus Deploy integrates with. This guide walks through configuring Octopus Deploy to leverage Redgate's SQL change automation. In addition to Octopus Deploy, the following items are required. This guide provides examples using Azure DevOps and TeamCity as the CI tool; however, the core concepts are the same with all the tools.
- Redgate SQL Toolbelt: - [14-day free trial](https://www.red-gate.com/dynamic/products/sql-development/sql-toolbelt/download) - CI Tool (pick one): - [Jenkins](https://www.jenkins.io/download/) - [TeamCity](https://www.jetbrains.com/teamcity/download/) - [Azure DevOps Server](https://azure.microsoft.com/en-us/services/devops/server/) - [Azure DevOps](https://go.microsoft.com/fwlink/?LinkId=2014881) - [Bamboo](https://www.atlassian.com/software/bamboo/download) - SQL Server Management Studio (SSMS): - [Free download](https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms) - SQL Server (pick one): - [SQL Express](https://www.microsoft.com/en-us/sql-server/sql-server-editions-express) - [SQL Developer](https://www.microsoft.com/en-us/sql-server/sql-server-downloads) ## Octopus Deploy preparation work The following preparation needs to be completed prior to creating and configuring projects in Octopus Deploy: 1. [Configure a worker pool](#configure-a-worker-pool) for Redgate SQL change automation to run on. 2. Install a [Tentacle on a Windows VM](#install-the-tentacle-on-a-windows-server). 3. Install the **Redgate** [step template](#install-step-templates). ### Configure a worker pool This documentation assumes a Windows VM already has the Tentacle installed on it. This guide will start with the worker pool creation and how to register that Tentacle as a worker. 1. To configure a worker pool in the Octopus Web portal, go to **Infrastructure ➜ Worker Pools**, and click **Add Worker Pool**. 2. When the modal window appears, enter a name, and if you see the **Static** and **Dynamic** options, select **Static** as the worker pool type: :::figure ![Create worker pool modal](/docs/img/deployments/databases/sql-server/images/redgate-octopus-create-worker-pool-modal.png) ::: 3. Next, add the VM the Tentacle was installed on by clicking **Add Worker**. 4. Select **Windows** and the Tentacle communication mode you plan to use. 
It is up to you which communication mode the worker uses. There are pros and cons to each mode:

:::figure
![Tentacle communication mode selection in the Octopus Web Portal](/docs/img/deployments/databases/sql-server/images/redgate-octopus-create-worker-select-tentacle-type.png)
:::

### Install the Tentacle on a Windows Server {#install-the-tentacle-on-a-windows-server}

Next, install the Tentacle on a Windows Server. Aside from the latest version of .NET, no other software is required. The Redgate tooling will be downloaded automatically during the deployment.

:::div{.info}
The server needs access to the PowerShell gallery to download the Redgate tooling.
:::

#### Listening Tentacles

Use the Octopus Web Portal to register a Listening Tentacle. You will need to download the Tentacle onto the server and select Listening as the communication mode. Follow the wizard. The server's thumbprint required by this form can be found on the add worker screen or in **Configuration ➜ Thumbprint**. This is the thumbprint of the server's certificate. The server and the Tentacle will exchange certificates to ensure a two-way trust is established.

:::div{.info}
The thumbprint in this screenshot is from a sample instance of Octopus Deploy. Your thumbprint will be different.
:::

:::figure
![Example thumbprint for a Listening Tentacle](/docs/img/deployments/databases/sql-server/images/listening-tentacle-thumbprint.png)
:::

After the Tentacle is configured, enter the IP address or the host name.

:::div{.info}
If you enter the host name of a private server, the Octopus Server will need to be able to resolve that host name via your DNS server.
:::

By default, the Listening Tentacle will listen on port `10933`. If you configured a different port, enter the port on this form, and click **Next**.
:::figure
![The screen to create a Listening Tentacle](/docs/img/deployments/databases/sql-server/images/redgate-octopus-create-listening-worker.png)
:::

The Octopus Server will attempt to connect to the Tentacle. The Listening Tentacle will only accept a connection if the server's thumbprint matches. After the communication is successful, provide a display name for the worker. Depending on the screen this wizard was started from, the worker pool may or may not be pre-populated. Click **Save** to save the worker to the database.

#### Polling Tentacles

The process to register Polling Tentacles as workers takes place in the **Tentacle Manager** on the server hosting the Tentacle. Select the polling Tentacle to get started with the wizard. On the credentials screen, enter a username and password or the [API key](/docs/octopus-rest-api/how-to-create-an-api-key) of a user who has permissions to add worker pools. This account will only be used for registration.

:::div{.info}
The registration process will connect to the RESTful API of the Octopus Server. It will connect over port 80 or 443 using the http/https protocol. After registration, the Tentacle will connect on port 10943 by default.
:::

After the credentials have been verified, select the worker option on the next screen.

:::div{.info}
Under the covers, there is nothing different between a worker and a target. They are both Tentacles. The difference is in how the Tentacle is registered with Octopus. The Octopus Server treats workers differently than targets.
:::

Select the space, give the worker a display name, and select the worker pool.

:::figure
![The screen to register a Polling Tentacle](/docs/img/deployments/databases/sql-server/images/tentacle-manager-register-polling-tentacle.png)
:::

Press **Install** to create the Tentacle and register it with the Octopus Server.
### Install step templates {#install-step-templates}

For this guide the following step templates will be used:

- [Redgate - Create Database Release (Worker Friendly)](https://library.octopus.com/step-templates/47d29b57-5bca-4205-ac62-ce10cdf8bab9/actiontemplate-redgate-create-database-release-(worker-friendly))
- [Redgate - Deploy from Database Release (Worker Friendly)](https://library.octopus.com/step-templates/adf9a009-8bbb-4b82-8f3b-6fb12ef4ba18/actiontemplate-redgate-deploy-from-database-release-(worker-friendly))

To install the steps from the library, navigate to **Deploy ➜ Manage ➜ Step Templates** and click **Browse**. The list of categories is alphabetical. Find the **Redgate** category, and select the first template, **Redgate - Create Database Release (Worker Friendly)**.

:::figure
![Selecting the Redgate step template](/docs/img/deployments/databases/sql-server/images/redgate-select-steptemplate.png)
:::

Repeat the same process for **Redgate - Deploy from Database Release (Worker Friendly)**.

:::div{.info}
The non-worker-friendly versions of these step templates are there for customers using a version of Octopus Deploy older than **2019.10.0**. That version added the ability to provide a package variable in a step template.
:::

## Build Server

A build server, such as Jenkins, TeamCity, Azure DevOps, Bamboo, Bitbucket Pipelines, CircleCI, or GitHub Actions, is required. Links to a number of build tools were provided at the start of this guide. The build server will take the database which was saved to source control and create a NuGet package for Octopus Deploy to consume. Octopus Deploy and Redgate provide a number of plugins for several build servers.

- Jenkins:
  - [Octopus plugin](https://plugins.jenkins.io/octopusdeploy/).
  - [Redgate plugin](https://plugins.jenkins.io/redgate-sql-ci/).
- TeamCity:
  - [Octopus plugin](https://plugins.jetbrains.com/plugin/9038-octopus-deploy-integration).
  - [Redgate plugin](https://www.red-gate.com/dlmas/TeamCity-download).
- Azure DevOps:
  - [Octopus plugin](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks).
  - [Redgate plugin](https://marketplace.visualstudio.com/items?itemName=redgatesoftware.redgateDlmAutomationBuild).
- Bamboo:
  - [Octopus plugin](https://marketplace.atlassian.com/apps/1217235/octopus-deploy-bamboo-add-on?hosting=server&tab=overview).
  - [Redgate plugin](https://marketplace.atlassian.com/apps/1213347/redgate-dlm-automation-for-bamboo?hosting=server&tab=overview).

### Azure DevOps

In Azure DevOps there are three steps to this process. The first step builds the database package from source control. The plugin provided by Redgate offers multiple operations, but for this step, select **Build a SQL Source Control project**. The sub folder path is a relative path. It needs to be the same directory configured in SQL Source Control. Finally, configure the build number; we recommend specifying the build number using a **SemVer** versioning strategy.

:::figure
![Build step in Azure DevOps](/docs/img/deployments/databases/sql-server/images/azure-devops-build-database-package.png)
:::

The push package to Octopus step can be a little tricky. The folder where the package is saved is not very apparent in the previous step. In this example, the package was saved in `$(Build.Repository.LocalPath)`. The full path for this example is:

```
$(Build.Repository.LocalPath)\RandomQuotes-SQLChangeAutomation.1.0.$(Build.BuildNumber).nupkg
```

The Octopus Server must be configured in Azure DevOps. The steps to do that are detailed in [this documentation](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/#add-a-connection-to-octopus-deploy). The last step is to create a release in Octopus Deploy and deploy it to dev using the plugin. Select the project from the drop-down list, and enter the same build number as the package. Expand the **Deployment** section and select an environment to deploy to.
Clicking _Show Deployment Progress_ will stop the build and force it to wait for Octopus to complete.

:::figure
![The release step in Azure DevOps](/docs/img/deployments/databases/sql-server/images/azure-devops-create-octopus-database-release.png)
:::

### TeamCity

The TeamCity setup is very similar to the Azure DevOps setup. Only three steps are needed.

:::figure
![Build step overview in TeamCity](/docs/img/deployments/databases/sql-server/images/teamcity-build-sql-automation-overview.png)
:::

The first step is the build database package step. This step has similar options to Azure DevOps; provide the folder where the database is stored as well as the package version:

:::figure
![TeamCity build database package step](/docs/img/deployments/databases/sql-server/images/teamcity-redgate-build-database.png)
:::

The package version only appears in the advanced options, and not setting it could result in `Invalid package version number` errors:

:::figure
![TeamCity build database package step advanced options](/docs/img/deployments/databases/sql-server/images/teamcity-redgate-build-advanced-options.png)
:::

The publish package step requires all three of the options to be populated. By default, the Redgate tool will create the NuGet package in the root working directory.

:::figure
![TeamCity Create and Push package step](/docs/img/deployments/databases/sql-server/images/teamcity-publish-package.png)
:::

The final step is creating and deploying the release. Very similar to before, provide the name of the project, the release number, and the environment to deploy to:

:::figure
![The release step in TeamCity](/docs/img/deployments/databases/sql-server/images/teamcity-create-database-release.png)
:::

## Create and configure the Octopus Deploy project

This guide will follow the [manual approvals process](/docs/deployments/databases/common-patterns/manual-approvals). The deployment process will be:

1. Create a delta script using Redgate's tooling.
2.
In **Staging** and **Production**, notify DBAs of the pending script.
3. In **Staging** and **Production**, pause for manual approval of the delta script.
4. Run the delta script using Redgate's tooling.
5. Notify the team of the deployment status.
6. On failure, page the DBAs.

In Octopus Deploy, that process will look like the following screenshot. This example uses **Slack** as the notification technology. Octopus Deploy supports a number of different mechanisms to notify users, including email, Slack, Microsoft Teams, and Twilio, to name a few.

:::figure
![Deployment process overview in Octopus Deploy](/docs/img/deployments/databases/sql-server/images/redgate-octopus-deploy-deployment-process-overview.png)
:::

Before adding steps to the process, a number of variables need to be created. We recommend namespacing the variables using [ProjectName].[Component].[Subcomponent].

- **Project.Database.Name**: The name of the database on the SQL Server to deploy to.
- **Project.Database.Password**: The password of the user account who has permissions to deploy. This is not required if you're using integrated security.
- **Project.Database.Server**: The SQL Server name or IP address to deploy to.
- **Project.Database.UserName**: The username of the user account who has permissions to deploy. This is not required if you're using integrated security.
- **Project.Redgate.ExportPath**: Where the tooling will create and export the database release to. Because this process uses workers, you need to save the files to a file share (or have one worker).

:::figure
![Variables in the Octopus Web Portal](/docs/img/deployments/databases/sql-server/images/redgate-octopus-deploy-variables.png)
:::

The first step in the deployment process, **Redgate - Create Database Release**, will compare what is in the NuGet package with the destination database and generate a delta script. Only the highlighted parameters are required.
:::figure
![Create Database Release screen](/docs/img/deployments/databases/sql-server/images/redgate-octopus-create-database-release.png)
:::

Configuring the notification step is dependent on the choice of technology. That isn't covered in this guide. For the manual intervention step, provide instructions, as well as the teams allowed to approve this release.

:::div{.info}
The choice of two teams in this example was intentional. The DBAs are the ones who should approve the release. The **Octopus Manager** team is there for the event of an emergency where an **Octopus Manager** needs to step in and fix it.
:::

:::figure
![Octopus manual intervention step](/docs/img/deployments/databases/sql-server/images/redgate-octopus-manual-intervention-step.png)
:::

The final step for this guide is **Redgate - Deploy from Database Release**. It takes the delta script created in the first step and runs it on the specified server. The number of options on this step is limited compared to the create release step.

:::figure
![Deploy from database release step](/docs/img/deployments/databases/sql-server/images/octopus-redgate-deploy-database-release.png)
:::

## Working example

An example of this process has been configured on the Octopus [samples instance](https://samples.octopus.app/app#/Spaces-106/projects/redgate-simple-example/deployments/process).

# Stage package uploads

Source: https://octopus.com/docs/deployments/packages/stage-package-uploads.md

To reduce downtime, Octopus always uploads all packages before installing any of them.
For example, given a deployment process that looks like this: - Run a script - Deploy package A - Deploy package B - Run another script - Deploy package C When the deployment runs, Octopus will insert an "Acquire" step to execute as part of the deployment process, before the first step that depends on packages: - Run a script - **Acquire packages** - Deploy package A - Deploy package B - Run another script - Deploy package C During the acquire packages stage, Octopus will upload all NuGet packages used in the deployment to all servers. We do this because package uploads can be time-consuming, so we want to minimize the downtime between installing packages A and B in this example. If you have a small window for downtime, you might like to **pre-stage** your packages. An easy way to do this is to use a [manual intervention step](/docs/projects/built-in-step-templates/manual-intervention-and-approvals). The deployment process would become: - **Acquire packages** - [Manual intervention step](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) - Deploy package A - Deploy package B - Run another script - Deploy package C Effectively, this will upload all packages, and then pause the deployment until you are ready to proceed. When your downtime window arrives, you can then click Proceed, and have the deployment continue. When configuring your manual intervention step, take note: - Under the **Package Requirements** section, select **After package acquisition**. ![](/docs/img/deployments/packages/images/package-acquisition.png) ## Learn more - [Transferring packages with a separate environment](/docs/deployments/patterns/transferring-packages-before-deployment/transferring-with-environment). - [Transferring packages with a separate project](/docs/deployments/patterns/transferring-packages-before-deployment/transferring-with-project). - [Wait for package acquisition with a manual intervention](/docs/deployments/packages/stage-package-uploads). 
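The acquisition rule described on this page (the acquire step is inserted immediately before the first step that depends on a package) can be modeled in a few lines. This is an illustrative sketch, not Octopus Server code; the step names are the ones from the example above:

```python
def insert_acquire(steps):
    # steps: list of (name, uses_package) tuples.
    # Insert an "Acquire packages" pseudo-step before the first step
    # that depends on a package, mirroring the behavior described above.
    result = []
    inserted = False
    for name, uses_package in steps:
        if uses_package and not inserted:
            result.append("Acquire packages")
            inserted = True
        result.append(name)
    return result

process = [
    ("Run a script", False),
    ("Deploy package A", True),
    ("Deploy package B", True),
    ("Run another script", False),
    ("Deploy package C", True),
]
print(insert_acquire(process))
# ['Run a script', 'Acquire packages', 'Deploy package A', 'Deploy package B',
#  'Run another script', 'Deploy package C']
```

Adding a manual intervention step configured for **After package acquisition** effectively pauses the process right after that inserted acquire step.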
# Kubernetes Monitor

Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor.md

The Kubernetes monitor is a component that runs alongside Tentacle in the cluster. The Kubernetes monitor tracks the health of resources deployed to the cluster via Octopus Server.

## How it works

The Kubernetes monitor communicates with Octopus Server over gRPC on a new port (8443) to send object information back to Octopus Deploy. Communications are initiated by the Kubernetes monitor, so no endpoints on the Kubernetes cluster need to be exposed. The monitor process uses the [Argo project's GitOps engine](https://github.com/argoproj/gitops-engine) to internally keep track of the resources running on your cluster and react to changes as they occur.

## Required Kubernetes permissions

### Registration

During registration, the Kubernetes monitor manages a secret to store its authentication information. To do so, a `Role` is created with the `get`, `list`, `create` and `update` verbs for the `secrets` resource. Once registered, this `Role` is deleted.

### Normal operation

Once the monitor is registered, the Kubernetes monitor is a read-only entity. To enable this, a `ClusterRole` is created for use by the Kubernetes monitor with the `get`, `watch` and `list` verbs for all groups and resources.

## Upgrading

The Kubernetes monitor's upgrade process is directly tied to the Kubernetes agent.
See [how upgrades work for the Kubernetes agent](/docs/kubernetes/targets/kubernetes-agent/upgrading).

## Troubleshooting

See [Kubernetes Live Object Status troubleshooting](/docs/kubernetes/live-object-status/troubleshooting).

# octopus account token list

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-token-list.md

List Token accounts in Octopus Deploy.

```text
Usage: octopus account token list [flags]

Aliases:
  list, ls

Global Flags:
  -h, --help                    Show help for a command
      --no-prompt               Disable prompting in interactive mode
  -f, --output-format string    Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string            Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**
Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus account token list
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Windows Services

Source: https://octopus.com/docs/projects/steps/configuration-features/windows-services.md

The Windows Services feature is one of the [configuration features](/docs/projects/steps/configuration-features/) you can enable as you define the [steps](/docs/projects/steps/) in your [deployment process](/docs/projects/deployment-process). The **Windows Service** feature is available on **deploy a package** steps; however, there is also a **Deploy a Windows Service** step which lets you install, reconfigure, and start Windows Services during deployment. See [Windows Services](/docs/deployments/windows/windows-services) for more details.
# Okta authentication

Source: https://octopus.com/docs/security/authentication/okta-authentication.md

Authentication using [Okta](https://www.okta.com/), a cloud-based identity management service. To use Okta authentication with Octopus you will need to:

1. Configure Okta to trust your Octopus Deploy instance (by setting it up as an app in Okta).
2. Configure your Octopus Deploy instance to trust and use Okta as an Identity Provider.

## Configure Okta

The first step is to configure Okta to trust your instance of Octopus Deploy by configuring an app in your Okta account.

### Configure an app

You must first have an account at [Okta](https://www.okta.com/). You can sign up for a free [developer account](https://developer.okta.com/signup/). Once you have an account, log in to the Okta admin portal.

:::div{.hint}
After signing up to Okta you will receive your own URL to access the Okta portal. For a developer account, it will look something like: `https://dev-xxxxxx-admin.okta.com`.
:::

1. Select the Applications tab and click the **Create App Integration** button.
![](/docs/img/security/authentication/okta/okta-add-app.png)
2. Choose **OIDC - OpenID Connect** for the **Sign-in method** and **Web Application** for the **Application type**, and click the **Next** button.
![](/docs/img/security/authentication/okta/okta-new-app-integration.png)
3. Enter an **App integration name** like Octopus Deploy, and for the **Sign-in redirect URIs** enter `https://octopus.example.com/api/users/authenticatedToken/Okta`, replacing `https://octopus.example.com` with the public URL of your Octopus Server. Remove any default **Sign-out redirect URIs** and click the **Save** button.
![](/docs/img/security/authentication/okta/okta-create-openid-integration.png)

:::div{.hint}
**Tips:**

- **Reply URLs are case-sensitive** - Please take care when adding this URL. They are **case-sensitive** and can be sensitive to trailing **slash** characters.
- **Not using SSL?** - We highly recommend using SSL, but we know it's not always possible. You can use `http` if you do not have SSL enabled on your Octopus Server. Please be aware of the security implications of accepting a security token over an insecure channel. Octopus integrates with [Let's Encrypt](/docs/security/exposing-octopus/lets-encrypt-integration), making it easier to set up SSL on your Octopus Server.
:::

If you want to allow users to log in directly from Okta, change the **Login initiated by** to _Either Okta or App_, set **Login flow** to _Redirect to app to initiate login_, and set the **Initiate login URI** to `https://octopus.example.com/#/users/sign-in`.

![](/docs/img/security/authentication/okta/okta-initiate-login.png)

:::div{.warning}
Support for OAuth code flow with PKCE was introduced in **Octopus 2022.2.4498**. If you are using a version older than this you will also need to select the **Implicit (hybrid)** grant type.
:::

### OpenID Connect settings

There are two values you will need from the Okta configuration to complete the Octopus configuration: the **Client ID** and **Issuer**. (The Client ID is also referred to as Audience.) Select the **Sign On** tab and scroll down to the **OpenID Connect ID Token** section. Take note of the **Issuer** and **Audience**, as you will need both these values to configure your Octopus Server.

:::figure
![](/docs/img/security/authentication/okta/okta-openid-token.png)
:::

#### Okta group integration \{#okta-groups}

If you want Okta groups to flow through to Octopus, you'll need to change the _Groups claim_ fields as follows:

:::figure
![Okta Groups claim](/docs/img/security/authentication/okta/okta-groups-claim-type.png)
:::

Note that the Regex is `.*` - the period is important!
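To see why the period matters, you can check the pattern with Python's `re` module (standard regex semantics; Okta's filter uses its own engine, so treat this as an approximation). `.*` matches every group name, while a bare `*` is not even a valid pattern:

```python
import re

# Hypothetical group names for illustration.
groups = ["Octopus.Admins", "Octopus.Deployers", "Everyone"]

# ".*" matches any group name, so all groups flow through the claim.
matched = [g for g in groups if re.fullmatch(r".*", g)]
print(matched)  # all three groups

# A bare "*" (missing the period) is not a valid regex.
try:
    re.fullmatch(r"*", "Octopus.Admins")
except re.error as e:
    print("invalid pattern:", e)
```

In other words, dropping the period doesn't just narrow the filter; it produces an invalid expression.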
#### Active Directory integration

If you want to configure Okta to present your Active Directory groups, use a similar approach to [Okta group integration](#okta-groups); however, we recommend that you use a _Groups claim type_ of **Expression** instead of **Filter**. For example, if you have all of your Octopus groups prefixed with `Octopus.` (e.g. `Octopus.Admins`, `Octopus.Deployers`, etc.) then you could use the following expression to pass along the correct AD groups:

```
Groups.startsWith("active_directory", "Octopus.", 10)
```

This expression will search `active_directory` for any groups that start with the name `Octopus.` and only return the first 10 results.

> A complete guide to Okta's group expressions is available [here](https://developer.okta.com/docs/guides/customize-tokens-dynamic/main/#add-a-groups-claim-with-a-dynamic-allow-list)

### Assign app

Next you will need to assign your app to people or groups within your Okta directory.

1. Select the **Assignments** tab and click the **Assign** button. You can assign your app to people, and to groups.
![](/docs/img/security/authentication/okta/okta-assign-app.png)
2. The app may be assigned to **Everyone** by default. If not, to assign the app to all users, you can simply assign the default **Everyone** group to the app, and click **Done**.

## Configure Octopus Server

You will need the **Client ID** (aka **Audience**), **Client secret**, and **Issuer** obtained from the Okta portal as described above.

:::div{.hint}
Support for OAuth code flow with PKCE was introduced in **Octopus 2022.2.4498**. If you are using a version older than this, the **Client secret** setting is not required.
:::

To configure Octopus to use Okta authentication you'll need:

- The **Client ID**, which should be a string value like `0a4b------------9yc3`.
- The **Client secret**, which should be a string value like `uJ----------------------------------SS`.
- The **Issuer**, which should be a URL like `https://dev-xxxxxx.oktapreview.com`.

Once you have those values, run the following from a command prompt in the folder where you installed Octopus Server:

```powershell
Octopus.Server.exe configure --OktaIsEnabled=true --OktaIssuer=Issuer --OktaClientId=ClientID --OktaClientSecret=ClientSecret

# e.g.
# Octopus.Server.exe configure --OktaIsEnabled=true --OktaIssuer=https://dev-xxxxxx.oktapreview.com --OktaClientId=0a4b------------9yc3 --OktaClientSecret=uJ----------------------------------SS
```

Alternatively, these settings can be defined through the user interface by selecting **Configuration ➜ Settings ➜ Okta** and populating the fields `Issuer`, `ClientId`, `ClientSecret` and `IsEnabled`.

:::figure
![Settings](/docs/img/security/authentication/okta/okta-settings.png)
:::

:::div{.hint}
The request to Okta from Octopus will need to include the required scopes. See the [Inspect the request to Okta for scope](#inspect-request) section for information about how to inspect the scope of the current request.
:::

Run the command below as an Administrator to configure the scopes openid, profile, email, and groups:

```powershell
octopus.server.exe configure --oktaScope="openid%20profile%20email%20groups"
```

### Octopus user accounts are still required

Octopus still requires a [user account](/docs/security/users-and-teams/) so you can assign those people to Octopus teams and subsequently grant permissions to Octopus resources. Octopus will automatically create a [user account](/docs/security/users-and-teams) based on the profile information returned in the security token, which includes an **Identifier**, **Name**, and **Email Address**.

:::div{.hint}
**How Octopus matches external identities to user accounts**

When the security token is returned from the external identity provider, Octopus looks for a user account with a matching **Identifier**. If there is no match, Octopus looks for a user account with a matching **Email Address**.
If a user account is found, the External Identifier will be added to the user account for next time. If a user account is not found, Octopus will create one using the profile information in the security token.
:::

:::div{.success}
**Already have Octopus user accounts?**

If you already have Octopus user accounts and you want to enable external authentication, simply make sure the Email Address matches in both Octopus and the external identity provider. This means your existing users will be able to sign in using an external identity provider and still belong to the same teams in Octopus.
:::

### Getting permissions

If you are installing a clean instance of Octopus Deploy, you will need to *seed* it with at least one admin user. This user will have access to create and configure other users as required. To add a user, execute the following command:

```powershell
Octopus.Server.exe admin --username USERNAME --email EMAIL
```

The most important part of this command is the email, as usernames are not necessarily included in the claims from the external providers. When the user logs in, the matching logic must be able to align their user record with the email from the external provider, or they will not be granted permissions.

## Troubleshooting

We do our best to log warnings to your Octopus Server log whenever possible. If you are having difficulty configuring Octopus to authenticate with Okta, be sure to check your [server logs](/docs/support/log-files) for warnings. You can also check Okta logs by clicking the **View Logs** link on the Okta admin portal.

:::figure
![](/docs/img/security/authentication/okta/okta-view-logs.png)
:::

### Double- and triple-check your configuration

Unfortunately, security-related configuration is sensitive to every detail. Make sure:

- You don't have any typos or copy-paste errors.
- Remember things are case-sensitive.
- Remember to remove or add slash characters - they matter too!
### Check OpenID Connect metadata is working You can see the OpenID Connect metadata by going to the Issuer address in your browser and adding `/.well-known/openid-configuration` to the end. In our example this would have been something like `https://dev-xxxxxx.oktapreview.com/.well-known/openid-configuration`. ### Inspect the contents of the security token :::div{.warning} **Inspection of a JWT is impossible with OAuth code flow with PKCE** Please note: It's impossible to inspect the JWT within the Network tab of your browser's developer tools if you use OAuth code flow with PKCE (with a Client Secret specified in your Okta configuration in Octopus). If you'd like to use it for troubleshooting, you would need to remove the Client Secret, which would revert to Implicit flow authentication. We have plans to improve this in an upcoming version of Octopus, allowing more debug information to be visible while using PKCE. ::: Perhaps the contents of the security token sent back by Okta aren't exactly what Octopus expects, especially certain claims which may be missing or named differently. This will usually result in the Okta user incorrectly mapping to a different Octopus User than expected. The best way to diagnose this is to inspect the JSON Web Token (JWT) which is sent from Okta to Octopus via your browser. To inspect the contents of your security token: 1. Open the Developer Tools of your browser and enable Network logging, making sure the network logging is preserved across requests. 2. In Chrome Dev Tools this is called "Preserve Log". In Firefox this is called "Persist Logs". ![](/docs/img/security/authentication/images/5866122.png) 3. Attempt to sign into Octopus using Okta and find the HTTP POST coming back to your Octopus instance from Okta on a route like `/api/users/authenticatedToken/Okta`. You should see an `id_token` field in the HTTP POST body. 4. 
Grab the contents of the `id_token` field and paste that into [https://jwt.io/](https://jwt.io/) which will decode the token for you. ![](/docs/img/security/authentication/images/5866123.png) :::div{.hint} Don't worry if jwt.io complains about the token signature, it doesn't support RS256 which is used by Okta. ::: 5. Octopus uses most of the data to validate the token, but primarily uses the `sub`, `email` and `name` claims. If these claims are not present you will likely see unexpected behavior. 6. If you are not able to figure out what is going wrong, please send a copy of the decoded payload to our [support team](https://octopus.com/support) and let them know what behavior you are experiencing. ### Inspect the request to Okta for scope \{#inspect-request} If your request to Okta does not contain all appropriate scopes inside of it, this may result in unexpected behavior when logging into Octopus with Okta. For example, if your request does not have the groups scope, you will not get the appropriate permissions when logging in. To find out if you need to add a scope to your request, you can do the following: 1. Open the Developer Tools of your browser and enable Network logging making sure the network logging is preserved across requests. 2. In Chrome Dev Tools this is called "Preserve Log". In Firefox this is called "Persist Logs". 3. Attempt to sign into Octopus using Okta and find the HTTP GET that is sent to Okta. Inside of this request you will see request scopes. ![](/docs/img/security/authentication/images/okta-request-scope.png) 4. Within the URL request, look at the scope section. In the example above, you will see the scope includes **openid, profile, email, and groups**. These are the default scopes Octopus expects. 
If the scope section of your URL doesn't contain these four scopes, you will need to remediate this by remoting into the Octopus Server and running the following in a command prompt or PowerShell as an Administrator: `octopus.server.exe configure --oktaScope="openid%20profile%20email%20groups"` # Outbound requests Source: https://octopus.com/docs/security/outbound-requests.md This page describes any outbound network requests made by Octopus and Tentacle, and what information is included when Octopus checks for updates. ## Outbound requests by Tentacle For security reasons, we minimize the number of outbound requests made by the Tentacle deployment agent. The only outbound requests you should see are for: - [Certificate revocation list checking](http://en.wikipedia.org/wiki/Revocation_list), which is a security feature of .NET. - [Automatic root certificate updates](https://help.octopus.com/t/crl-ocsp-lookups-and-akamai-url-hits-from-octopus-and-tentacles/4854/3), again triggered by .NET. - NuGet package downloads (only when using the **Tentacle downloads directly from NuGet** option). - Connections back to the Octopus Server (only when Tentacle is configured in [polling mode](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles)). It's possible that scripts in your packages may make outbound requests; in this case you should take care when deploying packages created by a third party. ## Outbound requests by Octopus The Octopus Server makes the following outbound requests: 1. Pushing packages and deployment instructions to Tentacles, and checking their health. 2. Downloading packages from the [external package repositories](/docs/packaging-applications/package-repositories) that you configure. 3. Windows Azure traffic (only when deploying to an Azure deployment target). 4. Checking for updates (if enabled). 5. Checking for updated [built-in step templates](/docs/projects/built-in-step-templates) (if enabled). 6. 
Checking for updated [community contributed step templates](/docs/projects/community-step-templates) (if enabled). 7. Behavioral telemetry is sent to `https://telemetry.octopus.com` (if enabled). 8. Email address and behavioral data is sent to `https://experiences.octopus.com` via In-App messaging (if enabled). 9. Requests are sent to `https://aiproxy.octopus.com` to communicate with foundation models for [AI features](/docs/octopus-ai) in Octopus Deploy. ### Built-in step templates From **Octopus 2022.1** some built-in step templates can be automatically updated. Octopus will make requests to the following URLs in order to check for and download updated versions of step templates: - `steps-feed.octopus.com` - `stepsprodpackages.blob.core.windows.net`. The infrastructure for the service that hosts the updated versions of step templates runs in Azure. ### Community contributed step templates Our community contributed step template integration queries `library.octopus.com` for updates. ## What information is included when Octopus checks for updates? By default, Octopus will periodically check for new releases. You can opt out of checking for updates by navigating to **Configuration ➜ Settings ➜ Updates** in Octopus. When the "Check for updates" option is enabled, Octopus will make an HTTPS request to the `octopus.com` domain every 8 hours. This request includes: - The current Octopus Deploy version number that you are running. - A unique installation ID. :::div{.hint} **Microsoft Azure** The Octopus.com site is hosted on Microsoft Azure, so you will see traffic going to Azure services. ::: ## Disabling outbound requests In isolated/air-gapped scenarios without access to the internet, it may prove beneficial to disable attempts to contact these external services to prevent failed tasks and/or errors in the logs. 
Details on how to disable each feature are as follows: - Octopus Server updates - Via the Web Portal: **Configuration ➜ Settings ➜ Updates** - Via the CLI [configure command](/docs/octopus-rest-api/octopus.server.exe-command-line/configure): `Octopus.Server.exe configure --upgradeCheck=false` - Built-in step template updates - Via the Web Portal: **Configuration ➜ Features ➜ Step Template Updates** - Community step updates - Via the Web Portal: **Configuration ➜ Features ➜ Community Step Templates** - Telemetry - Via the Web Portal: **Configuration ➜ Telemetry** - Via the CLI [configure command](/docs/octopus-rest-api/octopus.server.exe-command-line/configure): `Octopus.Server.exe configure --sendTelemetry=false` - Dynamic Extensions - Via the CLI [configure command](/docs/octopus-rest-api/octopus.server.exe-command-line/configure): `Octopus.Server.exe configure --dynamicExtensionsEnabled=false` - In-App Messaging via Chameleon - Via the CLI [configure command](/docs/octopus-rest-api/octopus.server.exe-command-line/configure): `Octopus.Server.exe configure --experiencesEnabled=false` # Telemetry Source: https://octopus.com/docs/security/outbound-requests/telemetry.md Telemetry reporting is on by default. The data we receive helps us understand how our customers use Octopus and guides product decisions. We also collect usage patterns for the purpose of improving our user experience. Paid self-hosted customers can turn off telemetry reporting by navigating to **Configuration ➜ Telemetry** and unchecking the **Send telemetry** checkbox in the Octopus instance. When **Telemetry Reporting** is on, Octopus will make a secure HTTPS request containing the following data. | Data | Description | | ----- | ------ | | Version | The current Octopus Deploy version number that you are running. | | Installation ID | A GUID that we generate when Octopus is installed. 
This GUID is a way for us to get a rough idea of the number of installations that exist in the wild, and which versions people are using, so we can make decisions about backwards compatibility support. | | Telemetry payload | Configuration and usage information helps us make product decisions. For example, we expected users to have only a handful of machines, but the statistics tell us that some customers have over 900; we now take that into account when designing the user experience. | Be assured that names, descriptions, URIs, and so on are *never* included. You can download a preview of the data that will be sent by clicking **Download Telemetry Preview** on the **Configuration ➜ Telemetry** page. To learn more about Octopus and data privacy, see our [GDPR page](https://octopus.com/legal/gdpr). Please consider keeping **Telemetry Reporting** on. We review the data every week, and it really does help us make Octopus a better product 💙. # Providing database performance metrics Source: https://octopus.com/docs/administration/managing-infrastructure/performance/providing-database-performance-metrics.md ## Out of the box database performance in Octopus Deploy Every user has different usage patterns of Octopus Deploy with different numbers of projects, targets, releases and packages. As a result, no single database indexing strategy will provide the best fit for all installations. Users who are deploying to thousands of targets for a single project each day will have different database performance metrics to those who have just a few Tentacles, but hundreds of projects which constantly need dashboard updates. For this reason we have restricted indexes to a base schema, adding only those that look likely to provide benefit *on average* to most users. 
The database usage we see during development and testing will not necessarily match what you experience with your installation, and for that reason you may notice a less than optimal performance profile. In much the same way that we love to get our users involved with the feature planning process, our aim with performance is to work with users to learn how the database is used in the various real-world configurations, and where appropriate integrate that knowledge into future updates with schema and code changes to ensure that everyone reaps the rewards of a faster and more efficient installation. :::div{.hint} **Can I add my own indexes?** While we won't stop you from adding your own indexes if you feel they would provide some performance benefit, we generally advise against this, as it leaves the database schema in a state inconsistent with the base schema generated by the installation. When we create new features or provide bug fixes, this may involve schema changes which we script based on the assumption that the database currently looks like the default schema. Additional changes to this schema may mean that the upgrade will fail to complete. If you want to add your own indexes we would recommend running the System Integrity check (available via **Configuration ➜ Diagnostics ➜ Check System Integrity**) before performing the upgrade to see what the differences are from the assumed schema. If possible, remove these indexes and feel free to recreate them once the upgrade has completed. ![](/docs/img/administration/managing-infrastructure/performance/images/5865851.png) ::: :::div{.warning} **Azure automatic indexes** Azure SQL Databases are a great way to set up your Octopus database to be managed in the cloud. One feature that this product can provide is [automatic index management](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-advisor-portal). 
While this is a great way to set up your databases and forget about them, allowing Azure to decide and act on potential performance benefits, it means that indexes can be created without you being aware of them. As noted above, you will need to be aware of what custom indexes exist and remove them before performing an update to the Octopus Server to ensure that any new schema changes can be applied smoothly. ::: ## What you can do to help ### Missing indexes When you notice some performance problems that appear to be due to a slow database, we would love to get your database's recommendations on what indexes may be missing. Run the following query and provide the results (ideally as an attached file) in your support ticket. The query below is taken from a great blog post by Glen Berry - [Five Very Useful Index Selection Queries for SQL Server 2005](https://sqlserverperformance.wordpress.com/2007/10/12/five-very-useful-index-selection-queries-for-sql-server-2005/). **Missing indexes** ```sql SELECT user_seeks * avg_total_user_cost * (avg_user_impact * 0.01) AS index_advantage, migs.last_user_seek, mid.statement as 'Database.Schema.Table', mid.equality_columns, mid.inequality_columns, mid.included_columns, migs.unique_compiles, migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK) INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK) ON migs.group_handle = mig.index_group_handle INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK) ON mig.index_handle = mid.index_handle ORDER BY index_advantage DESC; ``` ### SQL Server profiler [SQL Server Profiler](https://msdn.microsoft.com/en-us/library/ms181091) is a tool that allows you to watch and record the requests that are being sent to your database, along with metrics on what it took to run each query. 
By reviewing all the requests being sent to the server over a given period of time, it is easier to determine whether the database is responding slowly, or whether the Octopus Server is issuing too many sub-optimal requests (or both!). The following steps outline one way of recording the relevant information; however, there are various resources all over the web that provide [deeper tutorials](https://www.simple-talk.com/sql/performance/how-to-identify-slow-running-queries-with-sql-profiler/) about SQL Server Profiler. 1. Launch SQL Server Profiler and create a new trace (**File ➜ New Trace**). 2. Select the server where your Octopus Deploy database is located and provide login credentials. (See [here](https://msdn.microsoft.com/en-us/library/ms187611.aspx) for details about the minimum required credentials). 3. Give the trace an appropriate name like `Octopus Deploy - Loading Project 2016-11-12`. 4. Click the `Events Selection` tab to provide filters that will be applied to the stream of data. 5. Disable `Audit Login` and `Audit Logout`. 6. Click `Column Filters` and set the ApplicationName filter to Like="Octopus %" to filter to just the requests sent from the Octopus Server. ![](/docs/img/administration/managing-infrastructure/performance/images/5865852.png) 7. Click Run. You will then probably see lots of entries starting to show up. This is because the server is always busy making calls to the database, checking if any new tasks need to be run or updating the status of existing machines and tasks. Ideally we want this trace to cover just the queries that were invoked while you perform the operation that appears to cause the server to slow down. Click the `Clear Trace Window` icon to remove the existing entries. ![](/docs/img/administration/managing-infrastructure/performance/images/5865853.png) 8. Go back to the Octopus Deploy portal and perform the task that resulted in slow performance. 9. 
Back in SQL Server Profiler, click the red `Stop` button to prevent any more logs from being added. We want this snapshot to represent as closely as possible the operations that were being performed at that point in time. 10. Save the results into a *.trc* trace file and send it through with your ticket, detailing what steps you ran in the portal. While this trace may not always provide conclusive proof as to the primary culprit of your performance problems, it may provide some indication as to where improvements can be made to optimize the request profile. If you are seeing error messages with a specific query in your server logs or through Octopus CLI failures, for example: > INSERT INTO dbo.[Event] WITH (TABLOCKX) (RelatedDocumentIds, ProjectId, EnvironmentId, TenantId, Category, UserId, Username, Occurred, Message, Id, Json) values (@RelatedDocumentIds, @ProjectId, @EnvironmentId, @TenantId, @Category, @UserId, @Username, @Occurred, @Message, @Id, @Json) > > Server exception: > > System.Exception: Error while executing SQL command: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. then it may be more useful to focus on that specific query and get the execution plan that the database engine is executing. In that case, follow the above steps, but when configuring the filters at step 6, also include the following: 6. Configure filters. * With the filters dialog open, add a filter to the *Text* property that matches the table name involved. In the example above we might add the condition Like="%Event%". Click `Ok` and if the message pops up, agree to adding the `TextData` event column. * At the Events Selection tab tick the `Show all events` check-box, expand the `Performance` section, and include the `Showplan XML` event. This event will provide detailed information about how the database constructed and executed the query. 
![](/docs/img/administration/managing-infrastructure/performance/images/5865854.png) As before, perform the operation causing the error with the trace running, then export and send the trace file with your ticket. ### Logging queries Slow running queries are automatically logged to the [Server Logs](/docs/support/log-files) with an Info trace level. These lines will look something like: ``` 2016-11-17 00:31:39.8557 285 INFO Reader took 309ms (1ms until the first record): SELECT * FROM dbo.[Project] ORDER BY Id ``` If you update your server logging to verbose, further information will be recorded when a large number of concurrent transactions appear to be active at any one time. ``` 2016-08-18 23:59:50.5834 2266 INFO There are a high number of transactions active. The below information may help the Octopus team diagnose the problem: Now: 2016-08-18T23:59:50 Transaction with 0 commands started at 2016-08-16T18:38:38 (192,072.09 seconds ago) Transaction with 0 commands started at 2016-08-16T18:38:38 (192,072.07 seconds ago) ``` Providing logs in your support ticket that correlate with the times you noticed the performance problems will further help us to diagnose what could be improved. ## Improvements going forward Providing as much information as possible regarding what actions you are performing on the server, along with the subsequent requests that the server is making, will best help us to further improve the performance of Octopus for all users. While we can't guarantee that we will be able to squeeze improvements out of every situation, every bit helps. 
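The slow-query log lines shown above follow a predictable shape, so you can scan a server log for the worst offenders before opening a ticket. A minimal sketch (the regex and threshold are assumptions based on the sample line above, not a documented log format; the second sample line is invented for illustration):

```python
import re

# Matches lines like:
#   2016-11-17 00:31:39.8557 285 INFO Reader took 309ms (1ms until the first record): SELECT ...
SLOW_QUERY = re.compile(r"Reader took (\d+)ms.*?:\s*(.+)$")

def slow_queries(lines, threshold_ms=300):
    """Yield (duration_ms, sql) pairs for queries at or above the threshold."""
    for line in lines:
        m = SLOW_QUERY.search(line)
        if m and int(m.group(1)) >= threshold_ms:
            yield int(m.group(1)), m.group(2)

sample = [
    "2016-11-17 00:31:39.8557 285 INFO Reader took 309ms (1ms until the first record): SELECT * FROM dbo.[Project] ORDER BY Id",
    "2016-11-17 00:31:40.1102 285 INFO Reader took 12ms (1ms until the first record): SELECT * FROM dbo.[Environment]",  # illustrative
]
for ms, sql in slow_queries(sample):
    print(ms, sql)
```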
# octopus account username Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-username.md Manage Username/Password accounts in Octopus Deploy ```text Usage: octopus account username [command] Available Commands: create Create a Username/Password account help Help about any command list List Username/Password accounts Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus account username [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account username list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Record a performance trace Source: https://octopus.com/docs/administration/managing-infrastructure/performance/record-a-performance-trace.md If you are experiencing a problem with the performance of your Octopus installation we may ask you to record a performance trace and send the recording to us for analysis. Performance analysis is most successful when you can provide us with a full picture of the problem at hand: - Recording of server metrics like CPU, RAM, Disk I/O and any other metrics which may be useful. - Detailed Octopus logs. - The performance trace recording itself. ## Privacy We use [JetBrains dotTrace](https://www.jetbrains.com/profiler/) to record and analyze the performance trace. We are only concerned with which functions are called, how often they are called, and how long they take to execute. 
To protect your privacy we will provide you with a secure location to upload the recording, only use the recording for the performance analysis, and then delete all traces of the recording. ## Getting prepared 1. Download and install a trial of [JetBrains dotTrace](https://www.jetbrains.com/profiler/) on your Octopus Server. 2. Start recording CPU, RAM and Disk I/O using performance monitor (or similar). ## Recording the performance trace :::div{.hint} We don't usually need a long recording, the most important thing is to get a recording of a short period of time where the problem occurs. This may be during a particular deployment, or when another Octopus task is running (like retention policy processing or health checks), or perhaps it's just happening throughout the day. If we haven't asked for anything specific, start with a 1-5 minute recording so we can analyze it and go from there. ::: 1. Install dotTrace on the machine hosting Octopus Server. 2. Start dotTrace as an Administrator and start a free trial (the trial can be paused after recording the trace). 3. Start a timeline trace by [attaching to the running Octopus Server process](https://www.jetbrains.com/help/profiler/Profile_Running_Process.html). 4. When enough time has passed, take a [snapshot](https://www.jetbrains.com/help/profiler/Profiling_Guidelines__Launching_and_Controlling_the_Profiling_Process.html) using `Get Snapshot'n'Wait`. 5. Detach from the process. 6. Close dotTrace. 7. Zip the dotTrace recording, the Octopus Server logs, Task Logs for tasks running during that period of time, and server metrics or a performance chart covering that period in time. 8. Upload the zip file bundle to the secure and private share which should have been provided by an Octopus team member, then get back in touch with us - unfortunately we don't get notified of file uploads. 9. [Pause the dotTrace trial](https://www.jetbrains.com/help/profiler/Specifying_License_Information.html) when you've finished recording. 
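Step 7's bundling can be scripted rather than done by hand. A sketch using Python's standard library (all paths in the commented example are illustrative, not real locations):

```python
import zipfile
from pathlib import Path

def bundle_diagnostics(output, paths):
    """Zip the trace recording, logs, and metrics into a single archive for upload."""
    with zipfile.ZipFile(output, "w", zipfile.ZIP_DEFLATED) as zf:
        for item in map(Path, paths):
            if item.is_dir():
                # Preserve the folder name inside the archive.
                for f in item.rglob("*"):
                    if f.is_file():
                        zf.write(f, str(Path(item.name) / f.relative_to(item)))
            else:
                zf.write(item, item.name)

# Illustrative paths - substitute your own trace, log, and metrics locations:
# bundle_diagnostics("octopus-diagnostics.zip",
#                    ["snapshot.dtp", r"C:\Octopus\Logs", "perf-metrics.csv"])
```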
## Analysis Due to the nature and depth of these investigations it may take a little while to analyze the performance trace and get to the bottom of what's happening. ### DIY performance analysis We ship debugging symbols (PDB) files in the box with Octopus Server. This means you can use the dotTrace tooling to do your own analysis and understand exactly which functions could be causing the problem. # octopus account username create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-username-create.md Create a Username/Password account in Octopus Deploy ```text Usage: octopus account username create [flags] Flags: -d, --description string A summary explaining the use of the account to other users. -D, --description-file file Read the description from file. -e, --environment stringArray The environments that are allowed to use this account. -n, --name string A short, memorable, unique name for this account. -p, --password string The password to use to when authenticating against the remote host. -u, --username string The username to use when authenticating against the remote host. Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use, reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. 
::: ```bash octopus account username create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # How to turn on variable logging and export the task log Source: https://octopus.com/docs/support/how-to-turn-on-variable-logging-and-export-the-task-log.md When you contact Octopus Deploy support, we may ask you to provide a variable evaluation log to help us troubleshoot your issue. This page outlines how to create and download the full verbose log. Since so many variables are used in the deployment process (with their values changing through your deployment), Octopus doesn't log all of that information by default; doing so would increase the size of the task logs and slow down your deployments. However, sometimes it's very helpful to log this information. ## Step-by-step Guide Write the variables to the deployment log: 1. Open the **Project ➜ Variables** page. 2. Add the following two variables and set their value to **True**: - `OctopusPrintVariables` - `OctopusPrintEvaluatedVariables` Two sets of variables will be printed: first, the raw definitions before any substitutions have been performed, then the result of evaluating all variables for the deployment. ![](/docs/img/support/images/variables.png) 3. **Create a new release** of your project for the variables to take effect. 4. Deploy the new release. 5. Open the deployment/task details, and go to the **Task log** tab. Click on the **Raw** link. You can also select the **Download** option if you want to look at this locally. You can download this and attach the log file to your support query. ![](/docs/img/support/images/rawlogs.png) 6. If you wish to troubleshoot this locally, the raw log is a text file containing the entire deployment log. ![](/docs/img/support/images/raw.png) :::div{.hint} Remember to remove these variables after you get the full log. These variables are designed for debugging purposes only. 
You might want to open the file in a text editor, and redact any sensitive information like hostnames or company information, before sending the log to us. ::: # Record memory snapshots Source: https://octopus.com/docs/administration/managing-infrastructure/performance/record-a-memory-trace.md If you are experiencing a problem with the memory consumption of your Octopus installation we may ask you to record some memory snapshots and send them through to us for analysis. Memory analysis is most successful when you can provide us with a full picture of the problem at hand: - Recording of server metrics like CPU, RAM, Disk I/O and any other metrics which may be useful - Detailed Octopus logs. - The exported dotMemory Workspace (containing the snapshots you've recorded). ## Privacy We use [JetBrains dotMemory](https://www.jetbrains.com/dotmemory/) to record and analyze the memory snapshots. We are only concerned with memory allocations, whether there is a memory leak, and which functions allocated the memory. To protect your privacy we will provide you with a secure location to upload the snapshots and any supporting data, only use the data you provide for the memory analysis, and then delete everything once the analysis has been completed. ## Getting prepared 1. Start recording CPU, RAM and Disk I/O using performance monitor (or similar). ## Recording the memory snapshots :::div{.hint} We usually only need one or two snapshots, the most important thing is that the snapshots cover the period of time where the problem occurs. This may be during a particular deployment, or when another Octopus task is running (like retention policy processing or health checks), or perhaps it's just happening throughout the day. If we haven't asked for anything specific, start by taking a single snapshot of your running Octopus Server so we can analyze it and go from there. 
::: ### Get a snapshot from your running Octopus Server This is the best way to start, especially if you cannot restart your Octopus Server, or if the memory problem takes a long time to occur or is difficult to reproduce. It requires a small standalone executable called `dotmemory.exe` which will take snapshots of your running Octopus Server. 1. Download the [dotMemory command-line tool](https://www.jetbrains.com/dotmemory/download/#section=command-line-profiler) and extract it to a location on the Octopus Server like `C:\tools\dotmemory.exe`. 2. Open a command prompt as an Administrator (elevation is required). 3. Run: `dotmemory.exe get-snapshot Octopus.Server`. 4. Take note of the location where the dotMemory workspace file was saved (you'll need this later). 5. Use Octopus in a way which causes the memory problem. 6. Get another snapshot using the same command as before. 7. Zip the dotMemory Workspaces, the Octopus Server logs, Task Logs for tasks running during that period of time, and server metrics or a performance chart covering that period in time. 8. Upload the zip file bundle to the secure and private share which should have been provided by an Octopus team member, then get back in touch with us - unfortunately we don't get notified of file uploads. ### Start Octopus Server with dotMemory (alternative method) If you can easily restart your Octopus Server, and the problem you are experiencing is reproducible, you can start Octopus Server with dotMemory for a better result. Starting Octopus Server with dotMemory means it can record the source of the memory allocations and help us track down the root cause of any memory leaks. 1. Download the [JetBrains dotMemory application](https://www.jetbrains.com/dotmemory/) and install it on the machine hosting Octopus Server. 2. Start dotMemory **as an Administrator** and start a free trial (the trial can be paused afterwards). 3. Stop the Octopus Deploy Windows service. 4. 
Configure dotMemory to start your Octopus Server Windows service. ![dotMemory start Octopus Server](/docs/img/administration/managing-infrastructure/performance/images/record-a-memory-trace-start-windows-service.png). 5. If everything is working as expected you should see a screen like the one shown below ![dotMemory take snapshot](/docs/img/administration/managing-infrastructure/performance/images/record-a-memory-trace-take-snapshot.png). 6. Take a snapshot just after the Octopus Server has started. 7. Use Octopus in a way which causes the memory problem. 8. Take another snapshot. 9. Once it is safe, stop the Octopus Server process using dotMemory. 10. Start your Octopus Server normally again. 11. Export the dotMemory Workspace so you can share it with us. 12. Close dotMemory. 13. Zip the exported dotMemory Workspace, the Octopus Server logs, Task Logs for tasks running during that period of time, and server metrics or a performance chart covering that period in time. 14. Upload the zip file bundle to the secure and private share which should have been provided by an Octopus team member, then get back in touch with us - unfortunately we don't get notified of file uploads. ## Analysis Due to the nature and depth of these investigations it may take a little while to analyze the memory snapshots and get to the bottom of what's happening. ### DIY memory analysis We ship debugging symbols (PDB) files in the box with Octopus Server. This means you can use the dotMemory tooling to do your own analysis and understand the root cause of any memory problems. 
# octopus account username list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-account-username-list.md List Username/Password accounts in Octopus Deploy ```text Usage: octopus account username list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus account username list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # How to get a database backup and encrypt your Master Key Source: https://octopus.com/docs/support/get-a-database-backup-and-encrypt-the-master-key.md When you contact Octopus Deploy support, sometimes we aren't able to reproduce the issue you're experiencing. This can be due to specific circumstances in your instance, or corrupted data which we won't be able to reproduce. We may ask you to send us a database backup and your encrypted Master Key, which will allow us to accurately reproduce the issue and aid in resolving it. This guide provides a walk-through to get the best information for us to help troubleshoot the issues. 1. Create the database backup. The easiest way to import a database is to restore from a .bak file, and this is the format we will ask for. This can be produced from [SQL Server Management Studio](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/create-a-full-database-backup-sql-server).
Right-click on the Octopus database, and select **Tasks ➜ Back Up...**, and select the directory where the .bak file will save to. :::figure ![Backup SQL database in SQL Server Management Studio](/docs/img/support/images/sql_server_management_studio_backup_db.png) ::: 2. Encrypt your Master Key. :::div{.hint} You can get your Master Key using [Octopus Manager](/docs/security/data-encryption#your-master-key) or by using the `show-master-key` command in [Octopus.Server.exe](/docs/octopus-rest-api/octopus.server.exe-command-line/show-master-key). ::: We have a PowerShell snippet which will encrypt your Master Key, using Public Key Cryptography so only Octopus can decrypt it. You can use this snippet to encrypt your Master Key, and when we receive it, we will decrypt it and use it to restore the database you have provided to us. ``` $octopusPublicKey = "MIIDnzCCAwigAwIBAgIJAK5yFHmnxrYxMA0GCSqGSIb3DQEBBQUAMIGSMQswCQYDVQQGEwJBVTEMMAoGA1UECBMDUUxEMREwDwYDVQQHEwhCcmlzYmFuZTEhMB8GA1UEChMYT2N0b3B1cyBEZXBsb3kgUHR5LiBMdGQuMRcwFQYDVQQDEw5PY3RvcHVzIERlcGxveTEmMCQGCSqGSIb3DQEJARYXaGVsbG9Ab 2N0b3B1c2RlcGxveS5jb20wHhcNMTQwNzI1MTE0NzI2WhcNMzIxMDA4MTE0NzI2WjCBkjELMAkGA1UEBhMCQVUxDDAKBgNVBAgTA1FMRDERMA8GA1UEBxMIQnJpc2JhbmUxITAfBgNVBAoTGE9jdG9wdXMgRGVwbG95IFB0eS4gTHRkLjEXMBUGA1UEAxMOT2N0b3B1cyBEZXBsb3kxJjAkBgkqhkiG9w0BCQ EWF2hlbGxvQG9jdG9wdXNkZXBsb3kuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDD532q7wcbDAE65sZn5kdWQEv+yFHTUn9wPXEfPztv1cc/xjLts6zuKcfcRVITyB+n02Rg/VAGpNdZeAIWTtptKLkcdttwf+xoySPF13jc7DSnYabGamRR/hqzn9QcLq87WHIQF8olecpokoTsdBfE6e3idR8 hLKKIlJgb5g5dcwIDAQABo4H6MIH3MB0GA1UdDgQWBBRYd4/ytF84FZVaSVHfhPb0Z/EYZzCBxwYDVR0jBIG/MIG8gBRYd4/ytF84FZVaSVHfhPb0Z/EYZ6GBmKSBlTCBkjELMAkGA1UEBhMCQVUxDDAKBgNVBAgTA1FMRDERMA8GA1UEBxMIQnJpc2JhbmUxITAfBgNVBAoTGE9jdG9wdXMgRGVwbG95IFB0 
eS4gTHRkLjEXMBUGA1UEAxMOT2N0b3B1cyBEZXBsb3kxJjAkBgkqhkiG9w0BCQEWF2hlbGxvQG9jdG9wdXNkZXBsb3kuY29tggkArnIUeafGtjEwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOBgQAcEMAykQaazLd2ZewE7d+0PeIWv/YlZMIDeg5LF1/UtKMMCaaspN7rNA1lUPfjK/ofWh43s4R0J tjlbuEtZr+HKmOGzr+wbMCRIggbu2j3GEcC5i7zeoa85olokubwO1QDVZVaELWyXnDZl1UoJ9VyGsV5pEAE571XS9oTUyUssQ==" function Encrypt-ForOctopusEyesOnly($secretMessage) { $certBytes = [System.Convert]::FromBase64String($octopusPublicKey) $x = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList @(,$certBytes) $publicKey = $x.PublicKey.Key; $plainBytes = [System.Text.Encoding]::UTF8.GetBytes($secretMessage) $encryptedBytes = $publicKey.Encrypt($plainBytes, $false); $encryptedText = [System.Convert]::ToBase64String($encryptedBytes) return $encryptedText } $message = Encrypt-ForOctopusEyesOnly "YourMasterKey" write-host $message ``` 3. Upload your database backup and encrypted Master Key. In your email or forum thread with Octopus support, we will provide you with a secure and private link to upload your database backup and the encrypted Master Key. Only we have access to view and download these files, and we will only allow upload access to you. We will also ensure your forum thread is marked as private if it hasn't already been, to ensure only you and our team can see the link. # Record a problem with Octopus Deploy in your web browser Source: https://octopus.com/docs/support/record-a-problem-with-your-browser.md If you are experiencing a problem with Octopus in your web browser we may ask you to take a recording of the HTTP traffic and the screen and send them to us for analysis. This kind of analysis is most successful when you can provide us with a full picture of the problem at hand: - What is going wrong, and what you expected to happen, in your own words. - A screen recording of the problem happening. - A recording of the HTTP traffic for the same period. 
- The [Octopus Server logs](/docs/support/log-files) for the same period. ## Privacy We recommend some small utilities to record your screen and web traffic. We are only concerned with analyzing the web traffic and how it may have caused the problem. To protect your privacy we will provide you with a secure location to upload the recordings, only use the recordings for the analysis, and then delete all traces of the recordings. ## Getting prepared These tools for Windows are both reputable and free. They should be installed and run on the computer you use to access your Octopus Server using your web browser: 1. Download and install [FiddlerCap](http://www.telerik.com/fiddler/fiddlercap) for web traffic recording. 1. Download and install [ScreenToGif](http://www.screentogif.com/) for screen recording. ## Recording The most important thing is to get a screen and web traffic recording of the problem occurring. If you need to do any set up to make the problem occur, please record that as well. :::div{.hint} You can usually reduce the frame rate of the screen capture tool to reduce the overall size of the recording. Usually 5 FPS is enough to show the gist of what is going wrong. ::: 1. Start recording the screen. 1. Start recording web traffic. 1. Reproduce the problem including any steps required to make the problem happen (like setting up your deployment process in a certain way). 1. Stop the recordings. 1. Zip the recordings along with any log files which may be helpful for diagnosing the problem (like [Task Logs](/docs/support/get-the-raw-output-from-a-task/) or [Octopus Server logs](/docs/support/log-files)). 1. Upload the zip file bundle to the secure and private share which should have been provided by an Octopus team member, then get back in touch with us - unfortunately we don't get notified of file uploads. 
## Analysis Due to the nature and depth of these investigations it may take a little while to analyze the recording and get to the bottom of what's happening. # Deploying to Azure via a firewall Source: https://octopus.com/docs/deployments/azure/deploying-to-azure-via-a-firewall.md All the Azure steps in Octopus are executed from the VM where the Octopus Server is running. So to be able to successfully deploy to the Microsoft cloud, you need to make sure your Octopus Server can reach it through the network. To check that you can reach the Microsoft cloud through your network, run this script on the same machine using an account with the same permissions as your Octopus Server. :::div{.info} You might need to install Azure PowerShell before running this script. For information, see [Install the Azure PowerShell module](https://docs.microsoft.com/en-us/powershell/azure/install-az-ps?view=azps-2.5.0). ::: ```powershell $ErrorActionPreference = "Stop" $OctopusAzureADTenantId = "" #Enter TenantId here $OctopusAzureSubscriptionId = "" #Enter SubscriptionId here $OctopusAzureADClientId = "" #Enter ClientId here $OctopusAzureADPassword = "" #Enter Secret here $OctopusAzureEnvironment = "AzureCloud" $securePassword = ConvertTo-SecureString $OctopusAzureADPassword -AsPlainText -Force $creds = New-Object System.Management.Automation.PSCredential ($OctopusAzureADClientId, $securePassword) $AzureEnvironment = Get-AzEnvironment -Name $OctopusAzureEnvironment Connect-AzAccount -Credential $creds -TenantId $OctopusAzureADTenantId -SubscriptionId $OctopusAzureSubscriptionId -Environment $AzureEnvironment -ServicePrincipal Get-AzResourceGroup ``` If everything is working as expected, you will see output showing all the Azure Resource Groups you have access to: :::figure ![Screenshot of Azure Resource Groups](/docs/img/deployments/azure/deploying-to-azure-via-a-firewall/image.png) ::: If you need to add firewall exclusions to an allow list, here are a few things to take into consideration: - Figure out which Azure
Data Centers you will be targeting. - Figure out which Azure services you will be targeting in those Data Centers. - Configure an allow list from the Octopus Server to the appropriate IP Address Ranges. Download the latest list of IP Address Ranges from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=56519) (updated weekly). ## Learn more - Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites). # octopus api Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-api.md Execute an authenticated GET request against the Octopus Server API and print the JSON response. ```text Usage: octopus api [flags] Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use, reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus api /api octopus api /api/spaces octopus api /api/Spaces-1/projects ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Data Migration Source: https://octopus.com/docs/administration/data/data-migration.md Octopus has a data migrator that can help with specific scenarios, such as exporting configuration for storage and audit with a source control repository and single-direction copying of projects from one Octopus Server to another. :::div{.problem} If you want to migrate data between Octopus instances, we recommend the [**Export/Import Projects** feature](/docs/projects/export-import) rather than the data migrator. 
::: ## Suitable scenarios Since the [Export/Import Projects](/docs/projects/export-import) feature was added, the number of suitable scenarios for the data migrator has reduced to the following: - Copying projects and their dependencies from one Octopus Server to another periodically in a single direction where there is a single source of truth. - Wanting to exclude tenants, releases, or deployments from the migration. :::div{.hint} The same version of Octopus must be running on the source and destination servers. ::: ## Unsuitable scenarios The data migration tools are only suitable for some scenarios. In most cases, there are better tools for the job: 1. To split a single Octopus Server into multiple separate Octopus Servers in a one-time operation, use the [Export/Import Projects feature](/docs/projects/export-import). 1. To sync projects with disparate environments, tenants, lifecycles, channels, variable values, or deployment process steps, see [syncing multiple instances](/docs/administration/sync-instances) 1. To consolidate multiple Octopus Servers into a single Octopus Server, use the [Export/Import Projects feature](/docs/projects/export-import). 1. To audit your project configuration, see [configuration as code](/docs/projects/version-control). 1. To split a single space into multiple spaces, see the [Export/Import Projects feature](/docs/projects/export-import). 1. To migrate data from older versions of Octopus see [upgrading old versions of Octopus](/docs/administration/upgrading/legacy). 1. For general disaster recovery, learn about [backup and restore for your Octopus Server](/docs/administration/data/backup-and-restore). 1. To move your Octopus database to another server, see [moving your database](/docs/administration/managing-infrastructure/moving-your-octopus/move-the-database). 1. 
To move your Octopus Server and database to another server, see [moving your Octopus Server and database](/docs/administration/managing-infrastructure/moving-your-octopus/move-the-database-and-server). 1. To move your entire Octopus Server from a self-hosted installation to Octopus Cloud, see [migrating from self-hosted to Octopus Cloud](/docs/octopus-cloud/migrations). :::div{.problem} **Unsupported scenarios** Sometimes, using the data migration tool may seem like it could solve a problem, but in fact, it will make things worse. Here are some scenarios we've seen that are explicitly not supported. 1. **Export ➜ Modify ➜ Import** Unfortunately, since the import isn't running the same validation checks as the API, using an **export ➜ modify ➜ import** can modify your data in such a way that is invalid for the API. Some scenarios _might work_ but because, at this point, you're effectively hand-editing your data, this isn't something we support. ::: ## Tips 1. Data migration is an advanced topic. You should take time to understand the tools, what they can do, and what they are unsuitable for. 1. You cannot migrate data between different versions of Octopus Server - they must be exactly the same version to ensure the integrity of the data. 1. The data migration tools are optimized for one-time operations or for flowing data in a single direction with a single source of truth. 1. The data migration tools generally overwrite data in the target server without merging. 1. Data is matched by **name** instead of ID. 1. There are no built-in conflict resolution tools. 1. You can commit the exported files to source control and track changes over time. 1. Treat data migration as an **offline** operation. You want to avoid exporting changing data or importing over the top of changing data. 1. Perform a backup before importing data. 1. All changes made during data import are batched into a single SQL transaction. The entire import will succeed or roll back as a batch. 
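The last tip's all-or-nothing behavior is the standard SQL transaction guarantee. A small self-contained sketch (SQLite is used here purely for illustration, not because Octopus uses it) shows why a failed import leaves nothing behind:

```python
import sqlite3

# Illustration only: like the migrator's import, every change runs inside
# one transaction, so a failure part-way through rolls everything back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (name TEXT PRIMARY KEY)")

try:
    with conn:  # one transaction for the whole batch
        conn.execute("INSERT INTO projects VALUES ('Project-A')")
        conn.execute("INSERT INTO projects VALUES ('Project-A')")  # duplicate, fails
except sqlite3.IntegrityError:
    pass  # the transaction was rolled back

# Neither row was imported; the batch succeeds or fails as a whole.
print(conn.execute("SELECT COUNT(*) FROM projects").fetchone()[0])  # 0
```

The first insert is undone along with the failed one, which mirrors how a problem during an Octopus data import leaves the target server unchanged.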
## The Basics ### Exporting {#exporting} :::div{.hint} It's a good idea to ensure that your Octopus Server doesn't change data while exporting. Learn about making your Octopus Server read-only using [maintenance mode](/docs/administration/managing-infrastructure/maintenance-mode). ::: You can export data using the Export Wizard built into the Octopus Server Manager or the command-line interface `Octopus.Migrator.exe`. You can export your entire Octopus Server configuration or certain projects and their dependencies. The wizard is a good way to get started, but the complete feature set is only available using the command-line interface. :::figure ![The Octopus Export Wizard is accessed via the Octopus Manager, Export data... option](/docs/img/administration/data/images/octopus-manager-export-data-wizard.png) ::: We have made the exported file structure predictable and easy to navigate. :::figure ![The export file uses human readable JSON](/docs/img/administration/data/images/json-format.png) ::: ### Importing {#importing} :::div{.hint} It's a good idea to [perform a backup](/docs/administration/data/backup-and-restore) before attempting an import. ::: You can import data using the Import Wizard built into the Octopus Server Manager or the command-line interface `Octopus.Migrator.exe import`. Similarly to exporting data, the wizard is a good way to get started, but the complete feature set is only available using the command-line interface. :::figure ![The Octopus Import Wizard is accessed via the Octopus Manager, Import data... option](/docs/img/administration/data/images/octopus-manager-import-data-wizard.png) ::: You'll get a chance to preview the changes first, and you can tell the tool to either: - Overwrite documents if they already exist in the destination (e.g., if a project with the same name already exists, overwrite it). - Skip documents if they already exist in the destination (e.g., if a project with the same name already exists, do nothing). 
Learn about [how conflicts are handled](#handle-conflicts). All changes are batched into a single SQL transaction; if any problems occur during the import, the transaction will be rolled back, and nothing will be imported. ## FAQ ### What is exported? {#what-is-exported} You can export your entire Octopus Server configuration using the `Octopus.Migrator.exe export` command. Alternatively, you can export a set of projects and their dependencies using the `Octopus.Migrator.exe partial-export` command. The best way to see what is exported is to try it out and look at the resulting files. ### How do you handle sensitive values? Sensitive values are always encrypted at rest. When you export data, you will be asked to provide a password. Your secrets will be decrypted using the source server's Master Key and then re-encrypted into the exported files using the password you provided as the key. When you import data, you must provide the same password so your secrets can be decrypted from the files and imported into the target server to be encrypted using its Master Key. ### Can I track my Octopus configuration using source control? Yes, though it is an advanced scenario that you will need to manage yourself. We are looking into other methods for defining your deployment processes as code. In the meantime, you can use this process for advanced auditing of your Octopus configuration: 1. Export your data to a known location. 1. Commit those files to source control. 1. Set up a scheduled task to repeat the process periodically, using the same encryption password each time. The JSON is as predictable as possible, so if you commit multiple exports, the only differences that will appear are the actual changes that have been made, making comparisons between exports straightforward. ### How do you match existing data? The import process matches on **names** instead of IDs.
If an exported project has the same name as a project in the target server, it is considered the same. ### How are conflicts handled? {#handle-conflicts} The incoming data is viewed as the source of truth during the import, and existing documents will be overwritten. For example, when importing a project that already exists in the destination server, all deployment steps that belong to the project in the destination server are overwritten, including any new deployment steps that may have been added. :::figure ![Data in the file overwrites data in the destination](/docs/img/administration/data/images/import-overwrites.png) ::: There is no out-of-the-box way to "merge" deployment steps, or other more granular changes when importing. :::figure ![Data is not merged during the import operation](/docs/img/administration/data/images/import-doesnt-merge.png) ::: There are certain cases where we can automatically merge data, like variable sets where you have specific values that only make sense in the target server or teams where certain users only make sense in the target server. ### Can I manually resolve conflicts? There is no tooling to help you resolve conflicts in a more granular way. The data migration tooling is optimized for data flowing in a single direction with one source of truth. ### Why do the exported files contain IDs? We use the IDs to map references between documents into the correct references for the target server. We use the names to determine if something already exists. This means you can export from multiple Octopus Servers, combine them, and then import to a single Octopus Server. ### Is there a command-line interface? Yes! Most features are only available via the command line, so it is the most common way to perform data migration. Use `Octopus.Migrator.exe help` to see the full list of commands available. To see an example of the command syntax, you can use the Wizard in the Octopus Server Manager and click the **Show script** link. 
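The name-based matching and ID remapping described in this FAQ can be sketched conceptually. This is not the migrator's actual implementation, just an illustration (the `Projects-N` ID scheme below is a simplified assumption):

```python
def import_documents(exported, target):
    """Match documents by name; remap exported IDs to target-server IDs.

    exported/target: lists of dicts with "Id" and "Name" keys.
    Returns a map of exported ID -> ID on the target server.
    """
    by_name = {doc["Name"]: doc for doc in target}
    id_map = {}
    for doc in exported:
        existing = by_name.get(doc["Name"])
        if existing is not None:
            # Same name means same document: overwrite it, keep the target's ID.
            id_map[doc["Id"]] = existing["Id"]
            existing.update({**doc, "Id": existing["Id"]})
        else:
            new_id = f"Projects-{len(target) + 1}"  # hypothetical ID scheme
            id_map[doc["Id"]] = new_id
            target.append({**doc, "Id": new_id})
    return id_map

# A project exported as Projects-7 matches the target's "Web" project by
# name, so references to Projects-7 are rewritten to Projects-1 on import.
mapping = import_documents(
    [{"Id": "Projects-7", "Name": "Web"}],
    [{"Id": "Projects-1", "Name": "Web"}],
)
print(mapping)  # {'Projects-7': 'Projects-1'}
```

This is also why you can combine exports from multiple servers: the IDs only need to be unique within each export, and they are all rewritten on the way in.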
:::figure ![The Wizard has a show script option so you can use it to understand the command syntax](/docs/img/administration/data/images/import-wizard-show-script.png) ::: # Migration Source: https://octopus.com/docs/administration/high-availability/migrate.md [Ask Octopus Episode #14 - Configure your HA Cluster](https://www.youtube.com/watch?v=1tXVA5pyuqQ) You may already have an existing Octopus Server that you wish to make highly available. The process to migrate to Octopus High Availability is the same as the process detailed in [High Availability Implementation Guide](/docs/best-practices/self-hosted-octopus/high-availability), except your existing server will be the **first node** in the cluster. Migrating to HA will involve: 1. Moving the SQL Server Database to a dedicated SQL Server. 1. Moving all the task logs, packages, artifacts, imports, event exports etc., to a shared storage folder (BLOB data). 1. Configuring a load balancer. This guide is generic and purposely avoids mentioning specific technologies such as Azure File Storage, AWS RDS SQL Server, etc. Please see our [High Availability Implementation Guide](/docs/best-practices/self-hosted-octopus/high-availability) for more details. ## Prep Work These actions will require downtime. You can do prep work to keep the downtime to a minimum. ### Moving the database Moving the SQL Server database involves performing a backup and restore of the database. That backup and restore have to occur during the outage window. You can prepare for that by doing the following: - Provision the SQL Server Instance (if it doesn't already exist). - Create the SQL Server user Octopus will use to log into SQL Server (if it doesn't already exist). After the SQL Server has been provisioned and the user has been created, you'll want to ensure Octopus Deploy can connect to the SQL Server. It is important to do this on the server hosting Octopus Deploy with the same user the Octopus Deploy Windows Service is running as. 
If the Octopus Deploy Windows Service is running as `Local System`, any administrator account should work for this test. ```powershell $userName = "" $password = "" $newSQLServer = "" if ([string]::IsNullOrWhiteSpace($userName) -eq $true){ Write-Host "No username found, using integrated security" $connectionString = "Server=$newSQLServer;Database=master;integrated security=true;" } else { Write-Host "Username found, using SQL Authentication" $connectionString = "Server=$newSQLServer;Database=master;User ID=$userName;Password=$password;" } $sqlConnection = New-Object System.Data.SqlClient.SqlConnection $sqlConnection.ConnectionString = $connectionString Write-Host "Attempting to connect to $newSQLServer" $sqlConnection.Open() Write-Host "Connection successful. Closing connection." $sqlConnection.Close() ``` :::div{.hint} You can run that script using the Octopus Deploy [script console](/docs/administration/managing-infrastructure/script-console). If you are using a SQL Login, you'll want to change the user's password after you run your tests as that password will appear in the task log. ::: ### Moving BLOB data Most of the BLOB data (task logs, artifacts, packages, imports, event exports, etc.) stored on the file system can be copied to the new location prior to the outage window. Doing so will reduce the amount of copying you have to do during the outage window. In addition, you can make sure your Octopus Deploy instance can use that shared location by running a test script to create and delete a file. - Provision the shared storage folder. - Use the [script console](/docs/administration/managing-infrastructure/script-console) to ensure Octopus can connect to the shared folder and create files. ```powershell $filePath = "YOUR DIRECTORY" New-Item "$filePath\file.txt" -ItemType file Remove-Item "$filePath\file.txt" ``` - Run a tool such as [RoboCopy](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy) to copy the folder contents.
An example PowerShell script using RoboCopy is: ```powershell robocopy C:\Octopus\TaskLogs \\your-file-share\OctopusHA\TaskLogs /mir /r:5 robocopy C:\Octopus\Artifacts \\your-file-share\OctopusHA\Artifacts /mir /r:5 robocopy C:\Octopus\Packages \\your-file-share\OctopusHA\Packages /mir /r:5 robocopy C:\Octopus\Imports \\your-file-share\OctopusHA\Imports /mir /r:5 robocopy C:\Octopus\EventExports \\your-file-share\OctopusHA\EventExports /mir /r:5 ``` ### Configure load balancer Octopus Deploy must sit behind a load balancer when configured in HA mode. We recommend creating a new URL for your Octopus HA cluster. For example, if the current URL for your Octopus Instance is `octopus.your-domain.local`, the load-balanced URL could be `octopus-ha.your-domain.local`. The advantages of a new URL are: 1. You can still access each server directly (if need be). 1. The process of redirecting users and applications to the new URL should only need to be done once. 1. You can configure and test it before the outage window, along with working through any connection issues. ## Outage windows The steps below will cause an outage in Octopus Deploy. With all the prep work done, the outage window should be small. If possible, we recommend making these changes off-hours. In addition, you don't have to do them all in one outage window. You can move the database in one outage window and the file system in another. During your outage window, perform the following steps (skip the sections that don't apply). 1. Ensure you have a backup of your master key. 1. Enable [Maintenance Mode](/docs/administration/managing-infrastructure/maintenance-mode) to prevent anyone from deploying or making changes during the upgrade. 1. Stop the Octopus Deploy Windows service. ### Move the database 1. Back up the database. 1. Restore the database on the new SQL Server. 1. On your Octopus Server, run the following command to update the connection string (where "VALUE" is your connection string).
```powershell Set-Location "C:\Program Files\Octopus Deploy\Octopus" & .\Octopus.Server.exe database --connectionString="VALUE" ``` ### Move the file storage 1. Run RoboCopy one final time to pick up any new files. 1. Run the following command to update the paths. ```powershell Set-Location "C:\Program Files\Octopus Deploy\Octopus" $filePath = "YOUR ROOT DIRECTORY" & .\Octopus.Server.exe path --clusterShared "$filePath" & .\Octopus.Server.exe path --artifacts "$filePath\Artifacts" & .\Octopus.Server.exe path --taskLogs "$filePath\TaskLogs" & .\Octopus.Server.exe path --nugetRepository "$filePath\Packages" & .\Octopus.Server.exe path --imports "$filePath\Imports" & .\Octopus.Server.exe path --eventExports "$filePath\EventExports" & .\Octopus.Server.exe path --telemetry "$filePath\Telemetry" ``` :::div{.hint} Your version might not have all the above paths. Remove them from the script if you are running an older version of Octopus. - `Imports` was added in 2021.1 - `Telemetry` was added in 2020.x - `ClusterShared` was added in 2020.x ::: ### After moving database and file storage After you finish moving the database and file storage, it is time to turn back on your Octopus Deploy instance. 1. Turn back on the Octopus Deploy instance. If the instance fails to start, that indicates a database connection issue. 1. Log in to your instance. 1. Navigate to previous deployments. If you cannot see the task logs, that indicates a file storage issue. 1. Perform a couple of test deployments. 1. Assuming all goes well, disable maintenance mode. 1. Notify everyone of the new URL (if there is one). ## Adding additional nodes After configuring a load balancer and moving the database and files, adding a new node is trivial. 1. Create a new server to host Octopus Deploy. 1. Install the same version by downloading it from our [download archive](https://octopus.com/downloads/previous). 1. 
When the Octopus Manager loads, click the `Add this instance to a High Availability cluster` and follow the wizard. 1. Add that server to your load balancer. # Spaces Source: https://octopus.com/docs/administration/spaces.md With Spaces you can partition your Octopus Server so that each of your teams can only access the projects, environments, and infrastructure they work with from the spaces they are members of. Users can be members of multiple teams and have access to multiple spaces, but the entities and infrastructure they work with will only be available in the space it is assigned to. ## Spaces creates hard walls in your Octopus Server Spaces keeps the different projects and infrastructure your teams work with completely separate, which means something configured in **Space-A**, is not available to projects in **Space-B**. This makes it easier for large organizations with multiple teams using Octopus because each team member will only see the projects, environments, and infrastructure that is available in their space. If your organization has any of the following characteristics, you may find spaces extremely useful: - Many groups of engineers across many projects. - Requires separation of duties. - Completely autonomous teams of engineers, each responsible for their entire process. - Large number of projects or environments have created a cluttered dashboard and you just want to tidy them up. On the other hand, if you need to keep resources available to multiple teams on a system-wide basis, spaces will prevent you from sharing those resources. If this is the case, the default space is likely the best solution for you and your teams. By default, every instance of Octopus Server comes with a default space. However, if your organization is not planning to use multiple spaces, this default space can be safely ignored and doesn't require configuration or management. ## Managing spaces Spaces are managed by navigating to **Configuration ➜ Spaces**. 
An Octopus administrator, or a team member with sufficient permission, can create, remove, or modify spaces from this screen. It is also possible to [change or disable the default space entirely](#change-the-default-space). Each space has a logo, which is also shown in the [space switcher](#switching-between-spaces) to make it easy to identify which space the UI is currently focused on. There is also a search filter to quickly find the spaces that you are interested in managing. :::figure ![Spaces configuration page](/docs/img/administration/spaces/images/spaces-configuration.png) ::: ### The space manager Each space has a *space manager*. The space manager is the administrator for that space and is responsible for managing users and teams within the space and assigning permissions to them. When creating a new space, you are required to nominate a team member (or a team) to the role of space manager. This space manager is then responsible for [managing teams and permissions](/docs/security/users-and-teams) within that space. The user who creates a space doesn't necessarily need to be the space manager of the space. This enables a 'hands off' administrative approach suited to larger organizations or those who prefer to separate the duties of Octopus Server administration from the duties of team administration. Behind the scenes, a **space managers** team is created, and any users that are nominated to be a space manager are put in that team. This team cannot be created or deleted, and serves no other purpose than applying the correct space manager permissions. ### Create a space \{#create-a-space} New spaces are added from the configuration section of the portal. 1. To create a new space, navigate to **Configuration ➜ Spaces** and select **ADD SPACE**. 2. Give the space a name. 3. Give the space a manager. This can be individual users or teams. Either can be selected from the drop-down menu. Click **SAVE**. 4. Provide a description for the space. 5.
Optionally, upload a logo for the space. 6. Click **SAVE**. :::figure ![Add new space](/docs/img/administration/spaces/images/add-new-space.png) ::: ### Modify a space You can rename spaces, change their descriptions, give them new logos, change the space managers, or stop a space's task queue from processing. 1. Navigate to **Configuration ➜ Spaces** and select the space you want to modify. 1. Expand the field you would like to change. 1. Make your changes and click **SAVE**. :::figure ![Modify a space](/docs/img/administration/spaces/images/modify-space.png) ::: ### Delete a space You can delete spaces if you are the **space manager**. Deleting a space cannot be undone; the space and all of its contents, including projects, environments, releases, and deployment history, will be deleted. 1. Navigate to **Configuration ➜ Spaces** and select the space you want to delete. 1. Expand the **Task Queue Status** section, select the **Stop task queue** check-box, and click **SAVE**. 1. Click the overflow button and select **Delete**. 1. Enter the name of the space and click **DELETE**. ## Default space The **Default space** is provided to existing installations as a mechanism to ensure that the instance operates in much the same way as it did prior to upgrading to a version of Octopus that supports spaces. Enabled by default, its primary function is to provide an initial space for any existing resources. This also effectively hides the existence of spaces until you're ready to start using them. We create the default space when you install or upgrade your Octopus Server. In the case of an upgrade, we put all space-scoped resources (like Projects, Environments, etc.) into this space. For new installations, anything you create will be added to the default space. If you don't want to think about spaces, just leave everything in the Default space!
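For teams that script their Octopus configuration, the space operations above can also be driven through the Octopus REST API. The sketch below builds the JSON body for creating a space; the endpoint and field names follow the public REST API (as exposed by Octopus.Client), but the server URL, API key, and user ID are placeholders, so treat this as a starting point rather than a verified recipe:

```python
import json

def build_create_space_payload(name, manager_team_ids=None, manager_user_ids=None, description=""):
    """Build the JSON body for POST /api/spaces.

    Field names mirror the Octopus space resource; verify against
    your server's /api/spaces schema before relying on them.
    """
    return {
        "Name": name,
        "Description": description,
        "IsDefault": False,
        "TaskQueueStopped": False,
        "SpaceManagersTeams": manager_team_ids or [],
        "SpaceManagersTeamMembers": manager_user_ids or [],
    }

payload = build_create_space_payload(
    "Engineering",
    manager_user_ids=["Users-42"],  # hypothetical user ID for the space manager
    description="Space for the engineering team",
)

# To actually create the space (server URL and API key are placeholders):
# import urllib.request
# req = urllib.request.Request(
#     "https://your-octopus.example.com/api/spaces",
#     data=json.dumps(payload).encode(),
#     headers={"X-Octopus-ApiKey": "API-XXXXXXXX", "Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
print(json.dumps(payload, indent=2))
```

Note that creating spaces requires administrator-level permissions, and the `X-Octopus-ApiKey` header is the standard way to authenticate REST API calls.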
### Change or disable the default space \{#change-the-default-space} :::div{.warning} The following carries some minor downtime for any automation that relies on the default space being available. ::: To change the default space, follow these steps: 1. Navigate to **Configuration ➜ Spaces** and select the default space. 1. Expand the **Task Queue Status** section and select the **Stop task queue** check-box, and click **SAVE**. 1. Click the overflow button and select **Disable the default space**. 1. Enter the name of the space and click **YES I'M SURE**. Once you've done this, follow these steps: 1. Return to **Configuration ➜ Spaces** and select the space that you wish to nominate as the default space. 2. Click the overflow button and select **Enable the default space**. **Remove the default space** For organizations that are new to Octopus, especially those that make heavy use of spaces, a default space is not required, and you can remove the default space entirely. However, this comes with some considerations that should be weighed carefully against the needs of your organization. In addition to providing a home for existing resources, the default space allows any existing API calls that do not explicitly set a `Space Identifier` in the route to be routed to the default space. For example, in the case where the default space ID is `Spaces-1` then the route `/api/projects/my-project` is equivalent to `/api/Spaces-1/projects/my-project`. :::div{.warning} With a default space enabled, any REST API calls that do not specify a space in the URL will be assumed to be directed to the default space. **By turning off the default space**, this will no longer be the case. If you have a lot of bespoke automation relying on raw REST API calls, **you will need to make changes to ensure these scripts explicitly specify the space ID in the route**. Otherwise they will break with the default space turned off. 
::: This means that by disabling the default space - **you are opting into a non-backwards compatible scenario**, so be prepared! Things to check include: - Versions of Tentacle on target environments need to be upgraded to [Tentacle 4.0.0](https://octopus.com/downloads/2019.1.1) or later, otherwise they will not connect. - Scripts you've written that directly call the API. - Integrations with Octopus Server (like the Azure DevOps, TFS, and TeamCity plugins) are updated to their latest versions. - Community library templates that use the API are updated (you can [refer to this PR](https://github.com/OctopusDeploy/Library/pull/750) as a guide). To disable the default space, follow these steps: 1. Navigate to **Configuration ➜ Spaces** and select the default space. 1. Expand the **Task Queue Status** section and select the **Stop task queue** check-box, and click **SAVE**. 1. Click the overflow button and select **Disable the default space**. 1. Enter the name of the space and click **YES I'M SURE**. ## Switching between spaces \{#switching-between-spaces} When you log into the Octopus Web Portal, the first item on the navigation menu is the spaces menu. Click this icon to access the spaces you are a member of and select the space you need. ## System scoped or space scoped \{#system-scope-space-scoped} There is a hard barrier between spaces, so, for instance, a deployment target configured for Space-A isn't available to projects in Space-B. However, there are some things that aren't scoped to a space, and are available system-wide. The following table shows which Octopus resources are space-scoped, system-scoped, or scoped to both. :::div{.hint} If a resource isn't listed below, then it's space-scoped.
::: | Resource | Space-scoped | System-scoped | | --------------------- | ------------ | ------------- | | Environments | True | | | Lifecycles | True | | | Projects | True | | | Variable sets | True | | | Deployment targets | True | | | Tenants | True | | | Custom Step Templates | True | | | Octopus Server nodes | | True | | Authentication | | True | | Users | | True | | License | | True | | Events | True | True | | Teams | True | True | | Tasks | True | True | ## Automation changes to be aware of \{#automation-changes} As always, using our client libraries offers the best chance of a successful upgrade for your existing automation, and our latest release of Octopus Client has all the changes required to interoperate with any version of Octopus, as do most of our plugins for other build systems. However, due to the depth and breadth of the changes required to make spaces a reality, we weren't able to maintain backwards compatibility for the REST API in all cases. Please refer to the [release notes](https://octopus.com/downloads/compare?from=2018.12.1&to=2019.1.0) for a complete list of breaking changes. # Update Argo CD Application Image Tags Source: https://octopus.com/docs/argo-cd/steps/update-application-image-tags.md The Update Argo CD Application Image Tags step is responsible for iterating over your Argo CD Application's repository and updating the image tags of referenced container images. ## Container Images Add package references for each container image you would like to update when you run your deployment. Unreferenced container images in your manifests will not be changed by Octopus. When targeting a Helm-based application source, each referenced container image should have a populated **Helm image tag path** field. This is the YAML path to the specific field in your values file that contains the tag of the referenced container image (for example, `agent.image.tag`).
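For instance, given a values file shaped like the following (structure and names are illustrative, not from a specific chart), the **Helm image tag path** `agent.image.tag` points Octopus at the tag field to update:

```yaml
# values.yaml (illustrative)
agent:
  image:
    repository: docker.io/example-org/agent
    tag: "1.2.3"   # the field addressed by the path agent.image.tag
```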
This can be set in the **Reference a package** drawer when adding or editing a package reference. This is not required for directory or Kustomize sources. :::figure ![The Helm image tag path field in the Reference a package drawer](/docs/img/argo-cd/update-application-image-tags-helm-values-tag.png) ::: The behavior depends on what the Helm value contains: - If it contains only the tag, it will be replaced with the package's version, with no further validation or checking. - If it contains the tag, image name, and repository, these fields will be validated against the step package's properties to ensure the correct data is being inserted. - If the namespace/repository does not align with the step package, tag replacement will not be performed. :::div{.info} Using the step-based notation may not be appropriate for complex use cases (e.g. updating multiple sources from a single deployment). In such cases, [Helm Annotations](/docs/argo-cd/annotations/helm-annotations) may be required. Note: Helm Annotations will only be considered during step execution when **no** Helm image tag paths have been defined in the step directly. ::: If the application cluster's default registry has been changed, see [cluster annotations](/docs/argo-cd/annotations/cluster-annotations) to ensure the correct default registry is shared with Octopus. :::div{.info} These packages can then be used in an [external feed trigger](/docs/projects/project-triggers/external-feed-triggers), such that your cluster is automatically updated when new image versions become available. ::: ## Creating and Deploying a Release :::div{.info} The step will fail to execute if no git credentials exist for repositories referenced by your Argo CD Applications. As such, prior to execution, it is recommended to use the [Argo CD Applications View](/docs/argo-cd/steps/argo-cd-applications-view) to ensure no outstanding configuration is required. ::: When a release of the project is created, a snapshot of the versions of container images referenced in the step is taken.
For each application with relevant scoping annotations found during a deployment, Octopus will check out each repository using git credentials determined based on [repository restrictions](/docs/infrastructure/git-credentials#repository-restrictions). ### How Octopus updates image tags varies for each source type For Kubernetes YAML: - Octopus searches for Kubernetes resources which are known to reference images (CRDs are not included) - If a resource references an image configured as a referenced container image, the image tag is updated to match that in the release For Helm charts: - Image fields are extracted from the [Helm Annotations](/docs/argo-cd/annotations/helm-annotations) - Matching image tags in the `values.yaml` are replaced with container image versions from the release For Kustomize applications (i.e. the supplied path contains `kustomization.yaml`, `kustomization.yml` or `Kustomization`): - Octopus will *only* update the `newTag` field(s) found in the Kustomize file. No other files will be edited. Finally, changed files are committed and pushed back to the repo/branch specified by the Argo CD Application. A PR will be created (rather than merging to the `targetRevision` branch) if configured in the step UI. # Update Argo CD Application Manifests Source: https://octopus.com/docs/argo-cd/steps/update-application-manifests.md The Update Argo CD Application Manifests step is responsible for generating a set of Argo CD Application manifests from a set of [Octostache](https://github.com/OctopusDeploy/Octostache) template files, which have been populated with Octopus variables. This step is agnostic of the application source repository content. Regardless of what's in the source repository, the step writes populated templates to the path specified in the Argo CD application source. The directory structure in the input template source is maintained when the populated templates are copied into your Argo CD application's path.
If the target application source has a "path" field set, the template directory structure will be copied under this path. If the target application source is a "ref" source (without a path field), the template directory structure will be copied into the root directory of the repository. If a source has both a "ref" and "path" field, the step will take no action, due to ambiguity around the desired output path. If required, the output path to which the templates are copied can be overridden via the `argo.octopus.com/path.<source_name>` annotation (where `<source_name>` is the name defined in the source to be updated, or blank if the application has only a single source). If provided, it should specify the path, from the root of the repository, into which the step should copy the populated templates. The following provides instructions on how to configure an Update Manifests step, constraints on its usage, and how it executes. ## Manifest Templates 1. Specify the set of input template files, which can be sourced from either: - A git repository (requires a URL, credentials, and branch name), or - A package from a configured feed (e.g. a zip file, NuGet package, etc.) 2. Specify the path to your templates - A subfolder (or file) within the previously specified repository/package which contains the template files to be used - If the string entered is a directory, all files (recursively) within that directory are considered templates - If the string entered is a single file, only that file will be considered a template. :::div{.info} A single file will be copied into the *root* directory of the path defined in the mapped Argo CD Application. When a directory is specified, the structure below the specified path is maintained when moving files into the Argo CD Application's repository. ::: 3. Container images can be defined, but are optional. These let you attach automated processes, such as external feed triggers, to the project.
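As an example, the output path override described earlier in this section could look like the following on an Application with a single named source (the application, repository, and path names are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
  annotations:
    argo.octopus.com/environment.my-source: development
    argo.octopus.com/project.my-source: my-app-project
    # Copy populated templates into this path (relative to the repository root)
    argo.octopus.com/path.my-source: environments/dev
spec:
  source:
    repoURL: https://github.com/example-org/my-app.git
    targetRevision: HEAD
    path: ./
    name: my-source
```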
## Git Commit Settings In addition to the [common Git commit settings](/docs/argo-cd/steps#git-commit-settings), this step also provides the following option: ### Purge Argo CD Source Folder Purging the source folder clears the `Path` directory of the Argo CD Application's repository prior to adding newly templated files. - This can be useful when resources have been removed from your input templates and also need to be removed from the target repository. ## Creating and Deploying a Release :::div{.info} The step will fail to execute if no git credentials exist for repositories referenced by your Argo CD Applications. As such, prior to execution, it is recommended to use the [Argo CD Applications View](/docs/argo-cd/steps/argo-cd-applications-view) to ensure no outstanding configuration is required. ::: When deploying a release containing an Update Argo CD Application Manifests step, Octopus will: - Collect the input templates configured in the step - Populate the templates with Octopus [variables](/docs/best-practices/deployments/variables) - For each mapped Argo CD Application - Clone each source repository - Copy populated templates into the source repository - Changed files are committed and pushed back to the repo/branch as specified in the Argo CD Application - A PR will be created (rather than merging to the `targetRevision` branch) if configured in the step UI :::div{.warning} If an input template references a [Sensitive Variable](/docs/projects/variables/sensitive-variables), the deployment will fail. This ensures sensitive data is not persisted in the target Git repository in plain text. ::: ## Example Manifests ### Config Map The following represents a template of a ConfigMap. The `database_url` is set via the user-specified project variable `DB_NAME`, whose value can change based on Octopus variable scoping mechanisms (e.g. via environment or tenant).
The time of the deployment, defined in the inbuilt `Octopus.Deployment.Created`, is written to the `deployment_created_at` field. ```yaml apiVersion: v1 kind: ConfigMap metadata: name: my-app-config data: # Key-value pairs for configuration data log_level: INFO directory: "#{Octopus.Environment.Name}" feature_flag_enabled: "true" database_url: "jdbc:postgresql://mydb.example.com:5432/#{DB_NAME}" deployment_created_at: "#{Octopus.Deployment.Created}" ``` # Environments, Deployment Targets, and Target Tags Source: https://octopus.com/docs/best-practices/deployments/environments-and-deployment-targets-and-roles.md [Deployment targets](/docs/infrastructure/deployment-targets/) are what Octopus Deploy deploys to. They can be Kubernetes (K8s) clusters, Windows servers, Linux servers, Azure Web Apps, and more. Please refer to [Deployment targets](/docs/infrastructure/deployment-targets/) for an up to date list on deployment targets. [Environments](/docs/infrastructure/environments) are how you organize your deployment targets into groups that represent different stages of your deployment pipeline. These stages are typically given names such as **development**, **test**, and **production**. [Target tags](/docs/infrastructure/deployment-targets/target-tags) (formerly target roles) are a filter to select specific deployment targets in an environment. ## Deployment Target, Environment, and Target Tag relationship \{#deployment-target-environment-and-role-relationship} Environments are how you group deployment targets in a stage in your deployment pipeline. Target tags are how you identify which deployment targets you wish to deploy to in that specific stage. When you register a deployment target, you must provide at least one environment and one target tag. 
:::figure ![Environments and target tags for a deployment target](/docs/img/getting-started/best-practices/images/registering-deployment-target.png) ::: In the deployment process, you assign steps to run on deployment targets with specific target tags. :::figure ![Deployment process target tag assignment](/docs/img/getting-started/best-practices/images/target-roles-in-deployment-process.png) ::: For example, imagine you have three deployment targets in the **development** environment with the following target tags: - dev-server-01: `hello-world`, `hello-world-api`, `hello-world-ui`, and `IIS-Server-2019` tags - dev-server-02: `hello-world-api` and `IIS-Server-2019` tags - dev-server-03: `octo-petshop-api` and `IIS-Server-2019` tags The deployment process from above targets the `hello-world-api` tag. When a deployment to the **development** environment is triggered, Octopus will only select the two servers assigned to the **development** environment AND with the `hello-world-api` target tag. :::figure ![Octopus selecting deployment targets](/docs/img/getting-started/best-practices/images/selecting-target-roles.png) ::: :::div{.hint} Assigning multiple target tags to a deployment step results in an OR statement. For example, adding the target tag `octo-petshop-api` to the deployment process and deploying to the **development** environment results in the following filtering logic: all servers in the **development** environment AND the servers with the target tags `hello-world-api` OR `octo-petshop-api`. For software developers, you can rewrite that sentence as: `If (server.Environment == "development" && (server.TargetTag == "hello-world-api" || server.TargetTag == "octo-petshop-api"))` Using the example from above, Octopus would select all three servers.
::: ## Environment and Target Tag usage differences \{#environment-and-role-usage-differences} Environments are designed as a macro grouping of deployment targets meant for use across multiple projects, variable sets, and more. Below is a list of items where environments are used: - Lifecycles - Project Variable scoping - Variable Set scoping - Log filtering - Tenant variable scoping - Accounts - Certificates - Deployment targets - Process step scoping (only run a step in a specific environment) Target tags are designed as a micro grouping of deployment targets meant to deploy a specific project or application component. Below is a list of items where tags are used: - Project Variable scoping - Process step scoping (only run a step on targets with specific tags) :::div{.hint} A deployment target can be assigned to 1 to N environments and 1 to N target tags. ::: ## Environments Adding an environment is a non-trivial task, as it involves adding/updating additional deployment targets, variable scoping, lifecycles, accounts, certificates, and more. There is a direct correlation between a high number of environments and poor maintainability, usability, and performance. Our recommendations for environments are: - Keep the number of environments per space between 2 and 10. - Name environments to match your company's terminology so you can re-use them across projects. Common names include **development**, **test**, **QA**, **acceptance**, **uat**, and **production**. - If you have between one and five data centers (including cloud regions), it's okay to have an environment per data center. For example, **Production - AU** for a data center in Australia and **Production - Central US** for the Azure Central US region. If you have more than five data centers, consider [tenants](/docs/tenants) where each data center is a tenant. - It's okay to have team-specific environments, similar to data center environments.
Although if you have more than five or six teams, consider [tenants](/docs/tenants) where each team is a tenant. Antipatterns to avoid are: - Project names in your environments. An environment name of **QA - OctoPetShop** indicates you need to either have more specific target tags on your deployment targets or you need to leverage spaces to isolate that application. Project-specific environments are a good indicator to consider [spaces](/docs/administration/spaces). - Branch names in your environment names. Consider using temporary [tenants](/docs/tenants) for your branch names or storing your branch name in a pre-release tag in the release version. - A single deployment environment, **production**. You should have at least one test environment to test and verify your release. ## Target Tags \{#roles} There is also a direct correlation between generic target tags, such as `web-server`, and the number of environments. As stated earlier, adding an environment is a non-trivial task, and leads to maintenance and performance overhead. The goal is to keep the number of environments low. Using the generic target tag `web-server` will pick all the servers in a specific environment. For example, say you have 25 servers in **production** with the tag `web-server`. When you deploy to **production**, Octopus will pick all 25 servers, but you only want to deploy to 4 of them. There is no automatic way to limit the servers picked without either creating a specific tag like `hello-world-api` or creating a new environment. Generic target tags also impact your future flexibility. For example, using `web-server` for 100 projects would require all targets in all environments to host those same 100 projects. If you were to decide in six months to split up those servers, you'd have to update over 100 projects. Our recommendations for target tags are: - Avoid generic tags, such as `web-server`, whenever possible.
- Use specific tags, such as `hello-world-api`, to uniquely identify a project and component to deploy. Use those specific tags in your deployment process. - Use architecture and platform-specific tags, for example, `IIS-Server-Windows-2019`. Use those tags for everyday maintenance tasks, such as updating to the latest version of Node.js or installing a patch. :::div{.hint} Add an environment for a business need: a new data center is brought online, you are adding your disaster recovery location into Octopus, or you are adding the ability for customers to test changes prior to **production**. Add a new target tag to group servers and filter servers within each environment. ::: ## Further reading For further reading on environments, deployment targets, and target tags in Octopus Deploy please see: - [Deployment Targets](/docs/infrastructure/deployment-targets) - [Environments](/docs/infrastructure/environments) - [Target Tags](/docs/infrastructure/deployment-targets/target-tags#create-target-roles) # Disaster Recovery Source: https://octopus.com/docs/best-practices/self-hosted-octopus/disaster-recovery.md This guide will help you set up a hot/cold disaster recovery configuration for an Octopus Deploy instance. :::div{.hint} This implementation guide will help set up a hot/cold disaster recovery configuration. In our research, a multi-zonal high-availability instance will cover 90% of disaster recovery cases. A secondary, or disaster recovery, instance is meant for when an entire cloud region goes offline. If you are looking for more details on our recommendations, please refer to our white paper on [Best Practices for Self-Hosted Octopus Deploy HA/DR](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). ::: You must consider a disaster recovery solution for each of the following components of Octopus Deploy. - **URL / load balancer** The UI, API, and Polling Tentacle ingress for the DR Octopus Deploy instance.
Ideally, you'd have two URLs: one specific to the DR instance for testing, and a global URL to switch between the primary and secondary data center instances. - **Octopus Server nodes** These run the Octopus Server service. They serve user traffic and orchestrate deployments. You must create or start these nodes in the secondary data center. - **A database** This database stores most of the data used by the Octopus Server nodes. You'll need a mechanism to back up the database's data to the secondary data center and access it once a DR event occurs. - **Shared storage** Similar to the database, you'll need a mechanism to back up the shared storage to the secondary data center and access it once a DR event occurs. ## High Availability configuration required For a disaster recovery plan to work, you must configure your Octopus Deploy instance to support high availability. The default installation of Octopus Deploy will configure SQL Server and the file share to run on the same virtual machine or Kubernetes cluster as the Octopus instance. Configuring an instance for high availability will require you to: - Move the database to an SQL Server hosted on a different server than the Octopus Deploy instance. - Move the files to a network file share hosted on a different server than the Octopus Deploy instance. - Leverage a load balancer or network traffic device for user access to the UI and API. ## High Availability and Disaster Recovery Before setting up a disaster recovery instance, we recommend following our guide for configuring [High Availability](/docs/best-practices/self-hosted-octopus/high-availability). If you are hosting your Octopus Deploy instance in a cloud provider, you can leverage availability zones within each cloud region. Managed services, such as Azure SQL or AWS RDS, often provide zonal redundancy with minimal configuration. They will automatically fail over to an availability zone if one of the zones were to go offline.
We recommend using Octopus Deploy's high availability functionality for a hot/cold configuration between cloud regions or self-managed data centers. - Hot/hot configurations between cloud regions or data centers are unsupported. - Octopus Deploy is sensitive to network latency, so any nodes in the secondary data center will have a degraded experience. - Expect a nearly unusable experience on the nodes running in the secondary data center if the secondary data center requires a connection via an undersea cable (North America -> Europe, Europe -> Australia, etc.). - Due to latency, public cloud providers do not provide a hot/hot configuration for their managed services between their cloud regions. - Hot/hot configurations between availability zones within a cloud region in a public cloud provider are supported. That should cover 90% of all possible disaster recovery use cases. - Because of those factors, a hot/warm configuration will cost a lot of money per year for something that is almost never used. ## Disaster Recovery Events A disaster recovery event consists of two sub-events. - Starting the Octopus Deploy instance in the secondary data center or cloud region. - Restarting the Octopus Deploy instance in the primary data center or cloud region. The challenge you'll face for either is getting data and files copied between data centers or cloud regions while keeping any data loss to a minimum. Most tooling and managed services must asynchronously copy data due to latency. ### Failover to Secondary Below are steps to perform when starting an instance in the secondary data center or cloud region. :::div{.hint} **Important:** Before failing over to the secondary region or data center, consider why the outage happened. Was it a DNS configuration issue, and will the region return online in under an hour? Was it a weather event that caused power outages with no expected time frame for recovery?
Or was it an earthquake that destroyed all the availability zones in the region? It might be best to wait until the primary region is back online. ::: - Database and File Storage - If using geo-replication: - Promote the read-only database in the secondary region as the primary database. - "Failover" or promote the read-only file storage as the primary one. - If forgoing geo-replication: - Create the database and file storage from the most recent backup. - [Update the connection string and file storage configuration](/docs/administration/managing-infrastructure/moving-your-octopus/move-the-database#step-by-step-process) entries to the database and file storage in the secondary region. - Octopus Deploy - Create or start the Octopus nodes in the secondary region. - Enable [maintenance mode](/docs/administration/managing-infrastructure/maintenance-mode). - Ensure you remove all the nodes from the primary region by going to **Configuration -> Nodes.** - Update the task cap on the nodes in the secondary region to your desired amount (the default is five). - Perform test deployments. - Disable [maintenance mode](/docs/administration/managing-infrastructure/maintenance-mode). - Load Balancer - Update the load balancer to direct user and Polling Tentacle traffic to the secondary region :::div{.hint} **Important:** All nodes must run the same version of Octopus Deploy. During a disaster recovery event, avoid upgrading Octopus Deploy unless directed by our support engineers. If you upgrade the secondary data center, you'll need to upgrade your nodes in the primary data center when it comes back online. ::: ### Move back to Primary Below are steps to perform once the disaster recovery event is over and you can return to the primary data center or region. - Octopus Deploy: - Turn off all the nodes in the primary region. - Turn off all the nodes in the secondary region.
- Database: - If using geo-replication - follow the cloud provider's documentation to "failover" to the primary region. Wait until the replication has finished replicating all data to the primary region. - If forgoing geo-replication - create a backup of the secondary region's database and restore it over the existing database in the primary region. - File Storage: - If using geo-replication - follow the cloud provider's documentation to "failover" to the primary region. Wait until the replication has finished. - If forgoing geo-replication - copy all the files from the secondary region to the primary region. - Octopus Deploy: - Turn on all the Octopus nodes in the primary region. - Enable [maintenance mode](/docs/administration/managing-infrastructure/maintenance-mode). - Remove any nodes from the secondary region by going to **Configuration -> Nodes.**   - Perform test deployments. - Disable [maintenance mode](/docs/administration/managing-infrastructure/maintenance-mode). - Load Balancer - Route user and polling tentacle traffic back to the primary region. - After verifying the primary region is back online, destroy or turn off the virtual machines or delete the containers in the secondary region. ### Disaster recovery test recommendations All disaster recovery plans must be periodically tested so you know they'll work when a disaster occurs. If you are using managed services, you'll likely impact users when you test the disaster recovery plan. That's because you use the managed services' "failover" functionality and route all user traffic to the secondary region. To test your disaster recovery plan without impacting your users, you must create a new file system and database from backups. Create new Octopus nodes and point them to those new resources. You can use this [script from our documentation](https://oc.to/disable-all-resources-script) to disable all the targets, triggers, and anything else to prevent accidental deployment. 
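If you prefer to script the lock-down yourself rather than use the linked script, the general shape is to fetch each deployment target, set its `IsDisabled` flag, and save it back. The sketch below shows only the data transformation; the `IsDisabled` property and the machine endpoints follow the public Octopus REST API, while the server URL, space ID, and machine records are placeholders:

```python
def disable_machines(machines):
    """Return copies of the given machine resources marked as disabled.

    Each dict mirrors an Octopus machine resource, which carries an
    IsDisabled flag; disabled targets are skipped by deployments.
    """
    return [{**m, "IsDisabled": True} for m in machines]

# In a real script you would GET the machines, apply the change, then
# PUT each machine back, e.g. (placeholders throughout):
# GET  https://your-octopus.example.com/api/Spaces-1/machines/all
# PUT  https://your-octopus.example.com/api/Spaces-1/machines/{machine-id}

machines = [
    {"Id": "Machines-1", "Name": "dr-web-01", "IsDisabled": False},
    {"Id": "Machines-2", "Name": "dr-web-02", "IsDisabled": False},
]
disabled = disable_machines(machines)
```

The same pattern applies to triggers and other resources you want to hold back during a DR test; check each resource's schema for the equivalent disable flag before relying on it.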
Whatever your disaster recovery plan, we recommend testing it to be as realistic as possible. ### Mitigating Risk Using a public cloud provider has multiple benefits. Most managed services natively support geo-redundancy, have well-documented business continuity plans, and more.   However, what is rarely discussed is what happens if all the zones in a cloud region go offline.  Everyone in that region will start executing their disaster recovery plans.  Some cloud providers, such as Azure, have a preferred secondary region via their region pairs.  That means everyone else using the primary region will attempt to create virtual machines and other resources in the secondary region.  That can delay your recovery time. If Octopus Deploy is a critical application for your company, we recommend staging the infrastructure.  All the cloud resources are pre-configured but turned off to save costs. When hosting Octopus on Windows virtual machines (VMs), we recommend creating new VMs in the secondary region each time you upgrade the instance. That's preferred over long-lasting VMs. Long-lasting VMs are typically outdated, with older versions of Octopus, or haven't had the latest Windows patches. When hosting Octopus on Kubernetes, ECS, or ACS containers, you only need to ensure the clusters are running. The Octopus container already has Octopus installed. We do not clean up old versions of images. You can pull them on demand.  If speed is an issue, you can pre-fetch images. How you pre-pull images will depend on the provider. We recommend consulting your provider's documentation. :::div{.warning} Due to how Octopus stores the paths to various BLOB data (task logs, artifacts, packages, imports, event exports etc.), you cannot run a mix of both Windows Servers, and Octopus Linux containers connected to the same Octopus Deploy instance. A single instance should only be hosted using one method. 
::: ## Infrastructure Below are our recommendations for configuring the necessary infrastructure for a disaster recovery instance. ### Database recommendations For SQL Server, we recommend a managed offering, such as AWS RDS, Azure SQL, or GCP Cloud SQL. Configure zonal redundancy or Always On availability groups: - [Azure SQL zone redundant databases](https://learn.microsoft.com/en-us/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell) - [AWS SQL Server Always On availability groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerMultiAZ.html) - [GCP Cloud SQL for SQL Server high availability](https://cloud.google.com/sql/docs/sqlserver/high-availability) In the secondary region, create a read-only copy - or read replica - and use asynchronous geo-replication. This database is only used when all availability zones in the primary region go offline. - [Azure failover groups](https://learn.microsoft.com/en-us/azure/azure-sql/database/active-geo-replication-overview?view=azuresql) - [AWS - Creating a read-only replica in a second region](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.XRgn) - [GCP - Cross-region read replica](https://cloud.google.com/sql/docs/postgres/replication/cross-region-replicas#promote-a-replica) If you wish to learn more about configuring Octopus Deploy with a specific hosting option, please refer to our installation guides. - [Self-Managed SQL Server](/docs/installation/sql-database/self-managed-sql-server) - [AWS RDS](/docs/installation/sql-database/aws-rds) - [Azure SQL](/docs/installation/sql-database/azure-sql) - [GCP SQL](/docs/installation/sql-database/gcp-cloud-sql) ### File storage recommendations We recommend using managed file storage, such as AWS FSx or EFS, Azure File Storage, or GCP Filestore. Ensure you configure the file share for at least zonal replication.
If available, consider geo-replication for a read-only copy in the secondary cloud region. Depending on the cloud provider, you can have a read-only copy of the file storage automatically created. For example, Azure File Storage's GZRS creates 3 copies of the files in the primary region, with a fourth automatically created in the secondary region. We've included guides for the most common file storage options we encounter. - [Local File Storage](/docs/installation/file-storage/local-storage) - [AWS File Storage](/docs/installation/file-storage/aws-file-storage) - [Azure File Storage](/docs/installation/file-storage/azure-file-storage) - [GCP File Storage](/docs/installation/file-storage/gcp-file-storage) ### Load balancer recommendations Use a global load balancer to route Octopus Deploy http/https traffic between the primary and secondary regions. You benefit from having a single URL for users to access Octopus. In a DR event, route all traffic to the secondary region. Octopus Deploy only has two possible inbound connections: 1. Web UI / Web API over http/https (ports 80/443) 2. Polling Tentacles over TCP (port 10943) #### User Interface Load Balancer We've created guides for configuring many popular load balancers. - Local Options - [Using NGINX as a reverse proxy with Octopus](/docs/installation/load-balancers/use-nginx-as-reverse-proxy) - [Using IIS as a reverse proxy with Octopus](/docs/installation/load-balancers/use-iis-as-reverse-proxy) - [Configuring Netscaler](/docs/installation/load-balancers/configuring-netscaler) - [AWS Load Balancers](/docs/installation/load-balancers/aws-load-balancers) - [Azure Load Balancers](/docs/installation/load-balancers/azure-load-balancers) - [GCP Load Balancers](/docs/installation/load-balancers/gcp-load-balancers) #### Polling Tentacles Polling Tentacles deserve special attention due to how they work with Octopus Deploy. You must register each node that processes tasks with every Polling Tentacle.
We recommend a dedicated URL for each node in the primary region and routing all traffic through a load balancer or a traffic manager. When you fail over to the secondary region, update the dedicated URLs to point to a corresponding node in the secondary region. For example, a unique address per node with the default port of `10943` would be: - Node1: Octo1.domain.com:10943 - Node2: Octo2.domain.com:10943 - Node3: Octo3.domain.com:10943 ### Octopus Deploy Nodes Generally, during a disaster recovery event, you'll need to add nodes to an existing high availability cluster.  The difference is you will be replacing all the existing nodes from the primary data center or region.  Octopus Deploy stores the nodes in the database.  Because you restored a copy of the database, all the nodes in the primary data center will still be in the database.  Part of the replacement process will remove those pre-existing nodes.   :::div{.hint} **Important:** All nodes must run the same version of Octopus Deploy.  During a disaster recovery event, avoid upgrading Octopus Deploy unless directed by our support engineers.  If you upgrade the secondary data center, you'll need to upgrade your nodes in the primary data center when it comes back online. ::: The process for replacing a node is: 1. Ensure the new host, Windows or Containers, can connect to the Octopus Deploy database and file storage. 1. Run a script to configure the Octopus Server node instance on a Windows machine or start a new container. You'll need to provide the master key and database connection information. For containers, you'll also need to provide the volume mounts. 1. Add that new node to the load balancers. 1. Update the virtual address for the polling tentacles to point to the new node. 1. Remove the previously existing node from the nodes table by going to **Configuration -> Nodes** in the Octopus Deploy UI (click the overflow menu `...` next to the node to remove).  
Failure to do so could result in your instance being out of compliance with your license, and you'll be unable to deploy. :::div{.warning} Due to how Octopus stores the paths to various BLOB data (task logs, artifacts, packages, imports, event exports etc.), you cannot run a mix of both Windows Servers, and Octopus Linux containers connected to the same Octopus Deploy instance. A single instance should only be hosted using one method. ::: :::div{.hint} Because all the configurations are stored in the database and blob storage, you can delete all the nodes and create new ones if desired. ::: We recommend writing scripts to automate this process.  Below are some scripts to start automating the adding of nodes to existing clusters. - [Octopus Server on Windows](/docs/installation/automating-installation) - [Octopus Server Linux Container](/docs/installation/octopus-server-linux-container) - [Octopus Server in Kubernetes](/docs/installation/octopus-server-linux-container/octopus-in-kubernetes) # Version automation with Service Fabric application packages Source: https://octopus.com/docs/deployments/azure/service-fabric/version-automation-with-service-fabric-application-packages.md In this section, we will discuss some ways Octopus Deploy can help with versioning your Service Fabric applications. Versioning in Service Fabric is a complex topic and the ideas discussed here are suggestions and possible options, not hard and fast rules. ## Application and Service Versions A Service Fabric application is not a single physical "thing". It is the combination of one or more services. Each service has its own individual version, based on its code and configuration versions. The combination of service versions then make up the overall application version. ### Code and config versioning As mentioned above, each service that makes up an application can be versioned independently. 
One strategy for managing these versions is to have the developers manually update them in the solution's manifest files. This is how the [Visual Studio based deployment model](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-tutorial) works, and is the default behavior you will get from Octopus Deploy if no other action is taken. When using an automated build system as part of a Continuous Delivery pipeline, it is common to stamp all binaries in the build as a set, with the same version number. Mature build tools will have a mechanism for easily managing the version number and assigning it to the assemblies during the build. The service code versions in the manifest XML files should also be assigned the same version number. The build tools are less likely to have an easy way to manage this. Fortunately, Octopus Deploy has a way to manage this, as we'll see below. The first step to setting this up is to update the service's manifest with specific variable names that we'll define in Octopus later. Repeat this for each service, using `Service_CodeVersion` for all services and varying `MyStatelessService_ConfigVersion` per service. The service manifest below is illustrative (the element names follow the standard Service Fabric schema, and the `_` separator in the combined version is just one option):

```xml
<ServiceManifest Name="MyStatelessServicePkg"
                 Version="#{Service_CodeVersion}_#{MyStatelessService_ConfigVersion}"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="MyStatelessServiceType" />
  </ServiceTypes>
  <CodePackage Name="Code" Version="#{Service_CodeVersion}">
    <EntryPoint>
      <ExeHost>
        <Program>MyStatelessService.exe</Program>
      </ExeHost>
    </EntryPoint>
  </CodePackage>
  <ConfigPackage Name="Config" Version="#{MyStatelessService_ConfigVersion}" />
</ServiceManifest>
```

Using this approach, the service's overall version is always a combination of its code and config version. Next, we use a similar approach to build the application's overall version, based on the services' code version and config versions. Again, the manifest is illustrative:

```xml
<ApplicationManifest ApplicationTypeName="MyApplicationType"
                     ApplicationTypeVersion="#{Service_CodeVersion}_#{MyStatelessService_ConfigVersion}"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="MyStatelessServicePkg"
                        ServiceManifestVersion="#{Service_CodeVersion}_#{MyStatelessService_ConfigVersion}" />
  </ServiceManifestImport>
</ApplicationManifest>
```

If you have multiple services, you'll probably want a more efficient abbreviation scheme to keep the version strings to a reasonable length. From the code side, that's all there is to do.
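The `#{...}` tokens in the manifest are Octostache variable placeholders, which Octopus replaces with the matching variable values at deployment time. A rough Python sketch of that plain token substitution (real Octostache also supports filters and conditionals; the manifest fragment and values here are hypothetical):

```python
import re

def substitute(template, variables):
    """Replace #{Name} tokens with variable values; leave unknown tokens as-is."""
    return re.sub(r"#\{([^}]+)\}",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  template)

fragment = 'Version="#{Service_CodeVersion}_#{MyStatelessService_ConfigVersion}"'
variables = {
    "Service_CodeVersion": "2.3.1",              # flows from the package version
    "MyStatelessService_ConfigVersion": "1.0.0", # bumped by hand on config changes
}
print(substitute(fragment, variables))
# → Version="2.3.1_1.0.0"
```

Because every token resolves from one variable set, bumping a single Octopus variable updates the version everywhere it appears.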
Next, let's look at the Octopus Deploy variables we now need to define:

| Name | Value | Scope |
| ---- | ----- | ----- |
| Service_CodeVersion | `#{Octopus.Action.Package.PackageVersion}` | |
| MyStatelessService_ConfigVersion | 1.0.0 | |

The important part of this is `Octopus.Action.Package.PackageVersion`, which is the version taken from the package that was uploaded to the Octopus package feed. This lets us easily flow the version number from the build through to the deployment. From this point the code and overall versions will all be handled automatically. The config versions, though, are a little more complicated and still require some manual handling: each service may have any number of configuration values, and when any one of them changes, the service's config version should be bumped. This isn't something that Octopus has automatic support for, so these versions have to be handled manually. To illustrate with an example, consider an application that is made up of two services (one that needs a connection setting and one that needs a port setting). We might end up with Octopus variables as follows:

| Name | Value | Scope |
| ---- | ----- | ----- |
| MyStatelessService1_ConfigVersion | 1.0.0 | |
| MyStatelessService1_Connection | abc | |
| MyStatelessService2_ConfigVersion | 1.0.0 | |
| MyStatelessService2_Port | 8000 | |

If the connection setting for the first service was changed, then the resulting variables should be:

| Name | Value | Scope |
| ---- | ----- | ----- |
| MyStatelessService1_ConfigVersion | **1.0.1** | |
| MyStatelessService1_Connection | **xyz** | |
| MyStatelessService2_ConfigVersion | 1.0.0 | |
| MyStatelessService2_Port | 8000 | |

### Environments and tenants

Any feature of Octopus that causes one of the service variables to be scoped creates further complexity, as the version needs to also be scoped to the most granular level that the variables are being scoped to.
To illustrate, let's expand on the previous example and say the connection for the first service is environment specific. Now the variables would look like this:

| Name | Value | Scope |
| ---- | ----- | ----- |
| MyStatelessService1_ConfigVersion | 1.0.0 | Dev |
| MyStatelessService1_Connection | abc | Dev |
| MyStatelessService1_ConfigVersion | 1.0.0 | UAT |
| MyStatelessService1_Connection | def | UAT |
| MyStatelessService2_ConfigVersion | 1.0.0 | |
| MyStatelessService2_Port | 8000 | |

Any changes to the variables will also change the version in the same environment scope. For example, let's say the connection for the UAT environment needs to change, then the resulting variables would be:

| Name | Value | Scope |
| ---- | ----- | ----- |
| MyStatelessService1_ConfigVersion | 1.0.0 | Dev |
| MyStatelessService1_Connection | abc | Dev |
| MyStatelessService1_ConfigVersion | **1.0.1** | UAT |
| MyStatelessService1_Connection | **xyz** | UAT |
| MyStatelessService2_ConfigVersion | 1.0.0 | |
| MyStatelessService2_Port | 8000 | |

# Import certificate to Windows certificate store Source: https://octopus.com/docs/deployments/certificates/import-certificate-step.md The *Import Certificate* step can be used to import a certificate managed by Octopus into a Windows Certificate Store. ## Import details ### Store location The certificate can be imported to the *Local Machine* or *Current User* locations, or you can enter a *Custom User* to install the certificate for. ### Store name The store name can be one of the built-in Windows stores, or you can define a custom store name to use. ### Private key If the certificate has a private key, it can be marked as exportable, and access can be granted to specific users. The Administrators group on the target machine will always be granted access to the private key.
:::figure ![](/docs/img/deployments/certificates/images/import-certificate-step-edit.png) ::: ## Recommended practice :::div{.hint} It is recommended to allow Octopus to perform the initial import of a certificate. ::: This avoids potential issues with accessing certificates imported by different accounts. If the certificate is already imported on the target machine and issues are encountered, try removing the certificate. # Using variables in scripts Source: https://octopus.com/docs/deployments/custom-scripts/using-variables-in-scripts.md Octopus allows you to [define variables](/docs/projects/variables/) to customize your deployments. These variables, along with some [predefined variables](/docs/projects/variables/system-variables), will automatically be made available to your scripts as global variables. :::div{.warning} **All variables are strings** Note that in scripts **all Octopus variables are strings** even if they look like numbers or other data types. You will need to cast to the appropriate type before using the value if you need something other than a string. ::: Let's consider an example where we have defined a project variable called `MyApp.ConnectionString`.
PowerShell

```powershell
# It's a good idea to copy the value into a local variable to avoid quoting issues
$connectionString = $OctopusParameters["MyApp.ConnectionString"]
Write-Host "Connection string is: $connectionString"
```
C#

```csharp
// It's a good idea to copy the value into a local variable to avoid quoting issues
var connectionString = OctopusParameters["MyApp.ConnectionString"];
Console.WriteLine("MyApp.ConnectionString: " + connectionString);
```
Bash

```bash
# It's a good idea to copy the value into a variable to avoid quoting issues
connectionString=$(get_octopusvariable "MyApp.ConnectionString")
echo "Connection string is: $connectionString"
```
F#

```fsharp
// It's a good idea to copy the value into a variable to avoid quoting issues
// tryFindVariable : name:string -> string option
let connectionString = Octopus.tryFindVariable "MyApp.ConnectionString"
match connectionString with
| Some x -> printf "Connection string is: %s" x
| None -> printf "Connection string not found"

// Or one of the simplified versions

// Throws KeyNotFoundException when variable does not exist
// findVariable : name:string -> string
let connectionString = Octopus.findVariable "MyApp.ConnectionString"

// Returns default value when variable does not exist
// findVariableOrDefault : defaultValue:string -> name:string -> string
let connectionString = Octopus.findVariableOrDefault "Default Value" "MyApp.ConnectionString"
```
Python3

```python
connectionString = get_octopusvariable("MyApp.ConnectionString")
print(connectionString)
```
:::div{.success} To see the F# API available to your F# scripts, take a look at our [F# signature file](https://github.com/OctopusDeploy/Calamari/tree/master/source/Calamari.Common/Features/Scripting/FSharp/Bootstrap.fsi). ::: ## Variables in PowerShell scripts {#variables-in-powershell} In PowerShell we have pre-defined some script-scoped variables for you as a convenience. Consider the same example as before: a variable named "MyApp.ConnectionString" will be available as both: - `$OctopusParameters["MyApp.ConnectionString"]` - `$MyAppConnectionString` In the first form, variable names appear just as they do in the Octopus Web Portal, while in the second, special characters have been removed. The first form is the most flexible, but in some cases the second form may be more convenient. # MySQL flyway deployment Source: https://octopus.com/docs/deployments/databases/mysql-flyway.md [Flyway](https://flywaydb.org/) is a popular open source [migrations-based](https://octopus.com/blog/sql-server-deployment-options-for-octopus-deploy) database deployment tool supported by Redgate. It's a command-line utility that uses Java to execute script files against several database technologies such as Microsoft SQL Server, MySQL, MariaDB, and PostgreSQL. There is a free Community edition, and paid Pro and Enterprise versions available. This guide demonstrates how to use Flyway with a MySQL database. ## Include Flyway with your project To add Flyway to your project: 1. [Download the archive file](https://flywaydb.org/download/). 1. Extract the archive to disk. 1. Move the files into your project directory structure.
The Flyway download comes with everything it needs to execute, including a version of the Java Runtime Environment (JRE): :::figure ![Flyway included in a Visual Studio project](/docs/img/deployments/databases/mysql-flyway/images/visual-studio-code-add-flyway.png) ::: :::div{.hint} If Flyway doesn't find Java installed on the machine (detected by the presence of the JAVA_HOME environment variable), it will fall back to the included JRE. The included version of the JRE has the .exe and .dll files located within a `bin` subdirectory. Source control is often configured to ignore any directory named `bin`, so take care when committing a Flyway project to source control if you need the included JRE. ::: ## Add scripts to your Flyway project Within the Flyway directory structure is a directory called `sql`. This directory is where your scripts belong. To control the execution order, the [documentation](https://flywaydb.org/documentation/) states the files must be named in a specific way. Flyway is capable of doing versioned migrations, undo migrations, and repeatable migrations. All script files follow this naming structure: - Prefix: V for versioned, U for undo, and R for repeatable (this guide will focus on versioned migrations). - Version: Numbers with dots or underscores as separators. - Separator: Two underscores. - Description: A meaningful name with underscores or spaces to separate the words. - Suffix: Usually `.sql`. Example filenames are: - `V1__initDB.sql` - `V1_1__populateDb.sql` - `V1.1__populateDb.sql` ## Execute a migration Flyway is a command-line utility that was originally designed to be cross-platform, so the downloadable archive will work on either Windows or Linux. For Windows, the `flyway.cmd` file is used when executing. For Linux, the file `flyway` is a Bash script for execution. Both OS methods use the same arguments for deployment. ## Including Flyway in your build Flyway itself is already compiled, so there's no need to do anything for building.
However, it can still be included in a build process to package it up for deployment with Octopus Deploy. This guide uses Jenkins as the build platform. ## Add a package step Within a Jenkins project, navigate to **Build Environment**, and in the **Build** section, click **Add Build Step** and choose **Octopus Deploy Package application**. :::div{.hint} The [Octopus Deploy Jenkins plugin](/docs/packaging-applications/build-servers/jenkins/#install-the-octopus-jenkins-plugin) needs to be installed to use these templates. You also need to download the [Octopus CLI](/docs/octopus-rest-api/octopus-cli) on to the Jenkins build agent(s). ::: Fill in the inputs: - Package ID: A unique name for this package like `petclinic.mysql.flyway`. - Version Number: The unique version number for this package. - Package format: Zip or nuget. - Package base directory: `${WORKSPACE}\flyway`. - Package include paths: - Package output directory: `${WORKSPACE}`. ### Jenkins build number formatting To configure Jenkins to produce build numbers in a format like yyyy.mm.dd.hhmmss (2020.03.25.145344), install the following plugins: - Build Name and Description Setter. - Date Parameter Plugin. Once the plugins are installed, configure your Jenkins project to be parameterized by navigating to the **General** tab and checking the `This project is parameterized` checkbox. 
Then use the Date parameter to create some parameters: - Date parameter - **Name**: Year - **Date Format**: yyyy - **Default Value**: LocalDate.now(); - Date parameter - **Name**: Day - **Date Format**: dd - **Default Value**: LocalDate.now(); - Date parameter - **Name**: Month - **Date Format**: MM - **Default Value**: LocalDate.now(); :::figure ![An image showing the Jenkins' date parameters](/docs/img/deployments/databases/mysql-flyway/images/jenkins-build-date-parameters.png) ::: Lastly, set the build name in the **Build Environment** section, by checking the `Set Build Name` checkbox and adding the build name, for instance: `${Year}.${Month}.${Day}.${Time}` ## Add a push step Add an Octopus Deploy Push step to your build by navigating to the **Build** tab, click the **Add build step** drop-down list and select **Octopus Deploy: Push packages**, and complete the following fields: - **Octopus Deploy Server**: The values for the drop-down for this come from the Jenkins server configuration. To configure this, navigate to **Jenkins home screen ➜ Manage Jenkins ➜ Configure System**, and then scroll down to the **Octopus Deploy Plugin** section: - **Space**: Select the space to deploy to. You can leave this blank for the Default space - **Package paths**: `/*.nupkg` - **Overwrite mode**: Fail if exists. Those are the only two steps that are needed to package and push a Flyway project to Octopus Deploy. After saving, click on **Build with Parameters**. The generated Date parameters will display. Click **Build** to continue: :::figure ![The generated date parameters](/docs/img/deployments/databases/mysql-flyway/images/jenkins-build-parameters.png) ::: When the build is complete, you should have something like this: :::figure ![Jenkins console output](/docs/img/deployments/databases/mysql-flyway/images/jenkins-build-success.png) ::: Now that the build is complete, it's time to configure the Octopus Deploy project. 
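As a side note, the date-based build number format shown above (for example `2020.03.25.145344`) is straightforward to reproduce in a script if you ever need matching version numbers outside Jenkins. This is an illustrative sketch, not part of the plugin setup:

```python
from datetime import datetime

def build_number(now=None):
    """Format a timestamp as yyyy.MM.dd.HHmmss, e.g. 2020.03.25.145344."""
    now = now or datetime.now()
    return now.strftime("%Y.%m.%d.%H%M%S")

print(build_number(datetime(2020, 3, 25, 14, 53, 44)))
# → 2020.03.25.145344
```

Because the segments run from most to least significant, these version numbers sort chronologically.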
## Octopus Deploy From the Octopus Web Portal, navigate to the **Projects** tab: :::figure ![The Octopus project tab](/docs/img/deployments/databases/mysql-flyway/images/octopus-projects.png) ::: Select the **Project Group** and click the **ADD PROJECT** button. Give the project a unique name, a description, select the **Project Group** and the **Lifecycle**. If you've clicked on the **ADD PROJECT** button on a specific project group, this selection will be pre-populated. ### Variables In the new project, click **Variables** to configure the following variables: - `Project.MySql.Database.Name`: The name of the database. - `Project.MySql.Database.Server.Name`: The name or IP address of the database server. - `Project.MySql.Database.Server.Port`: The port that MySql is listening on. - `Project.MySql.Database.Admin.User.Name`: The user account with elevated permissions on the database. - `Project.MySql.Database.Admin.User.Password`: The password for the user account. - `Project.MySql.ConnectionString`: `jdbc:mysql://#{Project.MySql.Database.Server.Name}:#{Project.MySql.Database.Server.Port}/#{Project.MySql.Database.Name}?useUnicode=true`. :::figure ![Variables defined in the Octopus Web Portal](/docs/img/deployments/databases/mysql-flyway/images/octopus-project-variables-defined.png) ::: ### Deployment process With variables defined, we can use them in the deployment process. Click on the **Process** tab, and **ADD STEP**. Filter the steps by entering `flyway` into the search box. #### Flyway info from a referenced package This template will compare the scripts in the scripts directory against the ones that have already been run and display the status of each script using a package parameter. This template is available for both PowerShell and Bash. #### Flyway migrate This template performs the Flyway migrate command and applies any scripts that haven't been run to the database and records which ones were applied so they won't be run again. 
It also includes the ability to run Redgate SQLCompare to run a drift check. This template is available for both PowerShell and Bash. #### Flyway migrate from a referenced package This template is similar to the Flyway migrate step but uses a package parameter instead of a feed ID and package ID. This is only available in PowerShell at this time. ## Configure the step Choose the **Flyway Info from a Referenced Package** for whichever OS you intend to deploy. This guide uses the Bash version for use with Linux Tentacles: Fill in the fields: - **Relative path to flyway.cmd (optional)**: Use if your flyway bash file isn't within the root of the package. - **Locations (relative path, optional)**: Use if your `sql` directory is not off the root directory. - **Target -url (required)**: Connection string to MySql - `#{Project.MySql.ConnectionString}`. - **Target -user (required)**: User account with elevated rights - `#{Project.MySql.Database.Admin.User.Name}`. - **Target -password (required)**: Password for the user account - `#{Project.MySql.Database.Admin.User.Password}`. - **Flyway package**: The package for deployment. Add a `Manual Intervention` step and scope it to the **Production** environment. This will pause the deployment so you can review what will be executed and determine whether or not to proceed when deploying to **Production**. :::figure ![A manual intervention step in Octopus Deploy](/docs/img/deployments/databases/mysql-flyway/images/octopus-project-manual-intervention.png) ::: Add the **Flyway Migrate** step. The fields for this are identical to the **Flyway Info** step that was added previously: - **Relative path to flyway.cmd (optional)**: Use if your flyway bash file isn't within the root of the package. - **Locations (relative path, optional)**: Use if your `sql` directory is not off the root directory. - **Target -url (required)**: Connection string to MySql - `#{Project.MySql.ConnectionString}`. 
- **Target -user (required)**: User account with elevated rights - `#{Project.MySql.Database.Admin.User.Name}`. - **Target -password (required)**: Password for the user account - `#{Project.MySql.Database.Admin.User.Password}`. - **Run pre-deploy drift check**: Used if you have Redgate SQLCompare. - **Path to Redgate comparison tool (required for drift-check)**: Path to the SQLCompare executable. - **Shadow -url (required for drift-check)**: Connection string to shadow database. - **Shadow -user (required for drift-check)**: Shadow database user account. - **Shadow -password (required for drift-check)**: Password for shadow database user. - **Flyway package**: The package to deploy. When complete, the deployment process will look like this: :::figure ![The complete deployment process in Octopus Deploy](/docs/img/deployments/databases/mysql-flyway/images/octopus-project-process.png) ::: ### Creating the release With the deployment process defined, the project can create a release for deployment. Click **CREATE RELEASE** and click **SAVE**. With the release created, click **DEPLOY TO...** and select the environment, then click **DEPLOY**. ### Troubleshooting If you receive an error message like the following:

```
/etc/octopus/default/Work/20200326224917-19880-127/FlyWayPackage/flyway: line 17: $'\r': command not found
/etc/octopus/default/Work/20200326224917-19880-127/FlyWayPackage/flyway: line 20: syntax error near unexpected token `$'in\r''
/etc/octopus/default/Work/20200326224917-19880-127/FlyWayPackage/flyway: line 20: ` case "`uname`" in
```

Your build server has converted line endings from LF to CRLF. This typically happens on Windows-based build servers.
Workarounds are: - Run the following command on your build agent `git config --global core.eol lf` - Set the `text eol=lf` setting within the `.gitattributes` of the git repo # Google Cloud Source: https://octopus.com/docs/deployments/google-cloud.md Google Cloud Platform (GCP) is a leading provider of cloud computing services and infrastructure, including hosted virtual machines, Kubernetes clusters, and serverless environments. Building and shipping systems to Google cloud has its challenges. Different teams have different processes and there's a raft of application and infrastructure artifacts to manage. :::figure ![Google Cloud Platform accounts in Octopus](/docs/img/deployments/google-cloud/centralized-google-cloud-accounts.png) ::: Octopus makes it easier to ship to Google cloud by helping you to: * Connect and authenticate with GCP via a [dedicated account type](/docs/infrastructure/accounts/google-cloud). This allows you to centralize and secure your GCP authentication and use it in your deployments and runbooks. * Use [gcloud](https://cloud.google.com/sdk/gcloud), the GCP command-line tool, in custom scripts out-of-the-box with the [**Run gcloud in a Script** step](/docs/deployments/google-cloud/run-gcloud-script). This step can be used to execute scripts on targets within Google Cloud Platform. * Create and tear down GCP infrastructure with [Terraform](/docs/deployments/terraform). * Access Docker images hosted with [Google Container Registry (GCR)](/docs/packaging-applications/package-repositories/guides/container-registries/google-container-registry). * Deploy, scale and manage containerized applications on GCP with Octopus and [Kubernetes](/docs/deployments/kubernetes). :::div{.hint} **Where do Google cloud Steps execute?** All Google cloud steps execute on a worker. By default, that will be the built-in worker in the Octopus Server. Learn about [workers](/docs/infrastructure/workers) and the different configuration options. 
::: ## Learn more - How to use the [Run gcloud in a Script](/docs/deployments/google-cloud/run-gcloud-script) step - How to create [Google cloud accounts](/docs/infrastructure/accounts/google-cloud) - [Google cloud blog posts](https://octopus.com/blog/search?q=google) # Planning changes made by Terraform templates Source: https://octopus.com/docs/deployments/terraform/plan-terraform.md The Terraform [plan command](https://www.terraform.io/cli/commands/plan) is used to identify changes that would be executed if a template was applied or destroyed. This information is useful to confirm the intended changes before they are executed. Octopus has two steps that generate plan information: - `Plan to apply a Terraform template` and - `Plan a Terraform destroy` As their names suggest, the `Plan to apply a Terraform template` step will generate a plan for the result of running `apply` on the template, while the `Plan a Terraform destroy` step will generate a plan for the result of running `destroy` on the template. :::figure ![Octopus Steps](/docs/img/deployments/terraform/plan-terraform/images/octopus-terraform-plan-step.png) ::: ## Step options The planning steps offer the [same base configuration as the other built-in Terraform steps](/docs/deployments/terraform/working-with-built-in-steps). You can refer to the documentation for those steps for more details on the options for the plan steps. :::div{.warning} The plan steps do not support saving the plan to a file and applying that file at a later date. This means the plan information only makes sense when the same values are used in the plan and apply/destroy steps. Configuring shared variables for the step fields ensures that the same values will be used. ::: ## Plan output format Terraform planning steps can output the plan details in either plain text or JSON. 
### Plain text output When a plan step is run, the output will include a line that looks like this: ``` Saving variable "Octopus.Action[Plan Apply].Output.TerraformPlanOutput" with the details of the plan ``` This log message indicates the output variable that was created with the plan text (the name of the step, `Plan Apply` in this case, will reflect the name you assigned to the plan step). ### JSON output Selecting the **JSON output** option configures Terraform to generate JSON output for any planning steps. Each JSON blob is captured in a variable like `Octopus.Action[Plan Apply].Output.TerraformPlanLine[#].JSON`, with `#` replaced by a number. This variable format can be used with Octostache loops: ```powershell #{each output in Octopus.Action[Plan Apply].Output.TerraformPlanLine} Write-Host 'JSON Output line #{output}: #{output.JSON}' #{/each} ``` The resource change counts are captured in the following variables: * `Octopus.Action[Plan Apply].Output.TerraformPlanJsonAdd` * `Octopus.Action[Plan Apply].Output.TerraformPlanJsonRemove` * `Octopus.Action[Plan Apply].Output.TerraformPlanJsonChange` ## Manual intervention The result of a plan will typically be displayed in a Manual Intervention step. Because the plan text can contain markdown characters, the variable should be wrapped in backticks to display it verbatim. ```` ``` #{Octopus.Action[Plan Apply].Output.TerraformPlanOutput} ``` ```` :::figure ![Terraform manual intervention](/docs/img/deployments/terraform/plan-terraform/images/terraform-manual-intervention.png) ::: When run as part of a deployment, the plan output will be displayed like the image below. :::figure ![Manual Intervention Message](/docs/img/deployments/terraform/plan-terraform/images/manual-intervention-message.png) ::: ## Advanced options section You can optionally control how Terraform downloads plugins and where the plugins will be located in the `Advanced Options` section. 
- The `Terraform workspace` field can optionally be set to the desired workspace. If the workspace does not exist it will be created and selected; if it does exist it will be selected. - The `Terraform plugin cache directory` can optionally be set to a directory where Terraform will look for existing plugins, and optionally download new plugins into. By default, this directory is not shared between targets, so additional plugins have to be downloaded by all targets. By setting this value to a shared location, the plugins can be downloaded once and shared amongst all targets. - The `Allow additional plugin downloads` option can be checked to allow Terraform to download missing plugins, and unchecked to prevent these downloads. - The `Custom terraform init parameters` option can optionally be set to include any parameters to pass to the `terraform init` action. - The `Custom terraform plan parameters` option can optionally be set to include any parameters to pass to the `terraform plan` action. ![Terraform Advanced Options](/docs/img/deployments/terraform/images/terraform-advanced.png) # Defining the deployment process in Octopus Source: https://octopus.com/docs/getting-started/first-deployment/legacy-guide/2022/define-the-deployment-process.md [Getting Started - Deployment Process](https://www.youtube.com/watch?v=0oWRg_TxWxM) The deployment process is the series of steps the Octopus Server orchestrates to deploy your software. For our simple hello world script, we will only have one step. :::figure ![The Hello world deployment process](/docs/img/getting-started/first-deployment/legacy-guide/images/deployment-process.png) ::: 1. From the *Hello world* project you created on the previous page, click **DEFINE YOUR DEPLOYMENT PROCESS**. 1. Click **ADD STEP**. 1. Select the **Script** tile to filter the types of steps. 1. Scroll down and click **ADD** on the **Run a Script** tile. 1. Accept the default name for the script and leave the **Enabled** check-box ticked. 1. 
In the **Execution Location** section, select **Run once on a worker** (if you are on self-hosted Octopus, select **Run once on the Octopus Server**). If you are using Octopus Cloud and want to use Bash scripts, change the worker pool from **Default Worker Pool** to **Hosted Ubuntu**. 1. Scroll down to the **Script** section, select your script language of choice, and enter the following script in the **Inline Source Code** section:
PowerShell ```powershell Write-Host "Hello, World!" ```
Bash ```bash echo "Hello, World!" ```
:::div{.hint} If you are using Octopus Cloud, Bash scripts require you to select the **Hosted Ubuntu** worker pool. The **Default Worker Pool** is running Windows and doesn't have Bash installed. ::: 8. Click **SAVE**. The next step will [create a release and deploy it](/docs/getting-started/first-deployment/legacy-guide/2022/create-and-deploy-a-release). **Further Reading** For further reading on defining a deployment process in Octopus Deploy, please see: - [Deployment Process Documentation](/docs/projects/deployment-process) - [Deployment Documentation](/docs/deployments) - [Patterns and Practices](/docs/deployments/patterns) # Create a runbook Source: https://octopus.com/docs/getting-started/first-runbook-run/create-a-runbook.md A single Octopus Deploy Project can have multiple Runbooks. Each Runbook has a unique runbook process, retention policy, and allowable environments to run in. For example, a project might have a runbook to spin up additional infrastructure, restart the server, or perform a daily backup. :::figure ![example runbook](/docs/img/getting-started/first-runbook-run/images/runbook-overview.png) ::: 1. From the *Hello world* project you created on the previous page, click **OPERATIONS** on the left menu to expand it (if it is not already expanded). 1. Click **GO TO RUNBOOKS**. 1. Click **ADD RUNBOOK**. 1. Give the Runbook a name, for example, *Hello Runbook* and click **SAVE**. The next step will [define a simple runbook process](/docs/getting-started/first-runbook-run/define-the-runbook-process) to run on either the Octopus Server or a worker (if you are using Octopus Cloud). **Further Reading** For further reading on Runbooks, please see: - [Runbook Documentation](/docs/runbooks) - [Runbook Examples](/docs/runbooks/runbook-examples) # Accounts Source: https://octopus.com/docs/infrastructure/accounts.md You may need to configure accounts to use in conjunction with your infrastructure during your deployments. 
You can configure the following accounts: - [Azure accounts](/docs/infrastructure/accounts/azure) - [AWS accounts](/docs/infrastructure/accounts/aws) - [Google cloud accounts](/docs/infrastructure/accounts/google-cloud) - [OpenID Connect](/docs/infrastructure/accounts/openid-connect) - [SSH Key Pairs](/docs/infrastructure/accounts/ssh-key-pair) - [Tokens](/docs/infrastructure/accounts/tokens) - [Username and Password accounts](/docs/infrastructure/accounts/username-and-password) # Google cloud accounts Source: https://octopus.com/docs/infrastructure/accounts/google-cloud.md :::div{.hint} Google Cloud Accounts were added in Octopus **2021.2**, and Generic OpenId Connect Accounts were added in **2025.1**. ::: To deploy infrastructure to Google Cloud Platform, you can define a Google cloud or Generic OpenId Connect account in Octopus. The Generic OpenId Connect Account generates a JWT that can be used for [OpenID Connect](/docs/infrastructure/accounts/openid-connect) authentication. The Google cloud account uses the JSON key file credentials that can be retrieved from the service account assigned to the instance that is executing the deployment. ## Generic OpenId Connect Account Google Cloud steps can use a Generic OpenId Connect Account for authentication. 1. Navigate to **Deploy ➜ Manage ➜ Accounts**, click **ADD ACCOUNT**, and select **Generic Oidc Account**. 1. Add a memorable name for the account. 1. Set the [Deployments and Runbooks](/docs/infrastructure/accounts/openid-connect#subject-key-parts) subject generator. 1. Set an audience. This should match the audience set on the Workload Identity Federation. By default, this is `https://iam.googleapis.com/projects/{project-id}/locations/global/workloadIdentityPools/{pool-id}/providers/{provider-id}` 1. Click **SAVE**. To test the account, set it as the account on a gcloud script step. 
See the [Google cloud documentation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers) for instructions on creating and configuring a Workload Identity Federation. Behind the scenes, Octopus calls the gcloud CLI with the following command to authenticate:

```bash
gcloud iam workload-identity-pools create-cred-config \
    AUDIENCE \
    --service-account=SERVICE_ACCOUNT_EMAIL \
    --service-account-token-lifetime-seconds=3600 \
    --output-file=OUTPUT_FILE \
    --credential-source-file=CREDENTIAL_SOURCE_FILE \
    --credential-source-type=text \
    --subject-token-type=urn:ietf:params:oauth:token-type:jwt \
    --app-id-uri=APP_ID_URI
```

:::div{.hint} The default audience format is `https://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID` while the `workload-identity-pools create-cred-config` command expects the audience without `https://iam.googleapis.com`. In this scenario Octopus expects the full audience value to be set on the account including `https://iam.googleapis.com`, but will trim the `https://iam.googleapis.com` prefix when running the create-cred-config command. ::: ## Create a Google cloud account Google Cloud steps can use a Google Cloud Account for authentication. 1. Navigate to **Deploy ➜ Manage ➜ Accounts**, click **ADD ACCOUNT**, and select **Google Cloud Account**. 1. Add a memorable name for the account. 1. Provide a description for the account. 1. Upload the JSON key file. See the [Google cloud documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) for instructions to create a service account and download the key file. 5. Click **SAVE AND TEST** to save the account and verify the credentials are valid. :::div{.hint} Google Cloud steps can also defer to the service account assigned to the instance/virtual machine that hosts the Octopus Tentacles for authentication. In this scenario there is no need to create a Google Cloud account in Octopus Deploy. 
::: ## Google cloud account variables You can access your Google cloud account from within projects through a variable of type **Google Cloud Account Variable**. Learn more about [Google Cloud Account Variables](/docs/projects/variables/google-cloud-account-variables) ## Learn more - How to use the [Run gcloud in a Script](/docs/deployments/google-cloud/run-gcloud-script) step # Create Azure Service Fabric target command Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/azure-service-fabric-target.md ## Azure Service Fabric Command: **_New-OctopusAzureServiceFabricTarget_** | Parameter | Value | | ----------------------------------- | ------------------------------------------------- | | `-name` | Name for the Octopus deployment target | | `-azureConnectionEndpoint` | Connection endpoint for the Service Fabric Cluster | | `-azureSecurityMode` | Security mode, use one of the aliases in the table below | | `-azureCertificateThumbprint` | Certificate thumbprint of the Azure Certificate | | `-azureActiveDirectoryUsername` | Username for accessing the Service Fabric Cluster | | `-azureActiveDirectoryPassword` | Password for accessing the Service Fabric Cluster | | `-certificateStoreLocation` | (Optional) Override the default certificate store location | | `-certificateStoreName` | (Optional) Override the default certificate store name | | `-octopusCertificateIdOrName` | Name or Id of the Certificate Resource in Octopus | | `-octopusRoles` | Comma separated list of [target tags](/docs/infrastructure/deployment-targets/target-tags) to assign | | `-updateIfExisting` | Will update an existing Service Fabric target with the same name, create if it doesn't exist | | `-octopusDefaultWorkerPoolIdOrName` | Name or Id of the Worker Pool for the deployment target to use. (Optional). Added in 2020.6.0. 
| _Security Mode Options_ | Mode | Aliases | | --- | --- | | Unsecure | `unsecure` | | Secure Client Certificate | `certificate` `clientcertificate` `secureclientcertificate` | | Secure Azure Active Directory | `aad` `azureactivedirectory`| Examples: ```powershell # Unsecure New-OctopusAzureServiceFabricTarget -name "My Service Fabric Target 1" ` -azureConnectionEndpoint "connectionEndpoint" ` -azureSecurityMode "unsecure" ` -octopusRoles "ServiceFabricTargetTag" ` -updateIfExisting # Client Certificate New-OctopusAzureServiceFabricTarget -name "My Service Fabric Target 2" ` -azureConnectionEndpoint "connectionEndpoint" ` -azureSecurityMode "certificate" ` -azureCertificateThumbprint "1234567890" ` -octopusCertificateIdOrName "My Service Fabric Certificate" ` -octopusRoles "ServiceFabricTargetTag" # Client Certificate overriding certificate store New-OctopusAzureServiceFabricTarget -name "My Service Fabric Target 3" ` -azureConnectionEndpoint "https://localhost" ` -azureSecurityMode "certificate" ` -azureCertificateThumbprint "1234" ` -certificateStoreLocation "Custom Store Location" ` -certificateStoreName "My Store Name" ` -octopusCertificateIdOrName "cert" ` -octopusRoles "ServiceFabricTargetTag" # Azure Active Directory New-OctopusAzureServiceFabricTarget -name "My Service Fabric Target 4" ` -azureConnectionEndpoint "connectionEndpoint" ` -azureSecurityMode "azureactivedirectory" ` -azureCertificateThumbprint "1234567890" ` -octopusCertificateIdOrName "cert" ` -octopusRoles "ServiceFabricTargetTag" ``` :::div{.hint} If your process creates dynamic deployment targets from a script, and then deploys to those targets in a subsequent step, make sure you add a full [health check](/docs/projects/built-in-step-templates/health-check) step for the role of the newly created targets after the step that creates and registers the targets. 
This allows Octopus to ensure the new targets are ready for deployment by staging packages required by subsequent steps that perform the deployment. ::: # Linux targets Source: https://octopus.com/docs/infrastructure/deployment-targets/linux.md Linux servers can be configured as [deployment targets](/docs/infrastructure/deployment-targets) in Octopus. The Octopus Server can communicate with Linux targets in two ways: - Using the [Linux Tentacle](/docs/infrastructure/deployment-targets/tentacle/linux). - Over SSH using an [SSH target](/docs/infrastructure/deployment-targets/linux/ssh-target). When using SSH for deployments to a Linux server, the Tentacle agent is not required and doesn't need to be installed. :::div{.success} The Linux Tentacle is the recommended way to configure your server as a deployment target. This allows you to secure the SSH port on your servers. If you operate in a highly secure environment, where it's not possible to open an inbound TCP port for Tentacle (`10933` by default), you can configure the Linux Tentacle in [Polling mode](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles). ::: ## Dependencies - The `$HOME` environment variable must be available. - `bash` 3+ is available at `/bin/bash`. - `tar` is available. This is used to unpack Calamari. - `base64` is available. This is used for encoding and decoding variables. - `grep` is available. There are additional dependency requirements to be aware of for both [SSH targets](/docs/infrastructure/deployment-targets/linux/ssh-requirements) and [Linux Tentacle](/docs/infrastructure/deployment-targets/tentacle/linux/#requirements). These dependencies are not required if exclusively using [Raw Scripts](https://octopus.com/docs/deployments/custom-scripts/raw-scripting). 
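As a sketch, the dependencies above can be checked on a prospective target with a short script, run as the same user account the Tentacle or SSH target will use:

```bash
# Check the binaries Octopus needs to bootstrap Calamari on a Linux target.
for cmd in bash tar base64 grep; do
  if ! command -v "$cmd" >/dev/null 2>&1; then
    echo "missing dependency: $cmd"
    exit 1
  fi
done

# $HOME must be set, since Octopus works relative to the user's home directory.
if [ -z "$HOME" ]; then
  echo "the HOME environment variable is not set"
  exit 1
fi

echo "all dependencies present"
```

If the script reports a missing dependency, install it from your distribution's package manager before registering the machine.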
## Supported distributions Since the tooling used to invoke Octopus workloads is based on .NET 8, Octopus Server supports running workloads on the following distributions as per the [.NET 8 supported platform details](https://github.com/dotnet/core/blob/main/release-notes/8.0/supported-os.md#linux). | OS | Versions | | ------------------------------------------------------ | ------------------- | | [CentOS Stream](https://centos.org/) | 10, 9 | | [Debian](https://www.debian.org/) | 13, 12 | | [Fedora](https://fedoraproject.org/) | 43, 42 | | [openSUSE Leap](https://www.opensuse.org/) | 16.0, 15.6 | | [Red Hat Enterprise Linux](https://access.redhat.com/) | 10, 9, 8 | | [SUSE Enterprise Linux](https://www.suse.com/) | 16.0, 15.7 | | [Ubuntu](https://ubuntu.com/) | 25.10, 24.04, 22.04 | Although the tooling requires the platform to support .NET Core, since it runs as a self-contained .NET deployment there is no .NET *installation* prerequisite. In addition to the .NET 8 requirement, Octopus will only support those Operating Systems which are still considered supported by the platform vendors. ## Learn more - [Linux blog posts](https://octopus.com/blog/tag/linux/1) # SSH deployments Source: https://octopus.com/docs/infrastructure/deployment-targets/linux/ssh-deployments.md Below are some details of how deployments are performed to SSH targets. ## Features The vast majority of Octopus features are supported when deploying to SSH targets. Keep in mind the platform differences such as file paths or separators, line breaks, environment variables, and security considerations. Windows-specific features such as IIS and Windows Services are not supported when deploying to non-Windows targets. ## Scripts You can execute scripts using almost any installed scripting runtime. Learn about what you can do with [custom scripts](/docs/deployments/custom-scripts). 
:::div{.hint} **Environment Variable Differences** If you are writing a cross-platform script, be aware of the differences between environment variables for each platform. For example, the Windows-based variable `env:USERNAME` roughly correlates to `env:USER` on an Ubuntu machine; however, `env:ProgramFiles(x86)` has no equivalent. ::: :::div{.hint} **Bash (and other shell) variables** Octopus Deploy will log into the SSH target via a non-interactive shell. Because of this, startup files like `.bashrc` are not fully evaluated. If you are referencing bash variables `export`ed in these files, you should move them before the following common code block at the top of the file: ```bash # If not running interactively, don't do anything case $- in *i*) ;; *) return;; esac ``` This will ensure that they are evaluated on non-interactive logins. ::: ### Example: Using variables in Bash Your script can use a [variable value](/docs/projects/variables) by invoking the `get_octopusvariable` function. For example, to echo out the installation directory, call: ```bash echo "Installed to step: " $(get_octopusvariable "Octopus.Action[Acme Deployment].Output.Package.InstallationDirectoryPath") ``` You can also set an [output variable](/docs/projects/variables/output-variables): ```bash set_octopusvariable RandomNumber 3 ``` ### Example: Collecting an artifact Your script can tell Octopus to collect a file and store it as a [deployment artifact](/docs/projects/deployment-process/artifacts): ```bash new_octopusartifact "./subdir/another_dir/my_file" ``` which results in the server retrieving that file at the end of that step. Keep in mind that this means the file must be accessible over SFTP using the same credentials as those used during execution. ## Transport The package and any supporting deployment files are uploaded via SFTP. 
## File footprint - The root directory for all Octopus work is `$HOME/.octopus` - All packages are deployed to a relative location at `$HOME/.octopus/Applications/#{instance}/#{environment}/#{package}/#{version}`. - When Calamari is copied across by a deployment it is extracted to `$HOME/.octopus/#{instance}/Calamari/#{version}`. By making all paths relative to the user's home directory, you can then theoretically use the same physical machine with multiple user accounts acting as separate targets. The Octopus Server can then treat each machine\user as a separate SSH endpoint which will update Calamari and deploy independently of each other. ## Package acquisition Leveraging Calamari means that the deployment can obtain the package via the same methods as a target running the Tentacle agent; either pushed from the server or directly from a supported [external repository](/docs/packaging-applications/package-repositories). There is therefore no bottleneck in acquisition if there are multiple SSH endpoints all trying to retrieve the same package independently. ## Calamari Calamari is the tool Octopus uses to execute deployments on a remote computer. Before any processing begins, we do an initial check to ensure the available Calamari executable on the endpoint is up to date with the server. If not, we push up the latest Calamari package and then recommence the task. The Calamari package is sent as a `.tar.gz` so it can be extracted with minimal dependencies. This means the target machine needs to be able to un-tar that package; however, `tar` should be available by default in most distributions. ## Learn more - [Linux blog posts](https://octopus.com/blog/tag/linux/1) - [Node.js sample](/docs/deployments/node-js/node-on-linux) # Tentacle communication modes Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/tentacle-communication.md Octopus and Tentacles can be configured to communicate in two different ways depending on your network setup. 
The mode you are using will change the installation process slightly. ## Listening Tentacles (recommended) In **listening** mode, Tentacle will *listen* on a TCP port (**10933** by default). When a package needs to be deployed, Octopus connects to the Tentacle service on that port. In listening mode Tentacle is the TCP server, and Octopus is the TCP client. :::figure ![Octopus to Listening Tentacle communication](/docs/img/infrastructure/deployment-targets/tentacle/images/listening-tentacle.png) ::: When choosing a communication mode, we recommend Listening mode when possible. Listening mode uses the least resources (listening on a TCP port is cheaper than actively trying to connect to one). It also gives you the most control (you can use rules in your firewall to limit which IP addresses can connect to the port). [Octopus and Tentacle use SSL when communicating](/docs/security/octopus-tentacle-communication), and Tentacle will outright reject connections that aren't from an Octopus Server that it trusts, identified by an X.509 certificate public key that you provide during setup. To install and configure Tentacles in listening mode, see either: - The [Windows Listening Tentacle installation docs](/docs/infrastructure/deployment-targets/tentacle/windows/#configure-a-listening-tentacle-recommended). - The [Linux Tentacle Automation scripts](/docs/infrastructure/deployment-targets/tentacle/linux/#automation-scripts), selecting the tab for either a Listening deployment target or worker for your Linux distro. ## Polling Tentacles In **polling** mode, Tentacle will *poll* the Octopus Server periodically, connecting over a TCP port (**10943** by default) to check if there are any tasks for it to perform. Polling mode is the opposite of **Listening mode**. For self-hosted, the port Octopus Server uses can be [changed from the command line](/docs/octopus-rest-api/octopus.server.exe-command-line/configure/) using the `--commsListenPort` option. 
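For example, on a self-hosted instance the change might look like this (a sketch; the instance name and the port value `443` are assumptions for illustration, not values you must use):

```powershell
Octopus.Server.exe configure --instance "OctopusServer" --commsListenPort=443
```

After changing the port, polling Tentacles need to be configured to poll the new address.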
For [Octopus Cloud](/docs/octopus-cloud/), port 443 can be specified when [registering the Tentacle with the command line](/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443) using the `--server-comms-address` option. In polling mode, Octopus is the TCP server, and Tentacle is the TCP client. :::figure ![Polling Tentacle to Octopus communication](/docs/img/infrastructure/deployment-targets/tentacle/images/polling-tentacle.png) ::: The advantage of Polling mode is that you don't need to make any firewall changes on the Tentacle side; you only need to allow access to a port on the Octopus Server. The disadvantage is that it uses more resources on the Tentacle side, since Tentacle needs to poll periodically even if there aren't any jobs for it to perform. Polling mode is good for scenarios that involve Tentacles being behind NAT or a dynamic IP address. A good example might be servers at branch offices or a chain of retail stores, where the IP address of each server running Tentacle changes. To install and configure Tentacles in polling mode, see either: - The [Windows Polling Tentacle installation docs](/docs/infrastructure/deployment-targets/tentacle/windows#configure-a-polling-tentacle). - The [Linux Tentacle Automation scripts](/docs/infrastructure/deployment-targets/tentacle/linux#automation-scripts), selecting the tab for either a Polling deployment target or worker for your Linux distro. ## SSL offloading is not supported The communication protocol used by Octopus and Tentacle requires an intact end-to-end TLS connection for message encryption, tamper-proofing, and authentication. For this reason, SSL offloading is not supported. ## Proxy servers supported for Tentacle communications The communication protocol used by Octopus and Tentacle 3.4 and above supports proxies. Read more about configuring proxy servers for Tentacle communications in [proxy support](/docs/infrastructure/deployment-targets/proxy-support). 
# Workers Source: https://octopus.com/docs/infrastructure/workers.md [Getting Started - Worker and Worker Pools](https://www.youtube.com/watch?v=v6621BId7fE) Workers are machines that can execute tasks that don't need to be run on the Octopus Server or individual deployment targets. You can manage your workers by navigating to **Infrastructure ➜ Worker Pools** in the Octopus Web Portal: :::figure ![The Worker Pools area of Octopus Deploy](/docs/img/shared-content/concepts/images/worker-pools.png) ::: Workers are useful for the following scenarios: - Publishing to Azure websites. - Deploying AWS CloudFormation templates. - Deploying to AWS Elastic Beanstalk. - Uploading files to Amazon S3. - Backing up databases. - Performing database schema migrations. - Configuring load balancers. ![Workers diagram](/docs/img/shared-content/concepts/images/workers-diagram-img.png) ## Where steps run \{#where-steps-run} The following step types and configurations run on a worker: - Any step that runs a script (usually user supplied) or has a package that has an execution location of `Octopus Server`, `Octopus Server on behalf of target tags`, `Worker Pool` or `Worker Pool on behalf of target tags`. - Any steps that run on a Cloud Region, an Azure Target, or any target that isn't a Tentacle, an SSH Target, or an Offline Drop. - All AWS, Terraform, and Azure steps. The following steps always run inside the Octopus Server process (and do not run user-supplied code): - Email - Manual intervention - Import certificate When a worker receives an instruction from the Octopus Server to execute a step, it executes the step using Calamari and returns the logs and any collected artifacts to the Octopus Server. :::div{.hint} Workers are assigned at the start of a deployment or runbook, not at the time the individual step executes. ::: There are three kinds of workers you can use in Octopus: 1. [The built-in worker](#built-in-worker) - available on self-hosted Octopus 1. 
[Dynamic workers](#dynamic-workers) - available on Octopus Cloud 1. [External workers](#external-workers) - manually configured :::div{.success} [Octopus Cloud](/docs/octopus-cloud) uses [dynamic workers](/docs/infrastructure/workers/dynamic-worker-pools) by default, which provides an on-demand worker running on an Ubuntu or Windows VM. Dynamic workers are managed by Octopus Cloud, and are included with your Octopus Cloud subscription. ::: ## Ignoring Workers \{#ignoring-workers} Octopus works out-of-the-box without setting up workers. You can run all deployment processes, run script steps on the built-in worker, deploy to Azure and run AWS and Terraform steps, without further setup. The built-in worker is available in a default Octopus setup, and Octopus workers are designed so that, if you aren't using external workers, none of your deployment processes need to be worker aware. The choices of built-in worker, built-in worker running in a separate account, and external workers enable you to harden your Octopus Server and scale your deployments. ## Migrating to Workers \{#migrating-to-workers} Octopus workers provide a way to move work off the built-in worker. This lets you move the work away from the Octopus Server and onto external workers without the need to update the deployment process. Learn [how to use the default worker pool to move steps off the Octopus Server](/docs/infrastructure/workers/worker-pools/#default-worker-pool). ## Built-in Worker \{#built-in-worker} The Octopus Server has a built-in worker that can deploy packages, execute scripts, and perform tasks that don't need to be performed on a deployment target. The built-in worker is configured by default; however, it can be disabled by navigating to **Configuration** and selecting **Disable** for the **Run steps on the Octopus Server** option. The **built-in worker** is executed on the same machine as the Octopus Server. 
When the built-in worker is needed to execute a step, the Octopus Server spawns a new process and runs the step using Calamari. The spawned process is either under the server's security context (default) or under a [context configured for the built-in worker](/docs/infrastructure/workers/built-in-worker/#Running-tasks-on-the-Octopus-Server-as-a-different-user). Adding a worker to the default worker pool will disable the built-in worker, and steps will no longer run on the Octopus Server. Learn about the security implications and how to configure the [built-in worker](/docs/infrastructure/workers/built-in-worker). :::div{.hint} The built-in worker is only available on [self-hosted Octopus](/docs/getting-started#self-hosted-octopus) instances. [Octopus Cloud](/docs/octopus-cloud) customers have access to [dynamic worker pools](/docs/infrastructure/workers/dynamic-worker-pools), which provide a pre-configured worker on-demand. ::: ## Dynamic Workers \{#dynamic-workers} **Dynamic workers** are on-demand workers managed by [Octopus Cloud](/docs/octopus-cloud), which means you don't need to configure or maintain additional infrastructure. Dynamic workers provide an Ubuntu or Windows VM running as a pre-configured tentacle worker. Dynamic worker pools are included with all Octopus Cloud instances, and are the default option when creating new worker steps in your deployments and runbooks. Learn more about configuring and using [dynamic worker pools](/docs/infrastructure/workers/dynamic-worker-pools) and selecting an OS image for your worker tasks. ## External Workers \{#external-workers} An **External Worker** is either: - A [Windows](/docs/infrastructure/deployment-targets/tentacle/windows/) or [Linux](/docs/infrastructure/deployment-targets/tentacle/linux) Tentacle. - An [SSH machine](/docs/infrastructure/deployment-targets/linux/ssh-target) that has been registered with the Octopus Server as a worker. 
- A [Kubernetes Worker](/docs/infrastructure/workers/kubernetes-worker) that has been installed in a Kubernetes cluster, and has self-registered with the Octopus Server.

The setup of a worker is the same as setting up a deployment target as a [Windows Tentacle target](/docs/infrastructure/deployment-targets/tentacle/windows/) or an [SSH target](/docs/infrastructure/deployment-targets/linux/ssh-target), except that instead of being added to an environment, a worker is added to a [worker pool](/docs/infrastructure/workers/worker-pools/).

Using external workers allows delegating work to a machine other than the Octopus Server. This can make the server more secure and allow scaling. When Octopus executes a step on an external worker, it's the external worker that executes Calamari; no user-provided script executes on the Octopus Server itself. Workers have machine policies, are health checked, and run Calamari, just like deployment targets.

:::div{.success}
[Octopus Cloud](/docs/octopus-cloud) customers can choose to use the included [dynamic worker pools](/docs/infrastructure/workers/dynamic-worker-pools) (enabled by default), and/or register their own external workers.
:::

## Registering an External Worker \{#registering-an-external-worker}

Once the Tentacle or SSH machine has been configured, workers can be added using the [Web Portal](#registering-workers-in-the-web-portal), the [Octopus Deploy REST API](/docs/octopus-rest-api/), the [Octopus.Client library](/docs/octopus-rest-api/octopus.client), or the Tentacle executable. Only a user with the `ConfigureServer` permission can add or edit workers.
### Registering Workers in the Octopus Web Portal \{#registering-workers-in-the-octopus-web-portal}

You can register workers from the Octopus Web Portal if they are a Windows or Linux [Listening Tentacle](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended) or an [SSH deployment target](/docs/infrastructure/deployment-targets/linux/ssh-target). You can choose between:

- [Registering a Windows Listening Tentacle as a Worker](#registering-windows-listening-worker).
- [Registering a Linux Listening Tentacle as a Worker](#registering-linux-listening-worker).
- [Registering an SSH deployment target as a Worker](#registering-ssh-connection-worker).
- [Installing a Kubernetes Worker](#installing-a-kubernetes-worker)

After you have saved the new worker, you can navigate to the worker pool you assigned the worker to, to view its status.

### Registering a Windows Listening Tentacle as a Worker \{#registering-windows-listening-worker}

Before you can configure your Windows servers as Tentacles, you need to install Tentacle Manager on the machines that you plan to use as Tentacles. Tentacle Manager is the Windows application that configures your Tentacle. Once installed, you can access it from your start menu/start screen. Tentacle Manager can configure Tentacles to use a [proxy](/docs/infrastructure/deployment-targets/proxy-support), delete the Tentacle, and show diagnostic information about the Tentacle.

1. Start the Tentacle installer, accept the license agreement, and follow the prompts.
1. When the Octopus Deploy Tentacle Setup Wizard has completed, click **Finish** to exit the wizard.
1. When the Tentacle Manager launches, click **GET STARTED**.
1. On the communication style screen, select **Listening Tentacle** and click **Next**.
1. In the **Octopus Web Portal**, navigate to the **Infrastructure** tab, select **Workers**, click **ADD WORKER ➜ WINDOWS**, and select **Listening Tentacle**.
1.
Copy the **Thumbprint** (the long alphanumerical string).
1. Back on the Tentacle server, accept the default listening port **10933**, paste the **Thumbprint** into the **Octopus Thumbprint** field, and click **Next**.
1. Click **INSTALL**, and after the installation has finished click **Finish**.
1. Back in the **Octopus Web Portal**, enter the hostname or IP address of the machine the Tentacle is installed on, e.g., `example.com` or `10.0.1.23`, and click **NEXT**.
1. Add a display name for the Worker (the server where you just installed the Listening Tentacle).
1. Select which [worker pools](/docs/infrastructure/workers/worker-pools) the Worker will be assigned to and click **SAVE**.

### Registering a Linux Listening Tentacle as a Worker \{#registering-linux-listening-worker}

The Tentacle agent needs to be installed on the target server to communicate with the Octopus Server. Please read the instructions for [installing a Linux Tentacle](/docs/infrastructure/deployment-targets/tentacle/linux) for more details.

1. In the **Octopus Web Portal**, navigate to the **Infrastructure** tab, select **Workers**, click **ADD WORKER ➜ LINUX**, and select **Listening Tentacle**.
1. Make a note of the **Thumbprint** (the long alphanumerical string).
1. On the Linux Tentacle server, run `/opt/octopus/tentacle/configure-tentacle.sh` in a terminal window to configure the Tentacle.
1. Give the Tentacle instance a name (default `Tentacle`) and press **Enter**.
1. Choose **1) Listening** for the **kind of Tentacle** to configure, and press **Enter**.
1. Configure the folder to store log files and press **Enter**.
1. Configure the folder to store applications and press **Enter**.
1. Enter the listening port to use (default **10933**) and press **Enter**.
1. Enter the **Thumbprint** from the Octopus Web Portal and press **Enter**.
1. Review the configuration commands that are displayed, and press **Enter** to install the Tentacle.
1.
Back in the **Octopus Web Portal**, enter the hostname or IP address of the machine the Tentacle is installed on, e.g., `example.com` or `10.0.1.23`, and click **NEXT**.
1. Add a display name for the Worker (the server where you just installed the Listening Tentacle).
1. Select which [worker pools](/docs/infrastructure/workers/worker-pools) the Worker will be assigned to and click **SAVE**.

### Registering a Worker with an SSH Connection \{#registering-ssh-connection-worker}

1. In the **Octopus Web Portal**, navigate to the **Infrastructure** tab, select **Workers** and click **ADD WORKER**.
2. Choose either **LINUX** or **MAC** and click **ADD** on the SSH Connection card.
3. Enter the DNS name or IP address of the deployment target, e.g., `example.com` or `10.0.1.23`.
4. Enter the port (port 22 by default) and click **NEXT**. Make sure the target server is accessible on the port you specify. The Octopus Server will attempt to perform the required protocol handshakes and obtain the remote endpoint's public key fingerprint automatically rather than have you enter it manually. This fingerprint is stored and verified by the server on all subsequent connections. If this discovery process is not successful, you will need to click **ENTER DETAILS MANUALLY**.
5. Add a display name for the Worker.
6. Select which [worker pools](/docs/infrastructure/workers/worker-pools) the Worker will be assigned to.
7. Select the account that will be used for the Octopus Server and the SSH target to communicate.
8. If entering the details manually, enter the **Host**, **Port**, and the host's fingerprint.

:::div{.hint}
From Octopus Server **2024.2.6856** both SHA256 and MD5 fingerprints are supported. We recommend using SHA256 fingerprints.
:::

You can retrieve the fingerprint of the default key configured in your sshd\_config file from the target server with the following command:

```bash
ssh-keygen -E sha256 -lf /etc/ssh/ssh_host_ed25519_key.pub | awk '{ print $2 }'
```

For Octopus Server prior to **2024.2.6856** use the following:

```bash
ssh-keygen -E md5 -lf /etc/ssh/ssh_host_ed25519_key.pub | awk '{ print $2 }' | cut -d':' -f2-
```

9. Specify whether Mono is installed on the SSH target to determine which version of [Calamari](/docs/octopus-rest-api/calamari) will be installed:
   - [Calamari on Mono](#mono-calamari), built against the full .NET Framework.
   - [Self-contained version of Calamari](#self-contained-calamari), built against .NET Core.
10. Click **Save**.

### Registering a Windows Polling Tentacle as a Worker \{#registering-windows-polling-worker}

Before you can configure your Windows servers as Tentacles, you need to install Tentacle Manager on the machines that you plan to use as Tentacles. Tentacle Manager is the Windows application that configures your Tentacle. Once installed, you can access it from your start menu/start screen. Tentacle Manager can configure Tentacles to use a [proxy](/docs/infrastructure/deployment-targets/proxy-support), delete the Tentacle, and show diagnostic information about the Tentacle.

1. Start the Tentacle installer, accept the license agreement, and follow the prompts.
1. When the Octopus Deploy Tentacle Setup Wizard has completed, click **Finish** to exit the wizard.
1. When the Tentacle Manager launches, click **GET STARTED**.
1. On the communication style screen, select **Polling Tentacle** and click **Next**.
1. If you are using a proxy see [Proxy Support](/docs/infrastructure/deployment-targets/proxy-support), otherwise click **Next**.
1. Add the Octopus credentials the Tentacle will use to connect to the Octopus Server:
   a. The Octopus URL: the hostname or IP address.
   b. Select the authentication mode and enter the details:
      i.
The username and password you use to log into Octopus, or:
      i. Your Octopus API key, see [How to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key).

:::div{.hint}
The Octopus credentials specified here are only used once to configure the Tentacle. All future communication is performed over a [secure TLS connection using certificates](/docs/security/octopus-tentacle-communication/#Octopus-Tentaclecommunication-Scenario:PollingTentacles).
:::

1. Click **Verify credentials**, and then click **Next**.
1. On the machine type screen, select **Worker** and click **Next**.
1. Choose the [Space](/docs/administration/spaces) the Worker will be registered in.
1. Give the machine a meaningful name, select which [worker pool](/docs/infrastructure/workers/worker-pools) the Worker will be assigned to, and click **Next**.
1. Click **Install**, and when the script has finished, click **Finish**.

The new Polling Tentacle will automatically show up in the Workers list.

### Registering a Linux Polling Tentacle as a Worker \{#registering-linux-polling-worker}

The Tentacle agent needs to be installed on the target server to communicate with the Octopus Server. Please read the instructions for [installing a Linux Tentacle](/docs/infrastructure/deployment-targets/tentacle/linux) for more details.

1. On the Linux Tentacle server, run `/opt/octopus/tentacle/configure-tentacle.sh` in a terminal window to configure the Tentacle.
1. Give the Tentacle instance a name (default `Tentacle`) and press **Enter**.
1. Choose **2) Polling** for the **kind of Tentacle** to configure, and press **Enter**.
1. Configure the folder to store log files and press **Enter**.
1. Configure the folder to store applications and press **Enter**.
1. Enter the **Octopus Server URL** (e.g. https://samples.octopus.app) and press **Enter**.
1. Enter the authentication details the Tentacle will use to connect to the Octopus Server:
   i.
Select **1)** if using an Octopus API key, see [How to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key), or:
   ii. Select **2)** to provide the username and password you use to log into Octopus.
1. Select **2) Worker** for the type of Tentacle to set up and press **Enter**.
1. Enter the **Space** you wish to register the Tentacle in and press **Enter**.
1. Provide a name for the Tentacle and press **Enter**.
1. Add which [worker pools](/docs/infrastructure/workers/worker-pools) the Worker will be assigned to (comma separated) and press **Enter**.
1. Review the configuration commands that are displayed, and press **Enter** to install the Tentacle.

The new Polling Tentacle will automatically show up in the Workers list.

### Installing a Kubernetes Worker

You can install the Kubernetes Worker using [Helm](https://helm.sh/) through the [octopusdeploy/kubernetes-agent](https://github.com/OctopusDeploy/helm-charts/tree/main/charts/kubernetes-agent) chart. This chart is hosted on [Dockerhub](https://hub.docker.com/r/octopusdeploy/kubernetes-agent) and can be pulled directly via the Helm CLI. To make things easier, Octopus provides an installation wizard that generates the Helm command for you to run.

:::div{.warning}
Helm will use your current kubectl config, so make sure your kubectl config is pointing to the correct cluster before executing the following helm commands. You can see the current kubectl config by executing:

```bash
kubectl config view
```
:::

1. In the Octopus Web Portal, navigate to the **Infrastructure** tab, select **Workers**, and click **ADD WORKER**.
2. Choose **Kubernetes** and click **ADD** on the Kubernetes Worker card.
3. Enter a **Name** for the worker, select the **Worker Pools** to which the worker should belong, and click **NEXT**.
   1. The dialog permits the inline creation of a worker pool via the **+** button.
   2. Click **Show advanced** to provide a custom storage class or override the Octopus Server URL if required.
4. Select the desired shell (Bash or PowerShell) and copy the supplied command.
5. Execute the copied command in a terminal configured with your k8s cluster, and click **NEXT**.
   1. This step is not required if the NFS driver already exists in your cluster (due to prior installs of a Kubernetes worker or deployment target).
6. Select the desired shell (Bash or PowerShell), then copy the supplied command.
7. Execute the copied command in a terminal configured with your k8s cluster.
   1. Installing the Helm chart can take some time (potentially minutes, depending on infrastructure).
8. A green success bar will appear when the Helm chart has completed installation and the worker has registered with the Octopus Server.
9. Click the **View Worker** button to display the settings of the created worker, or **Cancel** to return to the **Add Worker** page.

:::div{.warning}
As the display name is used for the Helm release name, this name must be unique for a given cluster. This means that if you have a Kubernetes agent and a Kubernetes worker with the same name (e.g. `production`), they will clash during installation. If you do want a Kubernetes agent and a Kubernetes worker to have the same name, prepend the type to the name (e.g. `worker production` and `agent production`) during installation. This will install them with unique Helm release names, avoiding the clash. After installation, the worker and target names can be changed in the Octopus Server UI to remove the prefix.
:::

### Registering Workers with the Tentacle executable \{#registering-workers-with-the-tentacle-executable}

Tentacle workers can also register with the server using the Tentacle executable (version 3.22.0 or later), for example:

```
.\Tentacle.exe register-worker --instance MyInstance --server "https://example.com/" --comms-style TentaclePassive --apikey "API-YOUR-KEY" --workerpool "Default Worker Pool"
```

Use `TentacleActive` instead of `TentaclePassive` to register a polling Tentacle worker.

The Tentacle executable can also be used to deregister workers, for example:

```
.\Tentacle.exe deregister-worker --instance MyInstance --server "https://example.com/" --apikey "API-YOUR-KEY"
```

:::div{.hint}
For information on creating an API key, see [how to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key).
:::

## Recommendations for External Workers \{#recommendations-for-external-workers}

We highly recommend setting up external workers on a different machine to the Octopus Server. We also recommend running external workers under a different user account than the Octopus Server. It can be advantageous to have workers on the same local network as the server to reduce package transfer times.

Default pools attached to cloud targets allow co-location of workers and targets. This can help make workers specific to your targets, as well as making the Octopus Server more secure by using external workers.

## Run multiple processes on Workers simultaneously \{#run-multiple-processes-on-workers-simultaneously}

Many workers may be running in parallel, and a single worker can run multiple actions in parallel. The [task cap](/docs/support/increase-the-octopus-server-task-cap/) determines how many tasks (deployments or system tasks) can run simultaneously. The [system variable](/docs/projects/variables/system-variables) `Octopus.Action.MaxParallelism` controls how much parallelism is allowed in executing a deployment action.
It applies the same to deployment targets as it does to workers. For example, if `Octopus.Action.MaxParallelism` is set to its default value of 10, any one deployment action will:

- Deploy to at most 10 deployment targets simultaneously, or
- Have no more than 10 concurrent worker invocations running.

Parallel steps in a deployment can each reach their own `MaxParallelism`. Coupled with multiple deployment tasks running (up to the task cap), the number of concurrent worker invocations can grow quickly. External workers and the built-in worker behave the same in this regard: workers can run many actions simultaneously, including actions from different projects. Note that this means the execution of an action doesn't have exclusive access to a worker, which could allow one project to access the working folder of another project.

Note that if external workers are added to the default pool, the workload is shared across those workers: a single external worker will be asked to perform exactly the same load as the built-in worker would have been doing, two workers might get half each, and so on.

### Run a process on a Worker exclusively \{#run-process-on-worker-exclusively}

Sometimes it's not desirable to run multiple deployments or runbooks on a Worker in parallel. Doing so can cause issues if one or more processes try to access a shared resource, such as a file. By default, the [system variable](/docs/projects/variables/system-variables) `OctopusBypassDeploymentMutex` is set to `True` when deploying to a worker. If you want to prevent workers running tasks in parallel, you can set `OctopusBypassDeploymentMutex` to `False`. If you need even more fine-grained control over access to a shared resource, we recommend using a [named mutex](https://docs.microsoft.com/en-us/dotnet/api/system.threading.mutex?view=net-5.0) around the process.
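On Linux workers, where .NET named mutexes aren't available to shell scripts, the same exclusive-access idea can be sketched with an advisory `flock` lock. This is a hypothetical illustration, not an Octopus feature; the lock and log file paths are placeholders:

```shell
#!/usr/bin/env bash
# Sketch: serialize access to a shared file across concurrent worker
# invocations using an advisory lock (util-linux flock).
LOCKFILE=/tmp/shared-resource.lock
LOGFILE=/tmp/shared-resource.log

(
  # Block until this process holds an exclusive lock on fd 200.
  flock -x 200

  # Critical section: only one process appends at a time.
  echo "deployment $$ writing to shared log" >> "$LOGFILE"
) 200>"$LOCKFILE"
```

Any scripts that wrap their critical sections in the same lock file serialize against each other, regardless of which deployment task spawned them.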
To learn more about how you can create a named mutex around a process using PowerShell, see this [log file example](https://learn-powershell.net/2014/09/30/using-mutexes-to-write-data-to-the-same-logfile-across-processes-with-powershell/).

:::div{.success}
You can see how Octopus uses this technique with the built-in IIS step in the [open-source Calamari library](https://github.com/OctopusDeploy/Calamari/blob/master/source/Calamari/Scripts/Octopus.Features.IISWebSite_BeforePostDeploy.ps1#L144).
:::

### Workers in HA setups

With Octopus High Availability, each node has a task cap and can invoke the built-in worker locally, so for a 4-node HA cluster, there are 4 built-in workers. Therefore, if you move to external workers, it's likely you'll need to provision workers to at least match your server nodes; otherwise, you'll be asking each worker to do the sum of what the HA nodes were previously doing.

## Learn more

- [Worker blog posts](https://octopus.com/blog/tag/workers/1)
- [Worker pool variables](/docs/projects/variables/worker-pool-variables)

# Cluster Configuration

Source: https://octopus.com/docs/infrastructure/workers/kubernetes-worker/cluster-configuration.md

The Kubernetes worker has been proven to be effective on a variety of installations, but some configurations are more complex than others. Three factors affect the likelihood of success:

1. Kubernetes distribution/managed service (e.g., AKS, EKS, GKE)
2. Storage provider type (i.e., the filesystem shared between worker and pods)
3. The operating system of the Kubernetes nodes

When determining the best combination of these for your situation, it may be simplest to start small and iterate. The following table defines known good configurations, though many other configurations are likely to produce a valid system.
| Distribution / Managed Service | Storage Solution | Approach |
|:------------------------------:|:----------------:|-----------------------------------|
| Minikube                       | NFS              | No additional configuration required† |
| MicroK8s                       | NFS              | No additional configuration required† |
| Kind                           | NFS              | No additional configuration required† |
| AKS                            | NFS              | No additional configuration required |
|                                | Azure Files      | No additional configuration required |
| GKE                            | NFS              | No additional configuration required |
| EKS                            | NFS              | No additional configuration required |
|                                | EFS              | Requires Octopus Server 2024.3+ |
| RKE2                           | Longhorn         | Requires pre-configured storage‡ |
| OpenShift                      | NFS              | Requires specific configuration‡ |

_† Recommended for local development or edge usage_

_‡ Please [contact support](https://octopus.com/support) for additional information_

Any storage class that supports being mounted in [ReadWriteMany](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) mode is likely to satisfy the Kubernetes worker's storage requirements. The Kubernetes worker is compatible with most Ubuntu-based and Amazon Linux nodes.

:::div{.warning}
The NFS storage solution cannot be used with [Bottlerocket](https://aws.amazon.com/bottlerocket/?amazon-bottlerocket-whats-new.sort-by=item.additionalFields.postDateTime&amazon-bottlerocket-whats-new.sort-order=desc) nodes, as [a current issue with SELinux enforcement](https://github.com/bottlerocket-os/bottlerocket/issues/4116) prevents execution from the NFS share.

The Kubernetes worker is not compatible with Windows nodes and is currently unable to create script pods based on Windows images.
:::

# Azure Files

Source: https://octopus.com/docs/installation/file-storage/azure-file-storage.md

If your Octopus Server is running in Microsoft Azure, we recommend [Azure Files](https://docs.microsoft.com/azure/storage/files/storage-files-introduction), which presents a file share over SMB 3.0 that can be shared across all of your Octopus Servers.

After you have created your file share, the best option is to mount the Azure file share on each Octopus Server node. Follow the instructions in [Microsoft's documentation](https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows#mount-the-azure-file-share). For the purposes of this document, we will use `Z:\` as the drive letter.

Note: Use of a drive letter is for demonstration purposes; Azure Files supports UNC paths as well.

[Install Octopus](/docs/installation) and then run the following.

If the shared folders are going to exist in the same root location, you can use a single command to set the root, and it will automatically set the individual components to subfolders within the root:

```powershell
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --clusterShared "Z:\SharedFolder"
```

The resulting folder structure will look something like this:

```
Z:\SharedFolder\Artifacts
Z:\SharedFolder\TaskLogs
Z:\SharedFolder\Packages
Z:\SharedFolder\Imports
Z:\SharedFolder\EventExports
```

If you need the folders to be in different locations, you have the flexibility to set the paths of the individual items.

:::div{.hint}
It's worth noting that you need to have created the folders within the Azure file share before running this step.
:::

```powershell
# Set the path
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --artifacts "Z:\Artifacts"
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --taskLogs "Z:\TaskLogs"
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --nugetRepository "Z:\Packages"
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --imports "Z:\Imports"
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --eventExports "Z:\EventExports"
```

## High Availability

With Octopus Deploy's [High Availability](/docs/administration/high-availability) functionality, you connect multiple nodes to the same database and file storage. Octopus Server makes specific assumptions about the performance and consistency of the file system when accessing log files, performing log retention, storing deployment packages and other deployment artifacts, exported events, and temporary storage when communicating with Tentacles. What that means is:

- Octopus Deploy is sensitive to network latency. It expects the file system to be hosted in the same data center as the virtual machines or container hosts running the Octopus Deploy service.
- It is extremely rare for two or more nodes to write to the same file at the same time.
- It is common for two or more nodes to read the same file at the same time.

In our experience, you will have the best experience when all the nodes and the file system are located in the same data center. Modern network storage devices and operating systems handle almost all the scenarios a highly available instance of Octopus Deploy will encounter.

## Disaster Recovery

For disaster recovery scenarios, [we recommend leveraging a hot/cold configuration](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). To achieve this with Azure, you have several options available.
Further details on the redundancy options available for Azure Storage can be found [here](https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy).

### Zone-redundant Storage

Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. This protects against the failure of an entire datacenter within the primary region. The main drawback of this option is that it does not protect against data loss in the event the primary region is destroyed. This can be mitigated through the use of [snapshots](https://learn.microsoft.com/en-us/azure/storage/files/storage-snapshots-files) and [Azure file share backup](https://learn.microsoft.com/en-us/azure/backup/azure-file-share-backup-overview?tabs=snapshot).

### Geo-redundant Storage

Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using locally redundant storage (LRS). It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region. The main drawback of this option is that data is only replicated within a single location in the primary region. Any failure of this location would require a failover to the secondary region.

:::div{.warning}
Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region cannot be recovered. The interval between the most recent writes to the primary region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in time to which data can be recovered. The Azure Storage platform typically has an RPO of less than 15 minutes, although there's currently no SLA on how long it takes to replicate data to the secondary region.
:::

### Geo-zone-redundant Storage

Geo-zone-redundant storage (GZRS) combines ZRS and GRS. It protects against a failure of availability zones within the primary region and a failure of the entire primary region.

:::div{.hint}
When using GRS or GZRS, a failure of the primary region would require a [manual failover](https://learn.microsoft.com/en-us/azure/storage/common/storage-failover-customer-managed-unplanned?tabs=grs-ra-grs) to be triggered within Azure, which would update the Azure Storage DNS entry to point to the secondary region. This process can take up to an hour.
:::

# Azure Load Balancers

Source: https://octopus.com/docs/installation/load-balancers/azure-load-balancers.md

To distribute HTTP load among Octopus Server nodes with a single point of access, we recommend using an HTTP load balancer. Azure has a wide range of [load balancers](https://docs.microsoft.com/azure/architecture/guide/technology-choices/load-balancing-overview) that will work with Octopus in a highly-available configuration:

- [Azure Traffic Manager](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview)
- [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview)
- [Azure Load Balancer](https://docs.microsoft.com/azure/load-balancer/load-balancer-overview)
- [Azure Front Door](https://docs.microsoft.com/azure/frontdoor/front-door-overview)
- [Kemp LoadMaster](https://kemptechnologies.com/uk/solutions/microsoft-load-balancing/loadmaster-azure/)
- [F5 Big-IP Virtual Edition](https://www.f5.com/partners/technology-alliances/microsoft-azure)

For disaster recovery scenarios, [we recommend leveraging a hot/cold configuration](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). Azure's [recommendation](https://learn.microsoft.com/en-us/azure/reliability/reliability-app-service?tabs=graph%2Ccli#active-passive-architecture) of using Azure Front Door is a great way to achieve this.
This method allows you to easily route traffic to the secondary region in the event of a primary region failure.

# Azure SQL

Source: https://octopus.com/docs/installation/sql-database/azure-sql.md

Each Octopus Server node stores project, environment, and deployment-related data in a shared Microsoft SQL Server database. Since this database is shared, it's important that the database server is also highly available. To host the Octopus SQL database in Azure, there are two options to consider:

- [SQL Server on a Virtual Machine](https://docs.microsoft.com/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-server-iaas-overview/) - To run SQL Server on a VM, please refer to our [self-managed SQL Server guide](/docs/installation/sql-database/self-managed-sql-server).
- [Azure SQL Database as a Service](https://docs.microsoft.com/azure/sql-database/sql-database-technical-overview/)

## Using Microsoft Entra in Azure SQL \{#using-entra-in-azure-sql}

Octopus Deploy supports using [Microsoft Entra](https://docs.microsoft.com/en-us/sql/connect/ado-net/sql/azure-active-directory-authentication?view=sql-server-ver15#setting-azure-active-directory-authentication) for authentication. This includes the ability to use a [Managed Identity](https://docs.microsoft.com/en-us/sql/connect/ado-net/sql/azure-active-directory-authentication?view=sql-server-ver15#using-active-directory-managed-identity-authentication) when connecting to your Octopus database hosted in Azure SQL.

:::div{.hint}
For self-hosted installations of Octopus Deploy, you will need to click on the `Advanced` link to set the connection string to the Azure SQL instance during the installation process.
:::

## High Availability

The database is a critical component of Octopus Deploy. If the database is lost or destroyed, all your configuration will be lost with it.
We recommend using [zone-redundant availability](https://learn.microsoft.com/en-us/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell#zone-redundant-availability) with Azure SQL to ensure the resilience and availability of your Octopus database. This ensures that your database replicas are spread across three Azure availability zones in the primary region. In the event of a failure of the primary zone, Azure SQL automatically switches to another zone.

## Disaster Recovery

For disaster recovery scenarios, [we recommend leveraging a hot/cold configuration](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). To achieve this with Azure, we recommend using [Active geo-replication](https://learn.microsoft.com/en-us/azure/azure-sql/database/active-geo-replication-overview?view=azuresql) or [Failover groups](https://learn.microsoft.com/en-us/azure/azure-sql/database/failover-group-sql-db?view=azuresql) in conjunction with zone-redundant availability. This ensures that data is replicated asynchronously to a secondary region alongside the zone replication within the primary region.

When deciding between these two options, the main consideration is how many databases you have running in Azure SQL. Active geo-replication is a great lightweight option for a single database. If you have other, non-Octopus, databases running in Azure SQL, then Failover groups may be a better fit. This is especially true if you already have a group configured for your existing databases.

In the event of a failover, it would be necessary to re-configure the database connection string within Octopus to point to the secondary region. Azure's guide on [outage recovery](https://learn.microsoft.com/en-us/azure/azure-sql/database/disaster-recovery-guidance?view=azuresql#configure-your-database-after-recovery) covers other recommended steps and checks in more detail.

:::div{.warning}
When a disaster occurs, any data not synchronized will be lost.
Depending on the replication speed, this could be up to a couple of minutes.
:::

# Deploy a Helm chart

Source: https://octopus.com/docs/kubernetes/steps/helm.md

Helm Charts are like a package manager for Kubernetes applications, allowing users to reuse and share complex resource configurations.

## Helm chart sources

You can source your Helm charts from two different sources:

- Packages from Helm or OCI feeds
- Git Repository

### Helm feed

A Helm Feed in Octopus refers to a [Helm Chart repository](https://helm.sh/docs/topics/chart_repository/). This repository is effectively just an HTTP server that houses an `index.yaml`, which describes the charts available on that server. Octopus uses this index file to determine the available "packages" (Charts) and versions. A chart is a tarball with a name like `alpine-0.1.2.tgz`, which in this example Octopus will interpret as having PackageID `alpine` and version `0.1.2`.

There are various ways you can host a chart repository, including third-party tools like [ChartMuseum](https://github.com/chartmuseum/chartmuseum), [Artifactory](https://jfrog.com/help/r/jfrog-artifactory-documentation/kubernetes-helm-chart-repositories), [Cloudsmith](https://help.cloudsmith.io/docs/helm-chart-repository), or even hosting your own [static web server](https://helm.sh/docs/topics/chart_repository/#hosting-chart-repositories).

:::figure
![Helm Feed](/docs/img/deployments/kubernetes/helm-update/helm-feed.png)
:::

:::div{.info}
The built-in repository is [capable of storing Helm Charts](/docs/packaging-applications/#supported-formats). However, the mechanism for determining the **PackageID** and **Version** may differ depending on the contents of the `.tgz` file. If the `.tgz` file contains a `chart.yaml` file, the PackageID is determined by the `name` field, and the version by the `version` field of the YAML.
```yaml
apiVersion: v2
name: petclinic-chart
description: A Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "1.16.0"
```

If the `.tgz` does not have a `chart.yaml` file, the PackageID and version are interpreted from the filename as described above.
:::

For more information about Helm Chart repositories and how to run your own private repository, check out the living documentation on their [GitHub repo](https://helm.sh/docs/topics/chart_repository/).

### OCI-based registry feed

The Open Container Initiative (OCI) is a lightweight, open governance structure (project), formed under the auspices of the Linux Foundation, for the express purpose of creating open industry standards around container formats and runtimes. An OCI-based registry can contain zero or more Helm repositories, and each of those repositories can contain zero or more packaged Helm charts.

:::figure
![OCI Registry Feed](/docs/img/deployments/kubernetes/helm-update/oci-registry-feed.png)
:::

For more information about using OCI-based registries and how to run your own private repository, check out the living documentation on their [GitHub repo](https://helm.sh/docs/topics/registries/).

### Git repository

Sourcing your Helm charts from a Git repository can streamline your deployment process by reducing the number of steps required to get them into Octopus. To configure a Git repository source, select the `Git Repository` option as your Chart Source.

#### Database projects

If you are storing your project configuration directly in Octopus (i.e. not in a Git repository using the [Configuration as code feature](/docs/projects/version-control)), you can source your charts from a Git repository by entering the details of the repository, including:

- URL
- Credentials (either anonymous or selecting a Git credential from the Library)

When creating a Release, you choose the tip of a branch for your Helm charts. The commit hash for this branch is saved to the Release.
This means redeploying that release will only ever use that specific commit and not the _new_ tip of the branch.

#### Version-controlled projects

If you are storing your project configuration in a Git repository using the [Configuration as code feature](/docs/projects/version-control), in addition to the option above, you can source your charts from the same Git repository as your deployment process by selecting **Project** as the Git repository source. When creating a Release using this option, the commit hash used for your deployment process will also be used to source the chart files.

## Helm upgrade step

Since the [helm upgrade](https://docs.helm.sh/helm/#helm-upgrade) command provides the ability to ensure that the chart is installed when it runs for the first time (by using the `--install` argument), this upgrade command is the most practical step to provide.

:::div{.success}
Remember that since the Kubernetes cluster connection context is available via the kubectl script step, any helm commands that you want to perform that don't fit into the existing helm upgrade step can easily be scripted as per usual.
:::

### Upgrade options

:::figure
![Upgrade options](/docs/img/deployments/kubernetes/helm-update/upgrade-options.png)
:::

#### Kubernetes release

The Kubernetes release uniquely identifies the released chart in the cluster. Because of the unique naming requirements of the release name, the default value provided includes both the project and environment name to ensure that successive Octopus releases do not conflict with one another. When redeploying new versions of the chart, this name is what is used to uniquely identify the resources that are related to that Octopus deployment. Helm requires that this name consist of only lowercase alphanumeric and dash (-) characters.
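As a sketch, the character rule above can be applied when composing your own release name from project and environment names (a hypothetical helper for illustration only; this is not how Octopus generates its default value):

```python
import re

def release_name(project: str, environment: str) -> str:
    """Compose a Helm release name that satisfies the
    lowercase-alphanumeric-and-dash rule described above."""
    raw = f"{project}-{environment}".lower()
    # Collapse any run of disallowed characters into a single dash
    return re.sub(r"[^a-z0-9-]+", "-", raw).strip("-")

print(release_name("Pet Clinic", "Production"))  # pet-clinic-production
```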
:::div{.hint}
Due to the design of Helm, release names must be [unique across the entire cluster](https://github.com/helm/helm/issues/2060#issuecomment-287164881), not just within a namespace.
:::

#### Reset values

By default, Helm will carry forward any existing configuration between deployments if not explicitly overridden. To ensure that the Octopus-provided configuration acts as the source of truth, the `--reset-values` argument is set on the invoked command; however, this can be disabled if desired.

#### Helm client tool

Helm performs some strict version checks when running commands against the cluster and requires that the client have the same minor version as the tiller service (the Helm component running in your Kubernetes cluster).

:::div{.success}
Like the other Kubernetes steps, the Octopus Server or workers will run the Helm commands directly during execution and need to have the `helm` executable installed.
:::

Since it is quite common to have different versions of Helm across your deployment workers, or even across different environments' clusters, this option lets you override the helm client tool that is invoked. By default, Octopus expects the helm command to be directly available to the execution context. Provide either the explicit full path to the desired version of the helm tool, or include a version of helm as a package. The available versions can be downloaded from the public Helm [GitHub repository](https://github.com/helm/helm/releases).

Unlike some other Octopus steps, such as [Azure PowerShell Scripts](/docs/deployments/custom-scripts/azure-powershell-scripts), the helm client tools are not automatically embedded or installed by Octopus. This is due to the strict version requirements, which would differ between Octopus Server installations, and the diverse number of platform builds available.
### Template values

:::figure
![Template Values](/docs/img/deployments/kubernetes/helm-update/new-template-values.png)
:::

The configuration for the Kubernetes resources required in a Helm Chart can be provided by making use of [Chart Templates](https://docs.helm.sh/chart_template_guide/). In each of the following options, the values files are passed into the `helm upgrade` command with the `-f` argument.

- **Files in the chart:** If there are any other values files contained within the selected chart (by default, `./values.yaml` in the root of the package is picked up by helm), they can be referenced with this option. Octopus Variable replacement will be performed on the file before being used. This works with charts sourced from Packages or Git repositories.
- **Files in a Git repository:** When using publicly available Helm Charts as the package source for this step, you may want to source your custom values files from outside Octopus, for example, through files committed to the Git repository. Files obtained through this option will have Octopus Variable replacement performed before being used.
- **Files in a package:** When using publicly available Helm Charts as the package source for this step, you may want to source your custom values files from outside Octopus, for example, through files committed to a package repository. Files obtained through this option will have Octopus Variable replacement performed before being used.
- **Key values:** This option provides the ability to quickly provide key/value pairs of template configuration.
- **Inline YAML:** Standard Octopus [variable substitution syntax](/docs/projects/variables/variable-substitutions) can be used so long as the final contents are a valid YAML file.

Except for **Files in the chart**, you can source template values from the same source type multiple times (e.g. you can source values from multiple different Git repositories).
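Helm applies these values sources by passing them as successive `-f` files, where later files override earlier ones. A minimal sketch of that merge behavior (a simplification for illustration, not Helm's actual coalescing code):

```python
def merge(base: dict, override: dict) -> dict:
    """Recursively merge two values dictionaries; keys in `override` win,
    mirroring how a later -f file overrides an earlier one."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

chart_defaults = {"drink": "water", "image": {"repository": "nginx", "tag": "1.0"}}
inline_yaml = {"drink": "tea"}
key_values = {"drink": "coffee", "image": {"tag": "2.0"}}

# Applied lowest-precedence first: later sources win, nested keys are merged.
values = merge(merge(chart_defaults, inline_yaml), key_values)
print(values)  # {'drink': 'coffee', 'image': {'repository': 'nginx', 'tag': '2.0'}}
```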
#### Reordering values sources

The order of the template values sources dictates the precedence of how the values are applied. Sources are passed in reverse order into the `helm upgrade` command with the `-f` argument, which means that sources (and their values) higher in the list have higher precedence and will override the same value in a source with lower precedence. See more information regarding values file precedence in the Helm [documentation](https://helm.sh/docs/chart_template_guide/values_files/).

To reorder the sources, click the **Reorder** button. In the following figure, the **Key values** source value for the **drink** key will take precedence over the value for the same key in the **Inline YAML** source. The value for the **drink** key defined in the chart's default `values.yaml` file will be overridden.

:::figure
![Ordering Template Values](/docs/img/deployments/kubernetes/helm-update/reorder-template-values.png)
:::

## Separating image updates from Helm chart updates

Application updates are often rolled up into full Helm chart version updates because that is the only mechanism provided. When deploying Helm charts with Octopus Deploy, a values file can be used for container image tags so that your Helm charts only get updated when your infrastructure definition changes.

### Setting up referenced images with Helm chart deployments

This example uses the [Helm hello-world chart](https://github.com/helm/examples/tree/main/charts/hello-world). For each of the container images referenced by the Helm chart, add a referenced image under **Docker image references**. The following snippet assumes the referenced package has the **Name** `nginx`. Edit the RAW Values YAML to update the image repository and tag. Take a look inside the example Helm chart to see how they've configured their template to use these values.
```yaml
image:
  repository: #{Octopus.Action.Package[nginx].PackageId}
  tag: #{Octopus.Action.Package[nginx].PackageVersion}
```

### Automatically creating releases

Using referenced images with your Helm chart step allows [external feed triggers](/docs/projects/project-triggers/external-feed-triggers) to automatically create releases when one or more new images are pushed to your registries. Further to this, [lifecycles](/docs/releases/lifecycles) can be used to fully automate deploying your releases to selected environments.

## Known limitations

:::div{.warning}
Please note that [Cloud Dynamic Workers](/docs/infrastructure/workers/dynamic-worker-pools/#available-dynamic-worker-images) come with Helm 2.9.1 installed. This means that if you choose V3 on the Helm step template, it will fall back to V2 during execution. To get around this problem, use the [Execution Containers](/docs/projects/steps/execution-containers-for-workers) feature with the [worker tools image](https://hub.docker.com/r/octopusdeploy/worker-tools).
:::

Helm provides [provenance](https://helm.sh/docs/topics/provenance/) tools that assist in verifying the integrity and origin of a package. Octopus does not _currently_ perform validation checks automatically during a deployment using these tools; however, this may change in the future.

Although the helm client tool can be overridden for use during the step execution as noted above, the acquisition process currently requires a version of the helm client locally to retrieve the chart. The version of helm available does not need to match the version of the tiller service.

:::div{.warning}
Helm deployments using tar.gz packages can fail if the path is 100+ characters. To get around this problem, use ZIP packages or shorter paths/filenames instead. See [https://github.com/OctopusDeploy/Issues/issues/8132](https://github.com/OctopusDeploy/Issues/issues/8132) for more info.
:::

:::div{.warning}
Due to how deployment cancellation currently works, the Helm `--atomic` argument does not result in automatic rollbacks when a deployment is cancelled. This means that any Helm chart changes that were being deployed may become stuck or only partially deployed, and require manual clean-up. Furthermore, if the Octopus deployment timeout is set lower than the Helm timeout, a similar issue may arise if the Helm chart deployment is interrupted midway. To ensure a smooth deployment experience, we recommend setting a larger Octopus timeout than the Helm timeout.
:::

## Learn more

- [Kubernetes blog posts](https://octopus.com/blog/tag/kubernetes/1)

:::div{.hint}
**Step updates**

**2024.1:**

- `Upgrade a Helm Chart` was renamed to `Deploy a Helm chart`.
- Support was added for Helm charts stored in Git repositories. You can learn more in [this blog post](https://octopus.com/blog/git-resources-in-deployments).

**2023.3.4127:**

- Support was added for Helm repositories stored in OCI-based registries.
:::

# Storage

Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/storage.md

:::div{.info}
The following is applicable to both Kubernetes Agent and Kubernetes Worker.
:::

During a deployment, Octopus Server first sends any required scripts and packages to [Tentacle](https://octopus.com/docs/infrastructure/deployment-targets/tentacle), which writes them to the file system. The actual script execution then takes place in a different process called [Calamari](https://github.com/OctopusDeploy/Calamari), which retrieves the scripts and packages directly from the file system. On a Kubernetes agent (or worker), scripts are executed in separate Kubernetes pods (script pods) as opposed to in a local shell (Bash/PowerShell). This means the Tentacle pod and script pods don't automatically share a common file system.
Since the Kubernetes agent/worker is built on the Tentacle codebase, it is necessary to configure shared storage so that the Tentacle pod can write files to a place that the script pods can read from. We offer two options for configuring the shared storage: you can either use the cluster's default `ReadWriteOnce` storage class, or specify a StorageClass name during setup:

:::figure
![Kubernetes Agent Wizard Config Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png)
:::

## Cluster default ReadWriteOnce

:::div{.info}
This is a new default in v3 of the Kubernetes agent.
:::

By default, the Kubernetes agent will request the default storage class of the cluster and specify the `ReadWriteOnce` (also known as `RWO`) access mode. As each script pod needs access to the shared storage, this causes the script pods to be scheduled onto the same node as the main Tentacle pod. As a result, by default, the Kubernetes agent does not spread its work across multiple nodes, but performs all work on the same node. This change was made from v2 due to reliability and security concerns with the previously default NFS storage.

## Custom StorageClass \{#custom-storage-class}

If distribution of script pods across multiple nodes is desired, you can specify your own `StorageClass`. This `StorageClass` must be capable of the `ReadWriteMany` (also known as `RWX`) access mode. Many managed Kubernetes offerings provide storage that requires little effort to set up. These take the form of a "provisioner" (named as such because it "provisions" storage), which you can then tie to a `StorageClass`.
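As a sketch, tying a provisioner to an RWX-capable `StorageClass` looks like the following (here using the Azure Files CSI provisioner; the class name and SKU are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: octopus-agent-rwx    # illustrative name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS       # illustrative; see the Azure Files SKU guidance below
allowVolumeExpansion: true
```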
Some examples are listed below:

| **Offering** | **Provisioner** | **Default StorageClass name** |
| ------------ | --------------- | ----------------------------- |
| [Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-storage) | `file.csi.azure.com` | `azurefile` |
| [Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) | `efs.csi.aws.com` | `efs-sc` |
| [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview) | `filestore.csi.storage.gke.io` | `standard-rwx` |

:::div{.info}
See this [blog post](https://octopus.com/blog/efs-eks) for a tutorial on connecting EFS to an EKS cluster.
:::

If you manage your own cluster and don't have offerings from cloud providers available, there are some in-cluster options you could explore:

- [Longhorn](https://longhorn.io/)
- [Rook (CephFS)](https://rook.io/)
- [GlusterFS](https://www.gluster.org/)

## Azure Files CSI driver

When specifying a custom storage class that leverages the [Azure Files CSI driver](https://learn.microsoft.com/en-us/azure/aks/create-volume-azure-files), we highly recommend provisioning the backing storage account with the `PremiumV2_LRS` or `PremiumV2_ZRS` SKU (`skuname`). This will improve deployment performance due to the high performance profile and low-latency SSDs.

# Automatically tracking third party helm charts

Source: https://octopus.com/docs/kubernetes/tutorials/automatically-track-third-party-helm-charts.md

With a growing number of applications being provided with Helm charts as a primary method of installation, often all that needs to be done is a `helm install` against your cluster and the application will be up and running. However, managing updates can be a more involved process.
Not only do you need to know when a new release is available, but you also need someone with credentials to run the `helm upgrade` against your cluster. This means you'll either need to share important credentials among everyone performing updates, or have only a few people busy performing these updates. Octopus Deploy provides a full workflow to manage updates, either hands-on or fully hands-off.

### Setting up the project

A Helm chart deployment like this is simple with Octopus Deploy.

1. Start with the **Deploy a Helm chart** step
2. Link it to the required Kubernetes clusters via [target tags](/docs/infrastructure/deployment-targets/target-tags)
3. Reference the desired Helm chart
4. Configure the namespace and any values required for your application

:::figure
![Helm chart deployment process](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-deployment-process.png)
:::

Sample OCL for version controlled projects:

```ruby
step "deploy-ingress-nginx-helm-chart" {
    name = "Deploy Ingress Nginx Helm Chart"
    properties = {
        Octopus.Action.TargetRoles = "kind"
    }

    action {
        action_type = "Octopus.HelmChartUpgrade"
        properties = {
            Octopus.Action.Helm.ClientVersion = "V3"
            Octopus.Action.Helm.Namespace = "nginx-local"
            Octopus.Action.Helm.ReleaseName = "ingress-nginx"
            Octopus.Action.Helm.ResetValues = "True"
            Octopus.Action.Package.DownloadOnTentacle = "False"
            Octopus.Action.Package.FeedId = "ingress-nginx"
            Octopus.Action.Package.PackageId = "ingress-nginx"
        }

        packages {
            acquisition_location = "Server"
            feed = "ingress-nginx"
            package_id = "ingress-nginx"
            properties = {
                SelectionMode = "immediate"
            }
        }
    }
}
```

### Helpful settings

By default, Octopus will start versioning releases from `0.0.1` and count up patch versions from there. Helm charts already have a meaningful version number that you may wish to use instead.
You can change your releases to track the Helm chart version by heading to the project settings and changing the release versioning rule to use the version number from the deployment step.

:::figure
![Change release versioning](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-versioning-rule.png)
:::

Sample OCL for version controlled projects:

```ruby
versioning_strategy {
    donor_package {
        step = "deploy-ingress-nginx-helm-chart"
    }
}
```

### Creating the trigger

Triggers can be created directly from the deployment process by clicking the **Create a trigger** link, or by navigating to the **Triggers** page and clicking **Add Trigger**. Enter a name and select which container images or Helm charts you'd like to watch for updates. In this example, the Default channel has a lifecycle that will automatically deploy to the Development environment for testing; more on that later.

:::figure
![Helm chart create trigger](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-create-trigger.png)
:::

Once the trigger is created, you can watch the trigger's execution history. Within a couple of minutes you'll see your very first release created.

:::figure
![Helm chart trigger history](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-trigger-history.png)
:::

### Automatic deployment strategies

Back on the project dashboard, you can see the release is not only created but also successfully deployed to your cluster.

:::figure
![Helm chart deployed release](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-deployed-release.png)
:::

But what if there was only a production environment? You may want to be more careful about deploying updates the moment they are released. You can control this with channels and lifecycles. First, [create a new lifecycle](/docs/releases/lifecycles), called Production here.
:::figure
![Helm chart production lifecycle](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-production-lifecycle.png)
:::

Then [create a channel](/docs/releases/channels) in the project that uses this lifecycle.

:::figure
![Helm chart production channel](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-production-channel.png)
:::

Back in the trigger, change the channel to Production instead.

:::figure
![Helm chart trigger production channel](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-trigger-production-channel.png)
:::

New releases will remain undeployed until someone has time to manually review the changes and click **Deploy**.

:::figure
![Helm chart undeployed release](/docs/img/deployments/kubernetes/automatically-track-third-party-helm-charts/helm-chart-undeployed-release.png)
:::

These are two simple cases; take a look at [environment recommendations](/docs/infrastructure/environments/environment-recommendations) for more information on what's possible.

#### Getting notified about new releases

Now you have a list of releases created and waiting to be deployed. This isn't very useful if no one knows about it. Octopus Deploy offers a quick and easy notification service through [subscriptions](/docs/administration/managing-infrastructure/subscriptions), which let you send the right people an email or message whenever a release is created in a particular project.

# Migrating to Octopus Cloud

Source: https://octopus.com/docs/octopus-cloud/migrations.md

Migrating from a self-hosted instance of Octopus Deploy to Octopus Cloud can streamline your deployment processes by removing infrastructure overhead while ensuring you continue to enjoy the robust capabilities of Octopus. This guide outlines the benefits of Octopus Cloud, the effort involved in migrating, and step-by-step instructions to help you have a smooth transition.
For large or complex migrations, unsupported scenarios, or any questions, we strongly recommend contacting our [Sales team](mailto:sales@octopus.com). We're always happy to help, and we can provide more specific information when you are ready to migrate.

## Benefits of migrating to Octopus Cloud

Before diving into the migration process, it's worth evaluating the benefits of Octopus Cloud. Octopus Cloud is the easiest way to run Octopus Deploy. It has the same functionality as Octopus Server, delivered as a highly available, scalable, secure SaaS application hosted for you. You get the best Octopus experience from the experts in hosting, maintaining, scaling, and securing Octopus Deploy.

We recommend Octopus Cloud over Octopus Server for the following reasons:

- **Minimize downtime and increase resilience** - We handle backups, upgrades, and maintenance, so you don't have to worry about downtime, data loss, or disruptions.
- **Secure and compliant out-of-the-box** - Peace of mind with internationally recognized security standards (ISO 27001 and SOC II certifications), ensuring business compliance and protecting your reputation.
- **Faster feature updates** - Automatic upgrades to the latest version of Octopus give you access to improvements, bug fixes, security enhancements, and new features as they ship.
- **Effortlessly scale your deployments** - Your teams can scale their use without the hassle of resource management and additional infrastructure costs.
- **Cost efficiency** - Reduce infrastructure and operational costs.

In short, Octopus Cloud offloads your maintenance burden and provides the best experience for the majority of our customers. However, if your organization primarily uses self-hosted tools, you may encounter some challenges enabling connectivity between Octopus Cloud and other resources and tools within your ecosystem.
If you're uncertain whether Octopus Cloud is the right choice for your organization, contact our [Sales team](mailto:sales@octopus.com) to discuss your needs and determine the best fit.

## Migration assessment and planning

### Estimating Migration Effort

Before you start planning your migration, it's worth setting some expectations upfront about the level of effort involved. No two instances are identical, so this doesn't cover every possible scenario. Based on our experience, the figures here are estimates you can use to plan how long your own migration may take.

| **Instance Size** | **Characteristics** | **Effort** |
| ----------------- | ------------------- | ---------- |
| Small and/or simple | • 10 or fewer projects<br>• 10 or fewer deployment targets<br>• Integrations with cloud-based products only<br>• No config as code | Migration typically takes 1-3 days with minimal manual configuration. |
| Medium | • 10–50 projects<br>• Integrations with a mix of self-hosted and cloud-based products<br>• Fewer than 100 deployment targets | Migration requires thorough planning and testing, and may take multiple weeks. |
| Large or complex | • 50+ projects<br>• Advanced configurations<br>• Integrations primarily with other self-hosted tools<br>• More than 100 deployment targets | Migrations may take several weeks or months of preparation, testing, and execution. |

#### Effort factors to refine your estimate

Use this checklist to gauge the complexity of your migration. The more effort factors you need to resolve, the longer or more complex your migration will be.

| **Question** | **Considerations** |
| ------------ | ------------------ |
| What version of Octopus are you using? | You should run the most recent release before migration to minimize feature differences with Octopus Cloud. At a minimum, [Export/Import Projects](https://octopus.com/docs/projects/export-import) is only available in Octopus 2021.1.x or higher. |
| How many projects do you have? | The more projects you have, the more time your migration is likely to take. |
| How many runbooks do you have? | Runbooks may need to be updated manually where worker pools have changed, and sensitive values or variables are used. |
| How many Tentacles do you manage? | You'll need to achieve connectivity between Octopus Cloud and all the Tentacles you use.<br><br>We also recommend converting Listening Tentacles to Polling Tentacles for a Cloud instance. |
| Are all the resources you need access to reachable from Octopus Cloud?<br>• Private, self-hosted package repositories (Artifactory, Nexus, etc)<br>• On-prem Listening Tentacle or SSH targets and workers<br>• An internal SMTP server<br>• Other on-prem integrations that are not reachable from Octopus Cloud | You'll need to achieve connectivity between Octopus Cloud and all the integrations and resources you use.<br><br>How are firewall and VPN configurations typically managed to ensure secure connections? Do you anticipate any network, firewall, or VPN configuration updates will be needed to support this migration? |
| Do you have specific security requirements or policies around accessing deployment targets or managing API keys? | You will need to manage these through the migration. |
| Do you need to retain historical data and task logs? | Historical data and task logs are **not supported by Export/Import Projects**. You will need to continue to host the older version to retain historical data. |
| How long are your audit log retention policies? | Octopus Cloud archives audit logs for **a maximum of 365 days**. |
| Do you store build information? | You will need to continue to host the older version to retain historical data, as build information **is not migrated**. |
| Do you have any subscriptions? | Subscription migration is **not supported by Export/Import Projects**. These will need to be migrated manually or with a script. |
| How many variable sets do you use? | Variable sets should be named uniquely. When importing, if a variable set with the same name already exists, the variables will be merged. If a variable in the export doesn't exist on the destination, it will be created. If a variable with the same name and scopes already exists, the variable on the destination will be left untouched. |
| Do you use project triggers? | Project trigger migration is **not supported by Export/Import Projects**. These will need to be migrated manually or with a script. |
| Are you using the built-in package repository? | Package repository migration is supported via [this script](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/REST/PowerShell/Feeds/SyncPackages.ps1). |
| Do any Projects use the Built-in Octopus Worker? | The Built-in Octopus Worker is **not a feature available in Octopus Cloud**. |
| Are you using any Targets with older, unsupported operating systems (RHEL 6, Windows 2008, etc)? | Octopus Cloud is the most up-to-date version of Octopus, and Targets running older operating systems are **not supported**. |
| Are you using Active Directory or LDAP authentication? | Active Directory or LDAP authentication is **not a feature available in Octopus Cloud**. |
| Do you have any ITSM integrations or other automated processes? | Automated processes and shared settings, such as workflows for routine tasks or usage of library variable sets across projects, need to be set up again after migration. This includes common scripts, standardized settings, or configurations supporting multiple projects. |
| What are your cutover requirements? Can you stagger the migration or incur downtime? | A big-bang migration, where everything is transitioned simultaneously, is harder than an incremental approach, where you migrate project-by-project or in defined phases. |

If your instance includes **unsupported features** or matches several of the **effort factors**, migration complexity increases. For these cases, our [Sales team](mailto:sales@octopus.com) can help you identify workarounds, plan for manual adjustments, or determine if Octopus Cloud is the right fit for you.

### Migration approach: self-serve or supported?

If you're confident Octopus Cloud is right for you, and you have a relatively straightforward migration path, we encourage you to use this guide to get started and wish you a speedy and smooth migration.

If you're uncertain whether Octopus Cloud is the right choice for your organization or there are complicating factors in your migration, we recommend you contact our [Sales team](mailto:sales@octopus.com) to discuss your needs and determine the best fit.
We can discuss several options, from extending your trial during a longer migration to connecting you with a professional services partner who can help you complete the migration. #### Pilot project migration If you need more precise estimates or to understand the complexity of the challenges you may face, migrate a pilot project to see how long one project takes. You will get more efficient with each additional project, but the pilot gives you a solid baseline for estimating the total effort across your remaining projects. ## Self-directed migration using Export/Import ### Overview :::div{.hint} This guide uses the **[Export/Import Projects](https://octopus.com/docs/projects/export-import)** feature as the recommended approach to migrating to Octopus Cloud. ::: Using this guide as a basis for your migration, your project will roughly break down into the following phases: 1. Preparation 1. Proof of Concept Deployments 1. Migration 1. Clean up and decommission We’ll step through each of these phases and provide pointers and tips to ensure a successful outcome at each stage. :::div{.problem} **Historical Data is not included in the migration.** The Export/Import Projects feature will create releases and “shells” of your deployments. The releases are created so you can promote existing releases through your environments. The deployments are created because lifecycle conditions need to be met prior to those releases being promoted. We recommend making a backup of your self-hosted database including historical data should you need it for audit and compliance purposes, as the deployments will **not** include: - Task Log (the deployment screen will be blank) - Artifacts - Task History (including, but not limited to): - Who created the deployment - When the deployment was created - When the deployment started - When the deployment finished - Guided Failure logs - Manual Intervention logs - Audit History - Event Exports ::: ## 1.
Preparation **Before starting your migration to Octopus Cloud, you will need to address the following:** 1. Understanding the differences between Octopus Cloud and your Octopus Server. 1. Upgrading your Octopus Server instance to the latest release of Octopus Deploy. 1. Determining whether you need to convert your [Listening tentacles to Polling Tentacles](https://octopus.com/docs/infrastructure/deployment-targets/tentacle/tentacle-communication) for your deployment targets and workers. 1. Creating your Octopus Cloud instance. 1. Configuring any firewall settings for your tentacles. 1. Configuring workers and worker pools. 1. Testing external package repository connectivity. 1. Creating your Octopus Cloud users. ### 1. Differences between Octopus Cloud and Octopus Server Octopus Cloud and Octopus Server are built on the same code base. The differences stem from the additional configuration steps we perform during the Octopus Cloud build. The differences are: | | **Self-host** | **Cloud** | | - | ------------- | --------- | | Upgrades | Quarterly upgrades are made available for you to apply to your instance. | We upgrade your instance continuously | | Infrastructure | Your responsibility. | Our responsibility | | Compliance | Your responsibility. | ISO 27001 and SOC II certifications with regular audits, ensuring your deployments and data are safe and secure | | Roles | Highest level of user privileges is the role of Octopus Administrator | Highest level of user privileges is the role of Octopus Manager | | Auth | | Octopus Cloud does not support Active Directory or LDAP. Please see the [authentication provider compatibility page](https://octopus.com/docs/security/authentication/auth-provider-compatibility) for an up-to-date list of what is available | | Storage limits | Your responsibility. | Octopus Cloud is subject to [storage limits and default retention policies](https://octopus.com/docs/octopus-cloud/#octopus-cloud-storage-limits).

- Maximum file storage for artifacts, task logs, packages, package cache, and event exports is limited to 1 TB.
- Maximum database size for configuration data (for example, projects, deployment processes, and inline scripts) is limited to 100 GB.
- Maximum size for any single package is 5 GB.
- [Retention policies](https://octopus.com/docs/administration/retention-policies) default to 30 days, but you can change this figure as needed.

If any of these limits are a concern for your migration, please reach out to our [Sales team](mailto:sales@octopus.com).
| | Functional differences | | Octopus Cloud does not support running tasks on the server itself. Everything must run on a deployment target or worker. To help, Octopus Cloud includes [dynamic worker pools](https://octopus.com/docs/infrastructure/workers/dynamic-worker-pools) with both Windows and Linux workers. | Before starting your migration, please ensure you are familiar with these fundamental differences (and limitations). Depending on your requirements, Octopus Cloud, in its current form, might not be suitable for you. If any of these limitations are deal-breakers, we’d love to know; please contact our [Sales team](mailto:sales@octopus.com). We are constantly improving Octopus Cloud; a current limit has a strong likelihood of changing in the future. ### 2. Upgrading your Octopus Server instance to the latest release of Octopus Deploy You must be running Octopus **2021.1.x** or higher to leverage the [Export/Import Projects](https://octopus.com/docs/projects/export-import) feature in order to migrate your projects. We recommend upgrading to the latest version of Octopus Deploy prior to starting your migration. ### 3. Determine whether you need to convert your [Listening tentacles to Polling Tentacles](https://octopus.com/docs/infrastructure/deployment-targets/tentacle/tentacle-communication) for your deployment targets and workers Listening Tentacles require an inbound connection from Octopus Cloud to your infrastructure. Listening Tentacles must have a public hostname or IP address that Octopus Cloud can see. Polling Tentacles require an outbound connection from your infrastructure to Octopus Cloud. Because of that difference, our customers tend to use Polling Tentacles. ### 4. Create your Octopus Cloud instance The remaining prep work involves testing connectivity. If you haven't already, you will need to create a [free Octopus account](https://octopus.com/free-signup) now. ### 5.
Firewall settings for your Tentacles Regardless of your tentacle communication mode, ensure you have the appropriate firewall rules configured. The default rules are: - Listening Tentacle - Port `443` outbound (to register the tentacle with Octopus Cloud) - Port `10933` inbound (communications) - Polling Tentacle - Port `443` outbound (to register the tentacle with Octopus Cloud) - Port `10943` or `443` outbound (communications). [We recommend using Port 10943](https://octopus.com/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443). :::div{.hint} Our recommendation is to create a test server in each of your data centers, install a tentacle on it with the desired communication mode, and register it with Octopus Cloud. Work out any firewall configuration issues before starting the migration. ::: ### 6. Configuring Workers and Worker Pools Octopus Cloud does not support running steps directly on the server. Instead, we provide each Octopus Cloud instance with [dynamic workers](https://octopus.com/docs/infrastructure/workers/dynamic-worker-pools). The dynamic workers are there to help get you started but have the following limitations: - Dynamic workers cannot see any of your internal infrastructure. That includes file shares, database servers, and internal load balancers. - Dynamic workers cannot see any cloud infrastructure behind a firewall or on a virtual network with restricted access. That includes K8s clusters, database servers, file shares, load balancers, and more. - Dynamic workers have a max life of **72 hours**. While you can install software on a dynamic worker, your deployment process will need to ensure any required software is installed at the start of each deployment. Our recommendation is to: 1. Create a worker pool (or pools) per local data center or cloud provider. For example, if you have a data center in Omaha and are using AWS, you’d have two worker pools, one for your Omaha data center and another for AWS. 1. 
If you already use or are comfortable using Kubernetes, we recommend the Kubernetes worker: a scalable worker designed to make efficient use of compute when running multiple deployment tasks. To use the Kubernetes worker, install it once on a cluster and [configure autoscaling](https://octopus.com/blog/kubernetes-worker).
If you cannot use Kubernetes, create virtual machines and install tentacles as workers for each worker pool. For redundancy, we recommend a minimum of two (2) workers per worker pool. Install any required software on each worker. 1. Consider leveraging [execution containers](https://octopus.com/docs/projects/steps/execution-containers-for-workers). If you use Kubernetes worker, these containers will run as pods on a cluster. If you use virtual machines, the containers will run in Docker. 1. Change your deployment and runbook processes to target the appropriate worker pool. You can leverage [worker pool variables](https://octopus.com/docs/projects/variables/worker-pool-variables) to change worker pools per environment. Ensure all deployments and runbooks work as expected. :::div{.warning} Please do not skip step 4. Completing it ensures you start your migration in a known good state. If you only switch to workers after migrating to Octopus Cloud, you will have changed two things at once (workers and your instance), making it much harder to troubleshoot. ::: ### 7. Testing External Package Repository Connectivity If you use an external package repository, such as a self-hosted Artifactory instance, you’ll need to test that Octopus Cloud can see and connect to it. You might have to expose that server to the internet, or leverage a [proxy server](https://octopus.com/docs/infrastructure/deployment-targets/proxy-support/#external-nuget-feed). ### 8. User migration The project export/import feature does not include users. All users must be created from scratch. If you are using an external authentication provider, such as Azure AD or Okta, you can turn on the [Automatic user creation](https://octopus.com/docs/security/authentication/auto-user-creation) feature. ## 2. Migration The migration will use the **Export/Import Projects** feature. This feature was specifically designed for [migrating from Octopus Server to Octopus Cloud](https://octopus.com/docs/projects/export-import).
Our recommendations when using this tool are: - Migrate using a phased approach rather than migrating everything at once. Migrate a project group or suite of applications to Octopus Cloud, test some deployments, then move on to the next batch. - The first couple of projects will take more time as you work through any configuration issues. As such, pick some non-mission-critical projects or applications first. Your process for each project or application will generally follow these steps: 1. **Export and Import** the project from your Octopus Server instance into your Octopus Cloud instance. 1. Upload any packages, project images, and reconfigure triggers. 1. Copy or Create deployment targets. 1. Update your build or CI server to connect to Octopus Cloud for that application. 1. Test to ensure the migration was successful, and your deployment targets are online. 1. Disable the project in your Octopus Server instance. We recommend choosing an “off-cycle” or “slow time” whenever possible to keep any potential impact to a minimum. The last thing you want is to change your deployment process in the middle of a project with a tight deadline. :::div{.hint} Following this approach, you will have a time period with both an Octopus Server instance and an Octopus Cloud instance. ::: ### 1. Export and import the project Follow the instructions on the [exporting and importing page](https://octopus.com/docs/projects/export-import) to export and import a project. Make a note of what is *not* exported. Releases and deployments are exported, but only as “shells” (not the full deployment) so that any pre-existing releases can still be promoted. ### 2. Upload any packages, project images, and reconfigure triggers As stated on the [export and import page](https://octopus.com/docs/projects/export-import/#what-is-imported), packages, project images, and project triggers are **not exported**.
If you have any pre-existing releases you intend to promote and you use the internal package feed, you’ll need to manually upload the packages associated with those releases. You will also have to upload project images and reconfigure any triggers. ### 3. Copy or Create Deployment Targets A Windows or Linux server can have [1 to N tentacle instances](https://octopus.com/docs/administration/managing-infrastructure/managing-multiple-instances). Our recommendation is to create a second tentacle instance on your server. 1. Original Tentacle Instance -> connects to your Octopus Server. 1. New Tentacle Instance -> connects to Octopus Cloud. We have a [script to help create](https://github.com/OctopusDeployLabs/SpaceCloner/blob/main/docs/UseCase-CopyExistingTentacles.md) a cloned tentacle instance pointing to Octopus Cloud. You can copy a listening tentacle as a polling tentacle, a polling tentacle as a polling tentacle, or a listening tentacle as a listening tentacle. :::div{.hint} That script requires PowerShell 5.1 or greater for Windows. We recommend PowerShell 7. ::: That script only works for servers running tentacles. Any other deployment targets, such as Azure Web Apps, Kubernetes clusters, or SSH targets, will need to be manually recreated. ### 4. Update your build server For the project(s) you have migrated, update the corresponding build configurations in your build server. Updating the build server will typically involve: - Ensuring you have the latest build server plug-in installed. - Updating the Octopus URL - Updating the Octopus API Key ### 5. Test the migration After the build server has been updated, create a small change to trigger your CI/CD pipeline for that application to test: - The build server to Octopus Cloud connection is working. - Octopus Deploy can connect to your deployment targets and workers. - Octopus Deploy can successfully deploy to your deployment targets. ### 6.
Disable the project in your Octopus Server instance Disabling a project prevents it from creating and deploying releases. It is also an excellent signal to all Octopus users that the project has been migrated. - Go to **Project Settings** - Click the overflow menu (`...`) - Select **Disable** on the menu If anything goes wrong immediately after the migration, you can re-enable this project so your application can still be deployed while troubleshooting the migration. ## 3. Clean up & decommission ### 3.1 Decommission your Octopus Server instance Eventually, you will migrate all your projects over to Octopus Cloud. When that day comes, we recommend [turning on maintenance mode](https://octopus.com/docs/administration/managing-infrastructure/maintenance-mode/) and setting the [task cap to 0](https://octopus.com/docs/support/increase-the-octopus-server-task-cap) on your Octopus Server. That will make your Octopus Server read-only. No new deployments will be triggered. Keep this running for a short while to review any old audit logs. At this point, we recommend deleting all the tentacle instances still pointing to your Octopus Server instance. You can run this script in the [script console](https://octopus.com/docs/administration/managing-infrastructure/script-console) to delete the original tentacle instance. Please test this on a few non-production servers first.

```powershell
& "C:\Program Files\Octopus Deploy\tentacle\tentacle.exe" delete-instance --instance="Tentacle"
```

In our experience, most people turn off their Octopus Server within three to six months. When you decide to turn off your Octopus Server, first take a full backup of the database and delete all the appropriate resources. ## Older versions - The **Export/Import Projects** feature is available from Octopus Deploy **2021.1** onwards. - Prior to version **2025.2.5601**, Config-as-Code projects were not supported by the **Export/Import Projects** feature.
## No longer offered or supported Please note that our existing [Migration API](https://octopus.com/docs/octopus-rest-api/migration-api) is **not supported** for migrations to cloud instances due to configuration differences between self-hosted and cloud installations. The legacy [Data Migration](https://octopus.com/docs/administration/data/data-migration) included with Octopus Deploy is **not supported** for migrations to cloud instances. That tool is a Windows command-line application that must be run directly on the server hosting Octopus Deploy via an RDP session. Octopus Cloud runs on our Linux container image on a Kubernetes cluster, and therefore access to the container is not permitted for security reasons. # octopus build-information Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-build-information.md Manage build information in Octopus Deploy

```text
Usage:
  octopus build-information [command]

Aliases:
  build-information, build-info

Available Commands:
  bulk-delete  Bulk delete build information
  delete       Delete a build information
  help         Help about any command
  list         List build information
  upload       upload build information for one or more packages to Octopus Deploy
  view         View a build information

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations

Use "octopus build-information [command] --help" for more information about a command.
```

## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus build-information upload
```

## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Certificates Source: https://octopus.com/docs/octopus-rest-api/examples/certificates.md You can use the REST API to create and manage your [certificates](/docs/deployments/certificates) in Octopus. Typical tasks can include: - [Create a certificate](/docs/octopus-rest-api/examples/certificates/create-certificate) - [Replace existing certificate](/docs/octopus-rest-api/examples/certificates/replace-certificate) # Working directly with the Client Source: https://octopus.com/docs/octopus-rest-api/octopus.client/using-client-directly.md For some operations not available through [repositories](/docs/octopus-rest-api/octopus.client/using-resources), it will be necessary to use the `IOctopusClient` type:
PowerShell

```powershell
$connection = $repository.Client.Get($machine.Links["Connection"]);
```
C#

```csharp
// Sync
var connection = repository.Client.Get(machine.Links["Connection"]);

// Async
var connection = await client.Get(machine.Links["Connection"]);
```
The entire API is accessible by traversing links: each resource carries a collection of links, like the `Connection` link on `MachineResource` shown above. :::div{.warning} Always access objects by traversing the links; avoid using direct URL segments, as they may change in the future. ::: To start traversing links, the `IOctopusClient.RootDocument` is provided:
PowerShell

```powershell
$link = $repository.Client.RootDocument.Links["CurrentUser"].ToString()
$method = $repository.Client.GetType().GetMethod("Get").MakeGenericMethod([Octopus.Client.Model.UserResource])
$me = $method.Invoke($repository.Client, @($link, $null))
```
C#

```csharp
// Sync
var me = repository.Client.Get<UserResource>(repository.Client.RootDocument.Links["CurrentUser"]);

// Async
var me = await client.Get<UserResource>(client.RootDocument.Links["CurrentUser"]);
```
*(This is only an example. This common operation is also available via `repository.Users.GetCurrent()`.)* # Check services Source: https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/checkservices.md The `checkservices` command checks the Octopus Server instances to see if they are running and starts them if they're not. The [watchdog](/docs/administration/managing-infrastructure/service-watchdog) command sets up a scheduled task that calls `checkservices`. **Check Services options**

```
Usage: octopus.server checkservices [<options>]

Where [<options>] is any of:

  --instances=VALUE    Comma-separated list of instances to check, or * to check all instances

Or one of the common options:

  --help               Show detailed help for this command
```

## Basic example This example checks whether all instances on the machine are running and starts them if they are not:

```
octopus.server checkservices --instances=*
```

# Using OpenID Connect with the Octopus API Source: https://octopus.com/docs/octopus-rest-api/openid-connect.md Octopus supports using [OpenID Connect (OIDC)](https://openid.net/) to access the Octopus API without needing to provision API keys. :::div{.hint} Using OIDC to access the Octopus API is intended for machine-to-machine scenarios such as automating release creation in CI servers. See [authentication providers](/docs/security/authentication) for information on configuring user authentication into Octopus Deploy. ::: ## What is OpenID Connect and how is it used in Octopus? OpenID Connect is a set of identity specifications that build on OAuth 2.0 to allow software systems to connect to each other in a way that promotes security best practices. When using OIDC, Octopus will validate an identity token coming from a trusted external system using [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography) and issue a short-lived access token which can then be used to interact with the Octopus API.
Some of the benefits of using OIDC in Octopus include: - API keys do not need to be provisioned and stored in external systems, reducing the risk of unauthorized access to the Octopus API from exposed keys. - API keys do not need to be rotated manually by administrators, reducing the risk of disruption when updating to newer keys in external systems. - Access tokens issued by Octopus are short-lived, reducing the risk of unauthorized access to the Octopus API. - Access tokens are only issued for requests from trusted external systems, allowing for controlled access to service accounts and promoting the principle of least access. :::div{.hint} Using OIDC to access the Octopus API is only supported for service accounts; to access the API for a user account, please use [an API key](/docs/octopus-rest-api/how-to-create-an-api-key). ::: Any issuer that can generate signed OIDC tokens which can be validated anonymously is supported; however, first-class support for GitHub Actions is provided with the [`OctopusDeploy/login`](https://github.com/OctopusDeploy/login) action. ## Getting started with GitHub Actions Follow the guide below to get started using OIDC with GitHub Actions. For more complex scenarios, or for a full list of available options, see [Using OpenID Connect with Octopus and GitHub Actions](/docs/octopus-rest-api/openid-connect/github-actions). ### Create an OIDC identity for a service account The first step is to create an OIDC identity for your GitHub repository to allow workflow runs to access the Octopus API. 1. Go to Configuration -> Users and either create a new service account or locate an existing one. 2. Open the OpenID Connect section. 3. Click the New OIDC Identity button. 4. Select GitHub Actions as the issuer type. 5. Enter the details of your repository and how you want to filter the workflow runs that can authenticate using OIDC. 6. Click Save.
:::div{.hint} Multiple OIDC identities can be added for a service account; these could be for workflow runs from the same repository, or separate repositories, depending on your needs. ::: :::figure ![OIDC Identity for GitHub Actions](/docs/img/octopus-rest-api/images/oidc-identity-github-actions.png) ::: ### Add the `OctopusDeploy/login` action to your workflow After the OIDC identity for GitHub Actions has been created, a snippet of the `OctopusDeploy/login` step will be provided which you can use in your workflow to configure the workflow run job to use OIDC authentication. :::figure !['OctopusDeploy/login' snippet](/docs/img/octopus-rest-api/images/oidc-github-actions-details.png) ::: 1. Click Copy to Clipboard to copy the `OctopusDeploy/login` step. 2. Paste the `OctopusDeploy/login` step into your workflow job. 3. Add `id-token: write` to the `permissions` on the workflow job. This is required to allow the `OctopusDeploy/login` action to request an OIDC token from GitHub to use. :::div{.hint} When `permissions` are specified on a workflow job, any built-in permissions for the job are reset. This means that some existing steps in your workflow may now require setting explicit permissions in order to work correctly. For example, to check out source code using the `actions/checkout` action, you will need to add `contents: read` to the permissions. For more information see [Assigning permissions to jobs](https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs/). ::: 4. Add any additional Octopus-provided GitHub Actions that you require, e.g. [`OctopusDeploy/create-release-action`](https://github.com/OctopusDeploy/create-release-action). These actions will automatically work with OIDC. Any script steps that use the `octopus` CLI will also automatically work with OIDC.
When the workflow runs, the `OctopusDeploy/login` action will authenticate with Octopus using OIDC and configure the remainder of the workflow job to work without needing to provide the `server` or `api_key` values.

```yaml
name: Create a release in Octopus

on:
  push:
    branches:
      - main

jobs:
  create_release:
    runs-on: ubuntu-latest
    name: Create a release in Octopus
    permissions: # Add any additional permissions your job requires here
      id-token: write # This is required to obtain the ID token from GitHub Actions
      contents: read # For example: this is required to check out code, remove if not needed
    steps:
      - name: Log into Octopus
        uses: OctopusDeploy/login@v1
        with:
          server: https://my.octopus.app
          service_account_id: 5be4ac10-2679-4041-a8b0-7b05b445e19e

      - name: Create Octopus release
        uses: OctopusDeploy/create-release-action@v3
        with:
          space: Default
          project: MyOctopusProject
```

## Getting started with other issuers Follow the guide below to get started using OIDC with other issuers. For more complex scenarios, or for a full list of available options, see [Using OpenID Connect with Other Issuers](/docs/octopus-rest-api/openid-connect/other-issuers). ### Create an OIDC identity for a service account The first step is to create an OIDC identity for your issuer to access the Octopus API. 1. Go to Configuration -> Users and either create a new service account or locate an existing one. 2. Open the OpenID Connect section. 3. Click the New OIDC Identity button. 4. Select Other Issuer as the issuer type. 5. Enter the URL of the identity. Octopus uses OpenID Configuration Discovery to validate the OIDC token provided by the issuer. 1. The issuer URL must be HTTPS. 2. The URL should be the base where the OIDC Discovery endpoint (`/.well-known/openid-configuration`) can be found. For example, if the discovery endpoint is `https://my-oidc-issuer.com/.well-known/openid-configuration` then the issuer should be set to `https://my-oidc-issuer.com`. 6.
Enter the subject of the identity. This must exactly match the subject provided in the OIDC token and is _case-sensitive_. The format of the subject will differ by issuer; please consult your OIDC issuer's documentation. 7. Click Save. :::div{.hint} Multiple OIDC identities can be added for a service account. ::: :::figure ![OIDC Identity for other issuer](/docs/img/octopus-rest-api/images/oidc-identity-other-issuer.png) ::: ### Exchange an OIDC token for an Octopus access token After the OIDC identity has been created it can be used as part of exchanging an OIDC token for an Octopus access token. A Service Account Id will be shown; this is a GUID which must be supplied as the `aud` of the ID token, as well as in the token exchange request. :::figure ![Other issuer audience details](/docs/img/octopus-rest-api/images/oidc-other-issuer-details.png) ::: 1. Obtain an OIDC token from the issuer; the `aud` claim must be the Service Account Id. The process for obtaining the OIDC token will differ by issuer; please consult your OIDC issuer's documentation. 2. Get the token exchange endpoint for your Octopus Server from the `token_endpoint` property of the OpenID Connect Discovery endpoint `https://my-octopus-server.com/.well-known/openid-configuration`. 3. Exchange the OIDC token for an Octopus access token, setting the `audience` property to the Service Account Id from above. See [Exchanging an OIDC token for an Octopus access token](/docs/octopus-rest-api/openid-connect/other-issuers#OidcOtherIssuers-TokenExchange) for more details on the token exchange request. 4. Get the `access_token` from the token exchange response. ### Using the access token to access the Octopus API The access token obtained from the token exchange must be supplied in the `Authorization` header of API requests, using the `Bearer` scheme, for example `Authorization: Bearer {the-access-token}`.
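The exchange steps above can be sketched in Python. This is an illustrative outline, not an official client: the field names follow the standard OAuth 2.0 token exchange grant, and the `subject_token_type` value, server URL, and ids used here are assumptions you should confirm against the token exchange documentation linked above.

```python
import urllib.parse

def token_exchange_payload(oidc_token: str, service_account_id: str) -> dict:
    # Form body for the token exchange request POSTed to the server's
    # token_endpoint (discovered from /.well-known/openid-configuration).
    # Field names follow the OAuth 2.0 token exchange grant (RFC 8693);
    # the subject_token_type value is an assumption to verify.
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "audience": service_account_id,   # the Service Account Id (a GUID)
        "subject_token": oidc_token,      # the OIDC ID token from your issuer
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    }

def bearer_header(access_token: str) -> dict:
    # The access_token from the response goes in the Authorization header
    # of subsequent API requests using the Bearer scheme.
    return {"Authorization": f"Bearer {access_token}"}

# URL-encoded body, ready to POST to the discovered token_endpoint:
body = urllib.parse.urlencode(
    token_exchange_payload("eyJhbGciOiJQUzI1NiIs...", "834a7275-b5b8-42a1-8b36-14f11c8eb55e")
)
```

Sending the request itself requires network access to your Octopus Server, so it is omitted here; any HTTP client that can POST a form-encoded body will do.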
## Using the Octopus CLI with OIDC The [Octopus CLI](https://github.com/OctopusDeploy/cli) supports a `login` command which can be used to authenticate using OIDC, providing the Octopus Server URL, the id of the service account, and the ID token from your OIDC provider. This can be used as part of your CI server workflows where you are using the CLI but currently provisioning an API key. After authenticating using OIDC, the `login` command will configure the CLI environment to be used. Usage:

```
octopus login --server {OctopusServerUrl} --service-account-id {ServiceAccountId} --id-token {IdTokenFromProvider}
```

For example:

```
octopus login --server https://my.octopus.app --service-account-id 834a7275-b5b8-42a1-8b36-14f11c8eb55e --id-token eyJhbGciOiJQUzI1NiIs...
```

## Validation of OIDC identity tokens When an OIDC identity token from an external system is received as part of a token exchange request, Octopus will validate this token before issuing an access token. It does this by: - Matching the details of the token to an OIDC identity on an Octopus [service account](/docs/security/users-and-teams/service-accounts) using the audience (`aud`), issuer (`iss`) and subject (`sub`). - Obtaining the public keys that can be used to verify the signed token using the OIDC Discovery endpoint (`/.well-known/openid-configuration`) of the issuer. For example, an issuer URL `https://my-oidc-issuer.com` will use the `https://my-oidc-issuer.com/.well-known/openid-configuration` endpoint to locate the URL for signing keys. - Verifying the token is signed correctly using public key cryptography to ensure that it has not been tampered with in transit and comes from the expected issuer.
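The matching and discovery steps above can be sketched as follows. This is a minimal illustration of how the `aud`, `iss`, and `sub` claims line up with a configured OIDC identity and how the discovery URL is derived from the issuer; it is not Octopus's actual implementation, and the `identity` field names are hypothetical.

```python
def discovery_url(issuer: str) -> str:
    # The OIDC Discovery document lives at a well-known path under the issuer URL.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def matches_identity(claims: dict, identity: dict) -> bool:
    # A token is matched to an OIDC identity on a service account by its
    # audience (aud), issuer (iss) and subject (sub) claims; the subject
    # comparison here is exact and case-sensitive.
    return (
        claims.get("aud") == identity["service_account_id"]
        and claims.get("iss") == identity["issuer"]
        and claims.get("sub") == identity["subject"]
    )

identity = {
    "service_account_id": "834a7275-b5b8-42a1-8b36-14f11c8eb55e",
    "issuer": "https://my-oidc-issuer.com",
    "subject": "repo:AcmeOrg/MyRepo:ref:refs/heads/main",
}
```

The signing keys would then be fetched from the `jwks_uri` advertised by `discovery_url(identity["issuer"])` to verify the token's signature.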
### Debugging validation issues If you are encountering issues validating identity tokens from your OIDC provider as part of a token exchange request, you can use the following to help diagnose the issue: - Check the audience (`aud`), issuer (`iss`) and subject (`sub`) of the token match the configured OIDC identity on the Octopus service account. - The audience must be the id of the service account and will be a GUID. - The issuer must be a URL using the HTTPS scheme. - The subject must match the configured subject on the OIDC identity and is _case-sensitive_. Wildcard characters are supported in the subject: `*` and `?` for multiple and single character matches respectively. - If you are making the token exchange request manually (e.g. using an [issuer other than GitHub Actions](/docs/octopus-rest-api/openid-connect/other-issuers)), check that the required fields are set correctly. See [Exchanging an OIDC token for an Octopus access token](/docs/octopus-rest-api/openid-connect/other-issuers#OidcOtherIssuers-TokenExchange) for more information on the request format. - Check that the token has not expired (`exp`). Identity tokens created by OIDC providers often have a short lifetime. - Check that the token is signed by a valid key from the issuer. Signing keys may be invalidated by providers under some circumstances. - Check that the public key used to sign the token is available using [OpenID discovery](https://openid.net/specs/openid-connect-discovery-1_0.html). - The OpenID discovery endpoint must be available at `{Issuer}/.well-known/openid-configuration` - This endpoint must return a `jwks_uri` property with a URL where the public key used to sign the token can be obtained. There could be multiple keys returned by this endpoint; each key can be identified using the `kid` property. - Both of these endpoints must be publicly accessible without requiring authorization.
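The wildcard subject matching described above (`*` for multiple characters, `?` for a single character, case-sensitive) can be approximated with Python's `fnmatch` module. This is a sketch only: `fnmatch` additionally honors `[...]` character classes, which Octopus's matcher may not.

```python
from fnmatch import fnmatchcase

def subject_matches(pattern: str, subject: str) -> bool:
    # Approximates the wildcard rules above: * matches any run of characters,
    # ? matches a single character, and matching is case-sensitive.
    # (fnmatch also supports [...] character classes, which may differ
    # from Octopus's matcher.)
    return fnmatchcase(subject, pattern)

# A GitHub Actions-style subject where only the branch component is wildcarded:
pattern = "repo:AcmeOrg/MyRepo:ref:*"
```

Keeping the wildcard scoped to one component of the subject, as in `pattern` above, limits which tokens can match.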
:::div{.warning} Although the subject field does support wildcards, we recommend providing as explicit a value as possible to reduce the risk of malicious requests resulting in a subject match. For example, if you are generating OIDC tokens from GitHub Actions and want to match against any branch in your project repository, ensure your wildcard covers just the branch component of the subject: `repo:AcmeOrg/MyRepo:ref:*`. Otherwise, providing a single blanket `*` wildcard character means that any token request (with a matching `service_account_id`) from a GitHub Action in any organization could result in a match and an Octopus access token being issued. ::: :::div{.hint} Public sites such as [jwt.io](https://jwt.io/) can be used to inspect and validate identity tokens. IMPORTANT: Identity tokens can be exchanged with your Octopus Server for an access token, so be careful where you paste them! ::: ## Access tokens When an OIDC token from a trusted external system is validated, Octopus will issue an access token. This token is a JSON Web Token (JWT) which is cryptographically signed by the Octopus Server, allowing it to be validated to ensure it is a legitimate token that was issued from the correct system and hasn't been tampered with. The token is short-lived (1 hour) and cannot be used after it has expired, reducing the impact that stolen credentials could have. ### How tokens are signed Access tokens are signed using [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Octopus securely maintains a private and public key pair, and signs the token using the private key, which only the Octopus Server can use. The token can then be validated using the public key to ensure that it is legitimate. Access tokens are signed with RSA keys with a key length of 2048 bits, using the [RSASSA-PSS (PS256) algorithm](https://www.rfc-editor.org/rfc/rfc8017#section-8.1).
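To see which key signed a given token, a consumer can decode the token's header segment locally and look up its `kid` in the issuer's JWKS document. A minimal sketch, with a fabricated header and JWKS inlined so it runs offline (a real JWKS entry also carries the RSA modulus and exponent needed for actual signature verification):

```python
import base64
import json

def jwt_header(token: str) -> dict:
    """Decode the header segment of a JWT (no signature verification)."""
    segment = token.split(".")[0]
    segment += "=" * (-len(segment) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(segment))

def key_for_kid(jwks: dict, kid: str) -> dict:
    """Pick the signing key whose kid matches the token header's kid."""
    for key in jwks.get("keys", []):
        if key.get("kid") == kid:
            return key
    raise KeyError(f"no signing key with kid {kid!r}")

# Fabricated header and JWKS document, purely for illustration:
header_segment = base64.urlsafe_b64encode(
    json.dumps({"alg": "PS256", "kid": "key-2024-01"}).encode()
).rstrip(b"=").decode()
token = header_segment + ".payload.signature"
jwks = {"keys": [{"kid": "key-2024-01", "kty": "RSA", "alg": "PS256"}]}
```

In practice the JWKS document is fetched from the `jwks_uri` advertised by the discovery endpoint rather than inlined.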
The keys used to sign access tokens are automatically rotated every 90 days, and a new key is used to sign tokens. Once a key has been rotated it is no longer used to sign new tokens; however, it continues to be used to validate existing tokens, in order to minimize disruption to the use of existing tokens. Keys are removed after a further 90 days and are no longer used for validation. ### Validating tokens Octopus Server exposes well-known endpoints from the [OpenID discovery specification](https://openid.net/specs/openid-connect-discovery-1_0.html) to make available the public keys that are used to sign access tokens, which can then be used to validate access tokens that the Octopus Server issues. The discovery endpoint can be found at `{OctopusServerUrl}/.well-known/openid-configuration` e.g. `https://my.octopus.app/.well-known/openid-configuration`. The response from this endpoint will contain a `jwks_uri` property which contains the URL at which the public keys can be found. The JWKS endpoint follows the [JWK specification](https://datatracker.ietf.org/doc/html/rfc7517). :::div{.hint} Public sites such as [jwt.io](https://jwt.io/) can be used to inspect and validate access tokens. IMPORTANT: Access tokens are credentials to your Octopus Server in the same way that API keys are, so be careful where you paste them! ::: ## Older Versions - In versions prior to `2.1.0`, the Octopus CLI did not support OpenID Connect. # Using OpenID Connect with Octopus and GitHub Actions Source: https://octopus.com/docs/octopus-rest-api/openid-connect/github-actions.md Octopus has first-class support for using OpenID Connect (OIDC) within GitHub Actions when using the [`OctopusDeploy/login`](https://github.com/OctopusDeploy/login) action. :::div{.hint} Using OIDC to access the Octopus API is only supported for service accounts; to access the API for a user account, please use [an API key](/docs/octopus-rest-api/how-to-create-an-api-key).
::: For more information on OIDC in GitHub Actions, see [Security hardening with OpenID Connect](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect). ## Configuring an OIDC identity for GitHub Actions To configure an OIDC identity for a GitHub Actions workflow: 1. Go to Configuration -> Users and either create a new service account or locate an existing one. 2. Open the OpenID Connect section. 3. Click the New OIDC Identity button. 4. Select GitHub Actions as the issuer type. 5. Enter the details of your repository and how you want to filter the workflow runs that can authenticate using OIDC. 6. Click Save. :::div{.hint} Multiple OIDC identities can be added for a service account; these could be for workflow runs from the same repository or separate repositories, depending on your needs. ::: ### Filtering workflow runs The [`OctopusDeploy/login`](https://github.com/OctopusDeploy/login) action obtains an ID token from GitHub and then exchanges it for an Octopus access token. The ID token that GitHub generates contains a subject (the `sub` property in the ID token), which is generated based on the details of the workflow that is being run. The subject of the OIDC identity in Octopus needs to match this subject exactly in order for the access token to be issued; the Octopus Portal will help you generate this subject correctly. The subject that GitHub Actions generates follows specific rules, including: - Whether a GitHub `environment` is being used within the workflow - The trigger for the workflow run e.g. `pull_request` vs `push` - Whether the GitHub workflow is running for a branch or a tag For more information on the generation of subject claims see [Example subject claims](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#example-subject-claims).
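As a concrete illustration of the shapes these subjects take, the snippet below lists one subject per rule, following the formats GitHub documents; the org, repo, environment, and tag names are invented example values. An identity configured for one shape will not match tokens produced under another.

```python
# Illustrative GitHub Actions subject claims (AcmeOrg/MyRepo, Production,
# and v1 are hypothetical example values, not real resources):
subject_examples = {
    "branch":       "repo:AcmeOrg/MyRepo:ref:refs/heads/main",
    "environment":  "repo:AcmeOrg/MyRepo:environment:Production",
    "pull_request": "repo:AcmeOrg/MyRepo:pull_request",
    "tag":          "repo:AcmeOrg/MyRepo:ref:refs/tags/v1",
}
```

Note that only the branch and tag shapes carry a `ref:` component, which is why a wildcard scoped to `ref:refs/heads/*` stays limited to branch-triggered runs of that repository.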
When configuring an OIDC identity for GitHub Actions you need to choose a filter that matches the workflow you want to use OIDC with. The following options are available, corresponding to the subject claims above that GitHub uses: - Branch: Workflow runs for the specific branch will be allowed to connect using the OIDC identity. The prefix for the git ref does not need to be supplied, e.g. use `main` instead of `refs/heads/main`. - Environment: Workflow runs for the specific GitHub environment will be allowed to connect using the OIDC identity. - Pull Requests: Workflow runs triggered from pull requests will be allowed to connect using the OIDC identity. - Tag: Workflow runs for the specific tag will be allowed to connect using the OIDC identity. The prefix for the git ref does not need to be supplied, e.g. use `v1` instead of `refs/tags/v1`. To match multiple characters in a subject use `*`, and to match a single character use `?`. ### Customized subject claims GitHub supports [customizing the subject claims for an organization or repository](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-subject-claims-for-an-organization-or-repository), allowing other properties to be used in the generated subject of the ID token instead of the standard properties above. When configuring an OIDC identity for GitHub Actions, you can click the Edit icon next to the subject to enter a custom subject matching the generated one from GitHub. :::figure ![Configuring a custom subject for GitHub Actions](/docs/img/octopus-rest-api/images/oidc-identity-github-actions-custom-subject.png) ::: ### GitHub Enterprise (self-hosted) When configuring an OIDC identity for GitHub Actions, by default the issuer URL will be set to the well-known issuer for GitHub Cloud: `https://token.actions.githubusercontent.com`.
If you are using GitHub Actions from self-hosted GitHub Enterprise, you can configure the Issuer URL by clicking the Edit icon and entering the URL. The URL must be HTTPS. :::figure ![Configuring an OIDC identity for self-hosted GitHub Enterprise](/docs/img/octopus-rest-api/images/oidc-identity-github-actions-enterprise.png) ::: ## Using `OctopusDeploy/login` in GitHub Actions workflows The [`OctopusDeploy/login`](https://github.com/OctopusDeploy/login) action provides a first-class way to use OIDC with Octopus in GitHub Actions, exchanging the GitHub ID token for an Octopus access token. Other Octopus actions (e.g. [`OctopusDeploy/create-release-action`](https://github.com/OctopusDeploy/create-release-action)) within the same workflow job will be pre-configured to use this access token, including any use of the [`octopus` cli](https://github.com/OctopusDeploy/cli) in scripts. See the [readme](https://github.com/OctopusDeploy/login) for more information on how to use the action. If you are using multiple jobs within a workflow that interact with Octopus, the login action needs to be added to each job. ### Workflow job permissions To use the [`OctopusDeploy/login`](https://github.com/OctopusDeploy/login) action within a workflow job, the `id-token: write` permission needs to be granted to the job in order to obtain the ID token from GitHub, for example: ```yaml jobs: octopus: permissions: # Add any additional permissions your job requires here id-token: write # This is required to obtain the ID token from GitHub Actions contents: read # For example: this is required to check out code, remove if not needed steps: ... ``` When `permissions` are specified on a workflow job, any built-in permissions for the job are reset. This means that some existing steps in your workflow may now require setting explicit permissions in order to work correctly.
For example, to check out source code using the `actions/checkout` action you will need to add `contents: read` to the permissions. For more information see [Assigning permissions to jobs](https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs/). ## Converting existing Octopus GitHub Actions workflows to use OIDC To convert existing Octopus GitHub Actions workflows that are using API keys to instead use OIDC: - Create an OIDC identity on a service account for the GitHub Action as outlined above. - Copy the `OctopusDeploy/login` snippet that is generated for the service account and add it to the workflow job. - Add `id-token: write` permissions to the workflow job as outlined above. - Remove any existing usage of `server` and `api_key` from other Octopus actions. ### Example The following is an example of a simple GitHub Actions workflow using API keys. ```yaml name: Create a release in Octopus on: push: branches: - main jobs: create_release: runs-on: ubuntu-latest name: Create a release in Octopus steps: - name: Create Octopus release uses: OctopusDeploy/create-release-action@v3 with: server: https://my.octopus.app space: Default project: MyOctopusProject api_key: ${{ secrets.OCTOPUS_API_KEY }} ``` After conversion to use OIDC the workflow looks like: ```yaml name: Create a release in Octopus on: push: branches: - main jobs: create_release: runs-on: ubuntu-latest name: Create a release in Octopus permissions: # Add any additional permissions your job requires here id-token: write # This is required to obtain the ID token from GitHub Actions contents: read # For example: this is required to check out code, remove if not needed steps: - name: Log into Octopus uses: OctopusDeploy/login@v1 with: server: https://my.octopus.app service_account_id: 5be4ac10-2679-4041-a8b0-7b05b445e19e - name: Create Octopus release uses: OctopusDeploy/create-release-action@v3 with: space: Default project: MyOctopusProject ``` ## API keys It is recommended to use OIDC over API
keys due to the benefits it provides; however, the [`OctopusDeploy/login`](https://github.com/OctopusDeploy/login) action also supports using an API key, for scenarios where using OIDC is not available. When using an API key, the remainder of the workflow job will be configured to use the Server URL and API key automatically via environment variables, eliminating the need to supply these to any other Octopus actions or to the `octopus` cli. See the [readme](https://github.com/OctopusDeploy/login?tab=readme-ov-file#api-key) for more information. ## Older Versions - Support for wildcards when matching a subject was added in Octopus 2024.1. # Using OpenID Connect in Octopus with other issuers Source: https://octopus.com/docs/octopus-rest-api/openid-connect/other-issuers.md Octopus supports using OpenID Connect for any external system that can issue a signed OIDC token which can be validated anonymously via an HTTPS endpoint. :::div{.hint} Using OIDC to access the Octopus API is only supported for service accounts; to access the API for a user account, please use [an API key](/docs/octopus-rest-api/how-to-create-an-api-key). ::: ## Configuring an OIDC identity The first step is to create an OIDC identity for your issuer to access the Octopus API. 1. Go to Configuration -> Users and either create a new service account or locate an existing one. 2. Open the OpenID Connect section. 3. Click the New OIDC Identity button. 4. Select Other Issuer as the issuer type. 5. Enter the URL of the identity. Octopus uses OpenID Configuration Discovery to validate the OIDC token provided by the issuer. 1. The issuer URL must be HTTPS. 2. The URL should be the base where the OIDC Discovery endpoint (`/.well-known/openid-configuration`) can be found. For example, if the discovery endpoint is `https://my-oidc-issuer.com/.well-known/openid-configuration` then the issuer should be set to `https://my-oidc-issuer.com`. 6. Enter the subject of the identity.
This must match the subject that is provided in the OIDC token and is _case-sensitive_; the wildcard characters `*` (matching multiple characters) and `?` (matching a single character) can be used. The format of the subject will differ by issuer, so please consult your OIDC issuer's documentation. 7. Optionally enter a custom audience of the identity if required. 8. Click Save. :::div{.hint} Multiple OIDC identities can be added for a service account. ::: :::figure ![OIDC Identity for other issuer](/docs/img/octopus-rest-api/images/oidc-identity-other-issuer.png 'width=500') ::: ## OpenID discovery endpoints Octopus uses [OpenID Configuration Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html) to validate the OIDC token provided by the issuer. The issuer must provide an anonymously accessible endpoint `/.well-known/openid-configuration` which meets the following specifications: - The URL must be secure i.e. it must use HTTPS. - The response must contain the `jwks_uri` property, which must be a URL. The `jwks_uri` endpoint must be an anonymously accessible endpoint which meets the following specifications: - The URL must be secure i.e. it must use HTTPS. - The response must contain a set of signing keys in the [JWK specification](https://datatracker.ietf.org/doc/html/rfc7517) format which can be used to validate the OIDC token from the issuer. ## Exchanging an OIDC token for an Octopus access token {#OidcOtherIssuers-TokenExchange} To exchange the issuer's OIDC token for an Octopus access token, a request can be made to an anonymously accessible endpoint in the Octopus Server. Octopus Server exposes an [OpenID Configuration Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html) endpoint at `/.well-known/openid-configuration`. The response from this endpoint will contain a `token_endpoint` which can be used to perform the exchange.
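The exchange described in the tables that follow can be sketched as a plain HTTP POST. The sketch below only builds the form-encoded body and the resulting `Authorization` header (the network call itself is omitted so the example stays self-contained); the grant type and token type URNs come from the OAuth 2.0 Token Exchange specification, and the service account id is the example value used elsewhere on this page.

```python
from urllib.parse import urlencode

def token_exchange_body(service_account_id: str, oidc_token: str) -> str:
    """Form-encoded body for the OAuth 2.0 Token Exchange request."""
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "audience": service_account_id,
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "subject_token": oidc_token,
    })

def auth_header(access_token: str) -> dict:
    """Authorization header for subsequent Octopus API requests."""
    return {"Authorization": f"Bearer {access_token}"}

# POST this body to the token_endpoint reported by the discovery document with
# Content-Type: application/x-www-form-urlencoded, then read access_token from
# the JSON response and pass it via auth_header() on later API requests.
body = token_exchange_body("863b4b7d-6308-456e-8375-8d9270e9be44", "eyJ...")
```

Note the colons in the URN values are percent-encoded (`%3A`) by `urlencode`, as required for a form-encoded body.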
The token exchange endpoint uses the [OAuth 2.0 Token Exchange](https://www.rfc-editor.org/rfc/rfc8693) specification: | Property | Value | | -------------- | --------------------------------------------------------- | | HTTP Method | POST | | Authentication | None | | Content-Type | `application/x-www-form-urlencoded` or `application/json` | A request to the endpoint requires the following properties: | Property | Value | | -------------------- | ----------------------------------------------------------------- | | `grant_type` | Must be set to `urn:ietf:params:oauth:grant-type:token-exchange`. | | `audience` | The id of the service account to exchange the OIDC token for. | | `subject_token_type` | Must be set to `urn:ietf:params:oauth:token-type:jwt`. | | `subject_token` | The signed OIDC token from the issuer. | If the request is successful, the response will contain the following properties: | Property | Value | | ------------------- | ----------------------------------------------------------------------------------------------------------- | | `access_token` | The Octopus access token which can be used to authenticate API requests. | | `token_type` | A string representing how the token should be passed to API requests. This will always be set to `Bearer`. | | `issued_token_type` | The type of token being issued. This will always be set to `urn:ietf:params:oauth:token-type:access_token`. | | `expires_in` | The number of seconds until the token expires. | If the request is not successful, the response will contain the following properties: | Property | Value | | ------------------- | ---------------------------------------------------------------- | | `error` | The type of error. This will always be set to `invalid_request`. | | `error_description` | A description of the error.
| ### `subject_token` The OIDC token must conform to the [JSON Web Token](https://datatracker.ietf.org/doc/html/rfc7519) standard and contain the following claims: | Claim | Value | Example | | ----- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ | | `iss` | The issuer of the token. This must match exactly the issuer on the OIDC identity. | https://my-oidc.issuer.com | | `sub` | The subject of the token. This must match exactly the subject on the OIDC identity. | scope:a-scope-to-restrict-the-usage | | `aud` | The audience of the token. This must match exactly the audience on the OIDC identity. Generally this will be the id of the service account to exchange the OIDC token for. | 863b4b7d-6308-456e-8375-8d9270e9be44 | | `exp` | The expiration time of the token. The token must not be expired. | 1632493567 | The OIDC token must be signed by the issuer, with the signature included as part of the token. ## Using the access token in API requests To use the access token as authentication for a request to the Octopus API, it must be included in the `Authorization` header using the `Bearer` scheme: ``` Authorization: Bearer {the-access-token-obtained-from-octopus} ``` ## Custom audience Some issuers may not be able to generate an OIDC token with the id of the Octopus service account set in the audience (`aud`) field. Examples of this include when connecting to Octopus from a custom application running in Azure. When configuring an OIDC identity for another issuer, the audience can be set to a custom string. Click the edit icon next to the Audience field to do this. :::figure ![OIDC Identity with custom audience](/docs/img/octopus-rest-api/images/oidc-identity-other-issuer-custom-audience.png 'width=500') ::: ## Older Versions - Support for wildcards when matching a subject was added in Octopus 2024.1.
# Bamboo Source: https://octopus.com/docs/packaging-applications/build-servers/bamboo.md :::div{.warning} As of December 2025 the Octopus Deploy add-on for Bamboo has reached end of life (EOL), in line with Atlassian's end of life timeline plans for [Data Center products](https://www.atlassian.com/software/bamboo/download-archives). Alternative features to flow artifacts from your CI system into Octopus are [external feed triggers](/docs/projects/project-triggers/external-feed-triggers) and [built-in package repository triggers](/docs/projects/project-triggers/built-in-package-repository-triggers). If you are an Atlassian Data Center user and Octopus customer and need help, reach out to [Octopus Support](https://octopus.com/support). ::: The Octopus Deploy add-on for Bamboo allowed packages to be uploaded to an Octopus Server, as well as creating, deploying and promoting releases to your Octopus Deploy [environments](/docs/infrastructure/environments/). The add-on did this by running the [Octopus CLI](/docs/octopus-rest-api/octopus-cli). ## Getting started The plugin relies on a local copy of the [Octopus CLI](/docs/octopus-rest-api/octopus-cli) being available to the build agent. The command line tool can be downloaded from the [Octopus Deploy Download page](https://octopus.com/downloads). Note that while the command line tool package is largely self-contained, some Linux distributions require additional libraries to be installed before .NET Core applications will run. These packages are documented at the [Get started with .NET Core](https://www.microsoft.com/net/core) website. To verify that the command line tool can be run, execute it from a terminal. When run with no arguments, the `Octo` executable will display a list of available commands. ## Installing the add-on Follow the instructions at [Installing add-ons](https://confluence.atlassian.com/upm/installing-marketplace-apps-273875715.html) to install the Octopus Deploy Bamboo add-on.
## A typical workflow for pushing packages and deploying releases There are a number of typical steps that are required to push a package to Octopus Deploy and deploy a release: 1. Build the application with Bamboo. 2. Create a package that contains the application files. 3. Push the package to Octopus Deploy. 4. Create a release in Octopus Deploy. 5. Deploy a release with Octopus Deploy. ## 1. Build the application We'll assume that there is already a Bamboo build plan in place that successfully builds an application. ## 2. Create the package With the application built, we need to add it to an archive that complies with the Octopus Deploy [versioning requirements](/docs/packaging-applications/create-packages/versioning). In this example we will stick to a simple `AppName.Major.Minor.Patch` SemVer format. Creating the package is done with the `Octopus Deploy: Pack Packages` task. In addition to the [common configuration fields](#commonConfiguration), this task requires the name of the package, the type of package to create, the version number of the package, the base folder containing the files to be packaged, paths to be included in the package, and enabling any existing package files to be overwritten. This step runs the [pack command](/docs/packaging-applications/create-packages/octopus-cli) on the command line tool. :::div{.hint} If you are building .NET applications on an instance of Bamboo hosted on Windows, you may prefer to use [OctoPack](/docs/packaging-applications/create-packages/octopack) to build a package instead of manually packaging the application with the `Octopus Deploy: Pack Packages` task. ::: ### Package ID The `Package ID` field defines the name or ID of the package to be created. In this example we will use the ID `myapplication`. ### Version number The `Version number` field defines the version of the package to create. This field is optional, but it is highly recommended that the version be generated from the Bamboo build number.
We will set the version to `0.0.${bamboo.buildNumber}`. ### Package format The `Package format` options allow you to build either a ZIP or a NUGET file. ZIP is the recommended format. ### Package base folder The `Package base folder` option defines the base folder that contains the files that are to be packed up. For a Java application built by Maven, the files will typically be found under the folder `${bamboo.build.working.directory}/target`. For a Java application built by Gradle, the files will typically be found under the folder `${bamboo.build.working.directory}/build/libs`. For a .NET application the files will typically be found under a folder like `${bamboo.build.working.directory}/myapplication/bin/Release/netcoreapp1.1`. ### Package include paths The `Package include paths` field lists the files that are to be packed into the package. For a Java web application you would typically pack the WAR file, which can be included with the path `*.war`. For .NET applications you would typically be packing all application files like executables, config files and DLLs, so leave this blank unless you wish to specify a specific set of files. ### Overwrite existing package Selecting the `Overwrite existing package` option means that any existing local packages will be overwritten. It is useful to select this option because it means that packages can be repacked without error if the Bamboo build plan is rerun. :::figure ![Create a package](/docs/img/packaging-applications/build-servers/images/create-package.png) ::: ## 3. Push the packages Pushing the package to Octopus Deploy is done with the `Octopus Deploy: Push Packages` task. In addition to the [common configuration fields](#commonConfiguration), this task requires the paths to the packages to be pushed and whether to force package uploads. This step runs the [push command](/docs/octopus-rest-api/octopus-cli/push) on the command line tool.
### Package paths The `Package paths` field defines the [Ant paths](https://ant.apache.org/manual/dirtasks.html) that are used to match packages to be pushed to Octopus Deploy. The Ant path `**/*${bamboo.buildNumber}.zip` matches the zip file created during the previous step. :::div{.hint} Note that it is recommended that the package paths defined here are specific to the build. While the Ant path `**/*.zip` does match the package, it also matches any old packages that might have been created in previous builds and not cleaned up. This means these less specific paths can result in old packages being uploaded, which is usually not the desired result. ::: ### Overwrite mode The `Overwrite mode` option can be used to control what should happen if the package already exists in the repository; the default behavior is to reject the new package being pushed (`FailIfExists`). You can override this behavior by using either the `OverwriteExisting` or `IgnoreIfExists` overwrite mode. :::figure ![Push Package](/docs/img/packaging-applications/build-servers/images/push-package.png) ::: ## 4. Create a release Creating a release is done with the `Octopus Deploy: Create Release` task. In addition to the [common configuration fields](#commonConfiguration), this task requires the Octopus Deploy project to create the release for and the version number of the release. This step runs the [create-release command](/docs/octopus-rest-api/octopus-cli/create-release) on the command line tool. ### Project The `Project` field defines the name of the [Octopus Deploy project](/docs/projects) that the release will be created for. ### Release number The `Release Number` field defines the version number for the release. Although this field is optional, it is highly recommended that the release number be tied to the Bamboo build number e.g. `0.0.${bamboo.buildNumber}`.
The reason for this is Bamboo allows you to rebuild old builds, and if the `Release number` is not defined it will be assigned a default version number in Octopus Deploy. This can lead to a situation where build number 10 in Bamboo is rebuilt, and a release number like 0.0.128 is created in Octopus Deploy, which is almost certainly not the desired result. ### Environment(s) The `Environment(s)` field defines the [Octopus Deploy environments](/docs/infrastructure/environments) that the new release is to be deployed to. It is recommended that this field be left blank, because the `Ignore existing releases` option needs to be enabled to allow builds to be rebuilt, and if the release already exists and the `Ignore existing releases` option is enabled no deployments will take place. We'll use a dedicated step to handle deployments. ### Ignore existing releases The `Ignore existing releases` option can be selected to skip the create release step if the release version already exists. Tick this option, as it allows builds to be rebuilt. Otherwise, rebuilds will attempt to recreate an existing release and the step will fail. :::figure ![Create Release](/docs/img/packaging-applications/build-servers/images/create-release.png) ::: ## 5. Deploy a release Releases can be deployed with the `Octopus Deploy: Deploy Release` task. In addition to the [common configuration fields](#commonConfiguration), this task requires the Octopus Deploy project to deploy, the environments to deploy to, and the release number to deploy. This step runs the [deploy-release command](/docs/octopus-rest-api/octopus-cli/deploy-release) on the command line tool. ### Project The `Project` field defines the name of the [Octopus Deploy project](/docs/projects) that the deployment will be done for. ### Environment(s) The `Environment(s)` field defines the [Octopus Deploy environments](/docs/infrastructure/environments) that the release is to be deployed to.
### Release number The `Release Number` field defines the release version number to deploy. This should match the release number from the create release step i.e. `0.0.${bamboo.buildNumber}`. :::figure ![Deploy release](/docs/img/packaging-applications/build-servers/images/deploy-release.png) ::: ## Promote a release (optional, and not recommended) Releases can be promoted to new environments with the `Octopus Deploy: Promote Release` task. In addition to the [common configuration fields](#commonConfiguration), this task requires the Octopus Deploy project to deploy, the environment to promote from, and the environment to promote to. This step runs the [promote-release command](/docs/octopus-rest-api/octopus-cli/promote-release) on the command line tool. :::div{.warning} Because the promotion from one environment to another is not tied to any particular release number, adding this task to a Bamboo build plan means every time the plan is run (or more importantly rerun), releases will be promoted between environments. This is almost certainly not the desired result, and so it is not recommended that promotions be done as part of a Bamboo build plan. ::: ### Project The `Project` field defines the name of the [Octopus Deploy project](/docs/projects) that the deployment will be done for. ### Promote from The `Promote from` field defines the environment whose release will be promoted to the `Promote to` environment. ### Promote to The `Promote to` field defines the environment to which the release from the `Promote from` environment will be promoted. ## Common configuration All Octopus Deploy tasks share a number of common configuration fields. ### Octopus URL The `Octopus URL` field defines the URL of the Octopus Server that the package will be pushed to. This URL must include the scheme `http://` or `https://`, and also include the port if it is not the default of `80` or `443`.
### API key The `API key` field defines the API key that is used to authenticate with the Octopus Server. See [How to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key) for more information. ### Octopus CLI The `Octopus CLI` field references a [Bamboo capability](https://confluence.atlassian.com/bamboo/capability-289277445.html) that defines the path to the Octopus Deploy Command Line tool. Click the `Add new executable` link to specify the location of the command line tool. The `Executable label` can be anything you want, and the `Path` is the full path to the command line tool executable file. :::figure ![Add new executable](/docs/img/packaging-applications/build-servers/images/executable.png) ::: ### Enable debug logging The `Enable debug logging` option is used to enable detailed logging from the command line tool. ### Additional command line arguments The `Additional command line arguments` field is used to specify additional arguments to pass to the command line tool. You can find more information on the arguments accepted by the command line tool on the [Octopus CLI](/docs/octopus-rest-api/octopus-cli) page. ## Using Bamboo deployment plans The Octopus Deploy add-on tasks can be used either in Bamboo build or deployment plans. Where you use these tasks is up to you. If you already have a number of environments set up in Bamboo, it may make sense to create and deploy Octopus Deploy releases from the Bamboo deployment plan. Doing so allows you to retain the familiar Bamboo build and deployment workflow, while having Octopus Deploy do the actual deployment. The recommended task sequence for a deployment project in Bamboo is this: 1. An `Octopus Deploy: Push Packages` task in the Bamboo build plan with a package version number linked to the Bamboo build number and the `Force overwrite existing packages` option selected. 2.
An `Octopus Deploy: Create Release` task in the Bamboo deployment plan with a `Release number` linked to the Bamboo build number, the `Ignore existing releases` option selected, and no `Environment(s)` set to deploy to. 2. An `Octopus Deploy: Deploy Release` task in the Bamboo deployment plan with a `Release number` linked to the Bamboo build number. These steps allow packages to be pushed and re-pushed, new releases to be created and deployed, and previous releases to be rolled back to. ## Troubleshooting ### Unexpected behavior in deployment plans There are some issues to keep in mind when using the Octopus Deploy add-on tasks from a Bamboo deployment project. The first issue is that the `Octopus Deploy: Create Release` task is only suitable for creating and optionally deploying new releases, not rolling back to previous releases. Consider the following scenarios: 1. The create release task is defined with no release number. Each time it is run, or rerun via a rollback initiated via the Bamboo deployment project, this task will create a new release in Octopus Deploy. This is not appropriate behavior for a Bamboo deployment project. 2. The create release task is defined with a fixed release number related to the Bamboo build. To allow this task to be rerun without error, the `Ignore existing releases` option needs to be selected. When `Ignore existing releases` is selected, the create release task is essentially skipped during a rerun, meaning no deployment is done. This is not the expected behavior of a rollback initiated via the Bamboo deployment project. The second issue is that the `Octopus Deploy: Promote Release` task may not work as you expect when used with a Bamboo deployment plan. Because the promotion from one environment to another is not dependent on any release versions, every time this step is run it will attempt to promote a release forward in Octopus Deploy, even if the task was run as part of a rollback.
For this reason it is recommended that the promote release task not be used as part of either a Bamboo build or deployment plan. ### Octopus command line tool failed to run in Linux The Octopus Command Line tool packages for Linux are relatively self-contained, but depending on your Linux distribution you may need to install some additional dependencies for the command line tool to run. For example, on CentOS 7 you might see this error: ``` Failed to load /tmp/libcoreclr.so, error: libunwind.so.8: cannot open shared object file: No such file or directory Failed to bind to CoreCLR at '/tmp/libcoreclr.so' ``` The solution is to install the packages detailed at the [Get started with .NET Core](https://www.microsoft.com/net/core) website. ``` sudo yum install libunwind libicu ``` ### Manually running the command line tool The Bamboo build logs show how the command line tool is run. Look for log messages like this: ``` running command line: \n/opt/octocli/Octo push --server http://localhost --apiKey API-....................XXXXXX --replace-existing --debug --package /opt/atlassian-bamboo-6.0.0/xml-data/build-dir/BPT-TBD-JOB1/myapplication.0.0.5.tar.gz ``` This is the command that was run to perform the actual interaction with the Octopus Server, with the exception of the redacted API key. You can take this command and run it manually to help diagnose any issues. ### Bamboo variables A number of the Bamboo step fields in this document have used Bamboo variables to reference build numbers and local paths. You can find a list of variables exposed by Bamboo at the [Bamboo Variables](https://confluence.atlassian.com/bamboo/bamboo-variables-289277087.html) page. ## Error codes Error conditions encountered by the add-on have unique error codes, which are listed here. | Error Code | Description | |------------|-------------| | OCTOPUS-BAMBOO-INPUT-ERROR-0001 | No matching files could be found to push to Octopus Deploy.
Check that the file pattern matches a file in the Bamboo working directory. | | OCTOPUS-BAMBOO-INPUT-ERROR-0002 | A required field was empty. | | OCTOPUS-BAMBOO-INPUT-ERROR-0003 | The server capability that defines the path to the Octopus CLI has an incorrect path. Make sure the path you assigned to the Octopus CLI is correct. | ## Learn more - Generate an Octopus guide for [Bamboo and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?buildServer=Bamboo). # Troubleshooting OctoPack Source: https://octopus.com/docs/packaging-applications/create-packages/octopack/troubleshooting-octopack.md Sometimes OctoPack doesn't work the way you expected it to, or perhaps you are having trouble configuring your `.nuspec` file. Here are some steps to help you diagnose what is going wrong, and fix the problem. 1. Run the build in your local development environment from the Visual Studio developer command prompt with arguments something like this: ```powershell msbuild MySolution.sln /t:Build /p:Configuration=Release /p:RunOctoPack=true /fl ``` The `/p:RunOctoPack=true` argument configures OctoPack to run as part of the build process. The `/fl` argument configures `msbuild.exe` to write the output to a log file which will usually look like `msbuild.log`. Refer to the [MSBuild documentation](https://msdn.microsoft.com/en-us/library/ms171470.aspx) for more details. Note: You may need to change some of these parameters to match the process you are using on your build server. Take a look at the build server logs and try to emulate the process as closely as possible. 2. Inspect the [MSBuild output log file](https://msdn.microsoft.com/en-us/library/ms171470.aspx).
If OctoPack has executed successfully you should see log entries like the ones shown below generated using OctoPack 3.0.42: ```powershell Target "OctoPack" in file "c:\dev\MyApplication\source\packages\OctoPack.3.0.42\tools\OctoPack.targets" from project "c:\dev\MyApplication\source\MyApplication.Web\MyApplication.Web.csproj" (target "Build" depends on it): Using "GetAssemblyVersionInfo" task from assembly "c:\dev\MyApplication\source\packages\OctoPack.3.0.42\tools\OctoPack.Tasks.dll". Task "GetAssemblyVersionInfo" OctoPack: Get version info from assembly: c:\dev\MyApplication\source\MyApplication.Web\bin\MyApplication.Web.dll Done executing task "GetAssemblyVersionInfo". Task "Message" Using package version: 0.0.0.0 Done executing task "Message". Using "CreateOctoPackPackage" task from assembly "c:\dev\MyApplication\source\packages\OctoPack.3.0.42\tools\OctoPack.Tasks.dll". Task "CreateOctoPackPackage" OctoPack: ---Arguments--- OctoPack: Content files: 12 OctoPack: ProjectDirectory: c:\dev\MyApplication\source\MyApplication.Web OctoPack: OutDir: bin\ OctoPack: PackageVersion: 0.0.0.0 OctoPack: ProjectName: MyApplication.Web OctoPack: PrimaryOutputAssembly: c:\dev\MyApplication\source\MyApplication.Web\bin\MyApplication.Web.dll OctoPack: NugetArguments: OctoPack: NugetProperties: OctoPack: --------------- OctoPack: Written files: 299 OctoPack: Create directory: c:\dev\MyApplication\source\MyApplication.Web\obj\octopacking OctoPack: Create directory: c:\dev\MyApplication\source\MyApplication.Web\obj\octopacked OctoPack: Copy file: c:\dev\MyApplication\source\MyApplication.Web\MyApplication.Web.nuspec OctoPack: Packaging an ASP.NET web application (Web.config detected) OctoPack: Add content files OctoPack: Added file: content\images\favicon.ico OctoPack: Added file: Web.config ... 
OctoPack: Add binary files to the bin folder OctoPack: Added file: bin\MyApplication.Web.dll.config OctoPack: Added file: bin\MyApplication.Web.dll OctoPack: Added file: bin\MyApplication.Web.pdb ... OctoPack: NuGet.exe path: c:\dev\MyApplication\source\packages\OctoPack.3.0.42\tools\NuGet.exe OctoPack: Running NuGet.exe with command line arguments: pack "c:\dev\MyApplication\source\MyApplication.Web\obj\octopacking\MyApplication.Web.nuspec" -NoPackageAnalysis -BasePath "c:\dev\MyApplication\source\MyApplication.Web" -OutputDirectory "c:\dev\MyApplication\source\MyApplication.Web\obj\octopacked" -Version 0.0.0.0 OctoPack: Attempting to build package from 'MyApplication.Web.nuspec'. OctoPack: Successfully created package 'c:\dev\MyApplication\source\MyApplication.Web\obj\octopacked\MyApplication.Web.0.0.0.0.nupkg'. OctoPack: Packaged file: c:\dev\MyApplication\source\MyApplication.Web\obj\octopacked\MyApplication.Web.0.0.0.0.nupkg OctoPack: Copy file: c:\dev\MyApplication\source\MyApplication.Web\obj\octopacked\MyApplication.Web.0.0.0.0.nupkg OctoPack: Packages have been copied to: c:\dev\MyApplication\source\MyApplication.Web\bin\ OctoPack: OctoPack successful Done executing task "CreateOctoPackPackage". Task "Message" Built package: c:\dev\MyApplication\source\MyApplication.Web\obj\octopacked\MyApplication.Web.0.0.0.0.nupkg Done executing task "Message". Task "Message" NuGet.exe: c:\dev\MyApplication\source\packages\OctoPack.3.0.42\tools\NuGet.exe Done executing task "Message". Task "Message" Publish to file share: ..\..\artifacts Done executing task "Message". Task "Copy" Copying file from "c:\dev\MyApplication\source\MyApplication.Web\obj\octopacked\MyApplication.Web.0.0.0.0.nupkg" to "..\..\artifacts\MyApplication.Web.0.0.0.0.nupkg". Done executing task "Copy". Task "Message" skipped, due to false condition; ('$(OctoPackPublishPackageToHttp)' != '') was evaluated as ('' != ''). 
Task "Exec" skipped, due to false condition; ('$(OctoPackPublishPackageToHttp)' != '') was evaluated as ('' != ''). Done building target "OctoPack" in project "MyApplication.Web.csproj". ``` * If you cannot see any OctoPack-related log messages, perhaps OctoPack isn't installed into your project(s) correctly? * Try completely uninstalling OctoPack and installing it again. * Check inside your `.csproj` or `.vbproj` file for an include statement like the following example (the version in the path will match the OctoPack version you installed): ```xml <Import Project="..\packages\OctoPack.3.0.42\build\OctoPack.targets" Condition="Exists('..\packages\OctoPack.3.0.42\build\OctoPack.targets')" /> ``` * If OctoPack is running but your files are not being packed correctly, see if the file is mentioned in the build log. * Files that are copied to the build output directory will be included in the package. Take a look at the contents of your build output directory and compare that with the messages in the build log. * For web applications, files that are configured with the Visual Studio property **Build Action: Content** will be included in the package. * If you have specified the `<files>` element in a custom `.nuspec` file, perhaps you need to add the `/p:OctoPackEnforceAddingFiles=true` MSBuild argument as discussed above? * If you have specified the `<files>` element in a custom `.nuspec` file, perhaps you need to experiment with some different combinations of include and exclude?
## Next - [Packaging applications](/docs/packaging-applications) - [Use the Octopus CLI to create packages](/docs/packaging-applications/create-packages/octopus-cli) - Use [OctoPack to Include BuildEvent files](/docs/packaging-applications/create-packages/octopack/octopack-to-include-buildevent-files) - [Troubleshooting OctoPack](/docs/packaging-applications/create-packages/octopack/troubleshooting-octopack) - [Package deployments](/docs/deployments/packages) # Create packages with the Octopus CLI Source: https://octopus.com/docs/packaging-applications/create-packages/octopus-cli.md The Octopus CLI (`octopus`) is a command line tool that interacts with the [Octopus Deploy REST API](/docs/octopus-rest-api/) and includes packaging commands to create packages either as [Zip](#create-zip-packages) or [NuGet](#create-nuget-packages) packages for deployment with Octopus. ## Installation The [Octopus CLI downloads page](https://github.com/OctopusDeploy/cli/blob/main/README.md#installation) provides installation options for various platforms. After installation, you can run the following to verify the version of the Octopus CLI that was installed (if you're using Windows, remember to open a new command prompt): ``` octopus --version ``` For more installation details, options, and update instructions, see [The Octopus CLI Global Tool](/docs/octopus-rest-api/cli). 
For a full list of the `package` command options see [Octopus CLI - Package](/docs/octopus-rest-api/cli/octopus-package) or run the following command: ```powershell octopus package --help ``` ## Usage The Octopus CLI supports two package formats: NuGet packages and ZIP packages. ## Configuration Options Both NuGet and ZIP packaging commands support the following configuration options: - **--id**: The ID of the package - **--version**: The version of the package, must be a valid SemVer - **--base-path**: Root folder containing the contents to zip - **--out-folder**: Folder into which the zip file will be written - **--include**: Add a file pattern to include, relative to the base path e.g. /bin/*.dll; defaults to "**" - **--verbose**: Verbose output - **--overwrite**: Allow an existing package file of the same ID/version to be overwritten Additional NuGet-specific options: - **--author**: Add author/s to the package metadata - **--title**: The title of the package - **--description**: A description of the package, defaults to "A deployment package created from files on disk." - **--releaseNotes**: Release notes for this version of the package - **--releaseNotesFile**: A file containing release notes for this version of the package ### Create NuGet packages {#create-nuget-packages} Basic usage: ```powershell octopus package nuget create ``` ### Create ZIP packages {#create-zip-packages} Basic usage: ```powershell octopus package zip create ``` ## Packaging a .NET Core application To package a .NET Core application, first publish the application, and then call `octopus package` on the output folder, for example: ```powershell dotnet publish ./OctoWeb.csproj --output ./dist octopus package zip create --id="OctoWeb" --version="1.0.0" --base-path="./dist" ``` Please refer to [Microsoft's publish and packing](/docs/deployments/dotnet/netcore-webapp/#publishing-and-packing-the-website) documentation for more information.
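The zip command writes a package named `<id>.<version>.zip` into the output folder (the current directory unless `--out-folder` is set). As a quick sketch of that naming convention, the id and version values below are placeholders:

```shell
# Reproduce the Octopus package naming convention <id>.<version>.zip
# to predict which artifact filename to expect after packaging.
package_file() {
  printf '%s.%s.zip\n' "$1" "$2"
}

package_file "OctoWeb" "1.0.0"   # OctoWeb.1.0.0.zip
```

This is handy in CI scripts that need to refer to the artifact by name in a later push step.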
## Packaging a .NET Core library If you are using .NET Core for class libraries, we recommend using [dotnet pack from Microsoft](https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-pack). ```powershell dotnet pack ./SomeLibrary.csproj --output ./dist octopus package zip create --id="SomeLibrary" --version="1.0.0" --base-path="./dist" ``` ## Packaging a .NET Framework web application There are usually some extra steps required to get the resulting application built and deployable. Full framework web applications are a good example of this, where simply building the application will not give you the desired output. We still recommend [OctoPack](/docs/packaging-applications/create-packages/octopack) for these cases. However, you may be able to achieve this using MSBuild parameters such as: ``` msbuild ./OctoWeb.csproj /p:DeployDefaultTarget=WebPublish /p:DeployOnBuild=true /p:WebPublishMethod=FileSystem /p:SkipInvalidConfigurations=true /p:publishUrl=dist octopus package zip create --id="OctoWeb" --version="1.0.0-alpha0001" --base-path="./dist" ``` ## Packaging your application from a folder If you have a build process (such as gulp, grunt, or webpack) that places all build outputs into a final destination folder, you can package it using the Octopus CLI as well. For example, if you've defined an npm script which runs your build and places all associated content into the `dist` folder: ```powershell npm run build octopus package zip create --id="OctoWeb" --version="1.0.0" --base-path="./dist" ``` ## Known issues with other compression libraries {#known-issues} These are known issues to be aware of with other compression libraries: - Atlassian Bamboo users who are using [Adam Myatt's Zip File Task](https://bitbucket.org/adammyatt/bamboo-zip-file-tasks) and are extracting to a Linux machine may find that the contents don't get extracted into the correct folder structure but are instead flattened, with the path as the file name.
This is the result of a [known issue](https://bitbucket.org/adammyatt/bamboo-zip-file-tasks/issues/4/change-request-use-forward-slashes-as-file) whereby the task does not conform to the [PKWARE ZIP §4.4.17.1](https://help.octopus.com/t/octopus-deploy-to-linux-vm/2047 "Link outside Support: https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT") specification and uses a backslash instead of a forward slash as the file separator. We would recommend avoiding this task where possible. - Prior to .NET Framework 4.6.1, the *System.IO.Compression* library incorrectly preserved the Windows-style backslash separator for file paths. This has since been fixed in [.NET Framework 4.6.1](https://msdn.microsoft.com/en-us/library/mt712573) and the fix carried over into [.NET Core](https://github.com/dotnet/corefx/commit/7b9331e89a795c72709aef38898929e74c343dfb). - The PKZIP specification only requires Zip files to store dates in the internal file headers as two bytes in the [MS-DOS format](https://users.cs.jmu.edu/buchhofp/forensics/formats/pkzip.html) (whereas tar file headers are stored in [UNIX epoch format](http://www.gnu.org/software/tar/manual/html_node/Standard.html)). This means that unless the compression library makes use of extra fields in the file headers, a file compressed at some point in time on a machine in one timezone may result in misleading dates when uncompressed in a different timezone. ## Learn more - [Packaging applications](/docs/packaging-applications) - [Create packages with Octopack](/docs/packaging-applications/create-packages/octopack). - [TeamCity plugin](/docs/packaging-applications/build-servers/teamcity). - [Azure DevOps plugin](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension). - [Package repositories](/docs/packaging-applications). - [Package deployments](/docs/deployments/packages).
# Package repositories Source: https://octopus.com/docs/packaging-applications/package-repositories.md When planning your Octopus installation, you need to decide how to host your packages. Your [build server](/docs/packaging-applications/build-servers) should create your packages and publish them to a package repository. The Octopus Server includes a [built-in repository](/docs/packaging-applications/package-repositories/built-in-repository) and supports the following external repositories: - [Docker feeds](/docs/packaging-applications/package-repositories/docker-registries). - [GitHub feeds](/docs/packaging-applications/package-repositories/github-feeds). - [Maven feeds](/docs/packaging-applications/package-repositories/maven-feeds). - [NPM feeds](/docs/packaging-applications/package-repositories/npm-feeds). - [NuGet feeds](/docs/packaging-applications/package-repositories/nuget-feeds). - [AWS S3 Bucket feeds](/docs/packaging-applications/package-repositories/s3-feeds). - [Google Cloud Storage feeds](/docs/packaging-applications/package-repositories/gcs-feeds). - Helm feeds. - AWS ECR feeds. - OCI-based registry feeds. Octopus can consume packages from multiple feeds at once if necessary. Your package repository will typically be: - The [built-in Octopus repository](/docs/packaging-applications/package-repositories/built-in-repository). - A [remote feed](http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds#Creating_Remote_Feeds) exposed over HTTP. - A [local NuGet feed](http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds#Creating_Local_Feeds) exposed as a File Share or local directory. - A [maven feed](/docs/packaging-applications/package-repositories/maven-feeds). - A [JetBrains TeamCity](http://blogs.jetbrains.com/dotnet/2011/08/native-nuget-support-in-teamcity/) server (version 7 and above). - A [MyGet](http://www.myget.org/) server. 
- An [Azure DevOps or TFS Package Management](/docs/packaging-applications/package-repositories/guides/nuget-repositories/tfs-azure-devops). ## Choosing the right repository {#choose-right-repository} Because Octopus can consume packages from multiple feeds, we recommend using different repositories for different purposes as each repository provides different benefits. For instance, if you produce your own application library packages in addition to your deployment packages you might consider something like the following: - Use the [built-in repository](/docs/packaging-applications/package-repositories/built-in-repository/) for your deployment packages. This is generally the best choice as it offers better performance and through the [retention policies](/docs/administration/retention-policies) you've configured, Octopus knows which packages are no longer required and can be cleaned up. - For application library packages consider using the repository provided by your [build server](/docs/packaging-applications/build-servers), a [file-share](http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds#Creating_Local_Feeds), [MyGet](http://www.myget.org/ "MyGet"), or [Azure DevOps Package Management](https://www.visualstudio.com/en-us/docs/package/overview). - For deployment scripts that you want to store in your source control and where a build process is unnecessary, [GitHub feeds](/docs/packaging-applications/package-repositories/github-feeds) might be suitable. ## Planning package repository placement By default, when you [deploy a package](/docs/deployments/packages) to a Tentacle, the package will be pushed from the Octopus Server to the Tentacle. You can override this by changing the setting of the [Action System Variable](/docs/projects/variables/system-variables/#action) `Octopus.Action.Package.DownloadOnTentacle` from `False` to `True`. When set to `True` the package will be downloaded by the Tentacle, rather than pushed by the Octopus Server. 
To reduce network latency, when your package repository is in close proximity to the Octopus Server leave `Octopus.Action.Package.DownloadOnTentacle` set to the default value of `False`. Alternatively, if you have explicitly set `Octopus.Action.Package.DownloadOnTentacle` to `True` so that packages are downloaded by the Tentacle, you should consider placing your package repository in close proximity to your Tentacles. # GitHub Repository feeds Source: https://octopus.com/docs/packaging-applications/package-repositories/github-feeds.md GitHub exposes a set of APIs that allow Octopus to treat it as a feed of packages. In this scenario an Octopus package maps to a specific GitHub repository (e.g. the https://github.com/OctopusDeploy/Calamari repository is referred to in Octopus as `OctopusDeploy/Calamari`). Git tags are used to denote [package versions](/docs/packaging-applications/create-packages/versioning/). Tags that can be parsed as [SemVer 2.0](http://semver.org/spec/v2.0.0.html) can be treated as candidates for an Octopus Deploy release. If a tag is also linked to a specific [GitHub Release](https://help.github.com/articles/about-releases), then those release notes will be treated as the release notes for the package. When searching for a package, either through the package selector on a deployment step, or when testing a feed, the package naming scheme allows for several different ways to search through the GitHub repositories. * `"node"`: Searches for repositories with **any** owner that contain **"node"**. * `"nodejs/"`: Searches for repositories with **"nodejs"** owner with **any** name. * `"nodejs/node"`: Searches for repositories with **"nodejs"** owner that contain **"node"**. ## Auth Only the following authentication methods are supported. * Anonymous: Under this configuration the username and password fields can be left blank. There are lower request throttle limits when using anonymous authentication so it is generally not recommended. * Username/Password.
* OAuth2 Token: [Personal access tokens](https://github.com/blog/1509-personal-api-tokens) can be used instead of your password. If you're attempting to configure access for your organization, and you would prefer not to use the auth token from a particular user, you can create what GitHub refers to as a [Machine User](https://developer.github.com/v3/guides/managing-deploy-keys/#machine-users). This is effectively a GitHub account configured exclusively for automation. ## Adding a GitHub feed Create a GitHub package feed through **Deploy ➜ Manage ➜ External Feeds**. You can add as many GitHub feeds as you need. Each can have different credentials if required. In most cases the `FeedUri` that you will need to provide is the standard public GitHub endpoint `https://api.github.com`. You would only need to provide a different url if you have self-hosted GitHub Enterprise (in which case you would provide `https://my-github-repo.com/api/v3`) or if you access GitHub via a proxy. For authorization, it is recommended that you create a [personal access token](https://github.com/blog/1509-personal-api-tokens) for your account and use this token as the password. Tokens can be created for your GitHub account by logging in to GitHub, navigating to **Settings ➜ Developer Settings ➜ Personal access tokens**, and clicking **Generate new token**. :::figure ![GitHub Personal Access Token](/docs/img/packaging-applications/package-repositories/images/github-personalaccesstoken1.png) ::: ![GitHub Personal Access Token](/docs/img/packaging-applications/package-repositories/images/github-personalaccesstoken2.png) Give the token a meaningful name and enable the **repo** scope if you want to be able to access private repositories from Octopus Deploy. Copy the token that is generated and use this value as the password for the GitHub feed in Octopus Deploy.
:::div{.hint} **Note:** Octopus can make use of [GitHub's fine-grained tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#fine-grained-personal-access-tokens) should you wish to have more control over the repositories that can be accessed. These can be created in the same way as the steps above, but by selecting **Fine-grained tokens** from the navigation menu. From there individual repository access can be added for further security. ::: ### Testing a GitHub feed You can check whether the GitHub feed is working by searching for packages. Click the **TEST** button, and you'll be taken to the test page: :::figure ![GitHub Feed Test search](/docs/img/packaging-applications/package-repositories/images/github-feed-test.png) ::: :::div{.hint} **Note:** When testing a GitHub Feed, the **Version** field will not be displayed. This is due to the way Octopus queries the GitHub [repository search API](https://docs.github.com/en/rest/reference/search#search-repositories) which doesn't return release tags. This was an intentional decision implemented for performance reasons. ::: ## Using GitHub as a package feed 1. Add your GitHub feed as described above. 2. In the console of your git repository which has a GitHub remote link, create a tag with a SemVer 2.0 compliant version and push this tag to GitHub. ```bash git tag 1.0.0 git push --tags ``` 2. Optionally add release notes to the tagged commit from within GitHub. (Note: additional resources currently do not get included in the Octopus deployment.) The pre-release state of a release is also tied to the pre-release component of the tag name. :::figure ![GitHub release notes](/docs/img/packaging-applications/package-repositories/images/github-releasenotes.png) ::: If Octopus can link a particular version (which in the context of GitHub feeds refers to a tag) to a release, then the release notes will be exposed through the Octopus Deploy portal.
At this point in time the `This is a pre-release` check-box on the GitHub Release will be ignored in favor of the pre-release state indicated in the version itself. Additionally, artifacts are not currently retrieved as part of an Octopus deployment, however this may become available in the future. 3. _(Note: Any steps that currently support zips and NuGet packages can also use GitHub as the feed source, but for the purpose of this example we will run a script)_ From within Octopus Deploy, create a project with a [`Run a Script`](/docs/deployments/custom-scripts/run-a-script-step/#choosing-where-to-source-scripts) step. Under `Script Source` check the `Package` option. Select the GitHub feed source as the package feed and enter the full name of the repository where the required files are located. In the case of https://github.com/OctopusDeploy/Calamari this would be represented as `OctopusDeploy/Calamari`. Under `Script File` provide the path to the script that you want to run along with any parameters that you want to pass in. ![GitHub Script Source](/docs/img/packaging-applications/package-repositories/images/github-scriptsource.png) 4. When you create a new release Octopus will query the GitHub API to determine the list of tags which can be parsed as SemVer 2 versions. As with standard package feeds the latest version will be selected by default and any [channel version rules](/docs/releases/channels/#version-rules) will be applied. 5. When the release is deployed and the [package acquisition](/docs/deployments/packages/stage-package-uploads) process begins, Octopus will pull down a copy of the repository based on the commit linked to the tag selected as the package version. This artifact is then treated as a zip and is deployed using the standard package deployment rules.
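Because only tags that parse as SemVer appear as package versions, it can be handy to pre-check a tag name before pushing it. The sketch below uses a simplified SemVer 2.0 pattern; it is an approximation of the full grammar, not the exact parser Octopus applies:

```shell
# Simplified SemVer 2.0 check -- an approximation of the grammar,
# not the exact parsing rules Octopus uses.
is_semver() {
  echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$'
}

is_semver "1.0.0"        && echo "1.0.0: ok"
is_semver "1.2.0-beta.1" && echo "1.2.0-beta.1: ok (pre-release)"
is_semver "latest"       || echo "latest: not a version tag"
```

A tag like `latest` or `release-candidate` will simply never show up as a selectable version in Octopus, which is often the first thing to check when a new tag does not appear.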
## Deployments without a build GitHub feed support is a perfect addition to CI processes where a build step to create a package would be unnecessary. It could be a repository that contains just a bunch of scripts or cloud provisioning templates that you want Octopus to execute but that you would prefer to be in source control and where a build process makes no sense. Perhaps you have a simple Node.js project that you just want to deploy directly from your source control without the ceremony of a build and package step. In this case you may want to invoke `npm install` in a [post-deploy script](/docs/deployments/custom-scripts/scripts-in-packages). # AWS Elastic Container Registry (ECR) Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/container-registries/amazon-ec2-container-services.md AWS provides a Docker image registry, known as [Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/). Support for Elastic Container Registries is provided as a dedicated feed type. :::div{.warning} The credentials used for ECR feeds [only last 12 hours](http://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html). This may not be suitable for long-lived container workloads. ::: ## Configuring an AWS Elastic Container Registry (ECR) From the AWS Services dashboard go to `Elastic Container Registry`. ![AWS Services](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/aws-services.png) Under the `Repositories` area you need to create a repository to match what in Octopus-speak would be the PackageId. This should map to your distinct application image. If you attempt to push an image during your build process to this registry without first creating the corresponding repository you will receive an error.
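If you prefer to script the repository creation rather than use the console, the AWS CLI's `aws ecr create-repository` command can do it. The repository name and region below are placeholders, and since actually running the command needs configured AWS credentials, the sketch only assembles and prints the command line:

```shell
# Placeholder values throughout; the repository name is what Octopus
# treats as the PackageId. Running this for real requires AWS
# credentials, so the sketch only assembles and prints the command.
REPO="my-team/my-application"
REGION="us-east-1"
CMD="aws ecr create-repository --repository-name $REPO --region $REGION"
echo "$CMD"
```

Creating the repository ahead of time avoids the push error described above when the build pipeline publishes its first image.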
:::figure ![AWS Registries](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/aws-registries.png) ::: With the repository configured, ensure that you also have an [AWS IAM](https://aws.amazon.com/iam/) user available that has at a minimum the permissions `ecr:GetAuthorizationToken`, `ecr:DescribeRepositories`, `ecr:DescribeImages` and `ecr:ListImages`. This user is the account which Octopus will use to retrieve the Docker login token which is then used to perform the appropriate Docker commands. Further links for getting your AWS registry set up are available in their [online docs](http://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) ## Adding AWS ECR as an Octopus External Feed Create a new Octopus Feed (**Deploy ➜ Manage ➜ External Feeds**) and select the `AWS Elastic Container Registry` Feed type. With this selected you will need to provide the credentials configured above, as well as the region at which the registry was created. In AWS you are able to maintain separate repositories in each region. :::figure ![AWS EC2 container service registry feed](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/aws-ecr-feed.png) ::: Save and test your registry to ensure that the connection is authorized successfully. ## Using worker configured credentials From Octopus Server `2025.2`, you can now use worker configured credentials by setting `Execute using the credentials configured on the worker` to `Yes` when creating your AWS ECR feed. :::div{.warning} If your AWS credentials are not set up on server, package search and package version resolution will not find any results. You can still enter your full package name and version for package acquisition on your worker. ::: ## Adding an AWS OpenID Connect ECR External feed Octopus Server `2025.2` adds support for OpenID Connect to ECR feeds. 
To use OpenID Connect authentication, you first have to complete the [required minimum configuration](/docs/infrastructure/accounts/openid-connect#configuration). Then configure the feed:

1. Navigate to **Deploy ➜ Manage ➜ External Feeds**, click **Add Feed** and select **AWS Elastic Container Registry**.
2. Add a memorable name for the feed.
3. Set the **Audience** to the audience of the identity provider in AWS.
4. Set the **Role ARN** to the ARN of the role associated with the identity provider.
5. Click **SAVE** to save the feed.
6. Before you can test the feed you need to add a condition to the identity provider in AWS under **IAM ➜ Roles ➜ {Your AWS Role} ➜ Trust Relationship**:

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::{aws-account}:oidc-provider/{your-identity-provider}"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "example.octopus.app:sub": "space:[space-slug]:feed:[slug-of-feed-created-above]",
              "example.octopus.app:aud": "example.octopus.app"
            }
          }
        }
      ]
    }
    ```

7. Go back to the AWS feed in Octopus, click **TEST**, and search for a package in any ECR repository the role has access to.

Refer to the AWS docs for [more information on the role permissions required for ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/images.html).

Please read [OpenID Connect Subject Identifier](/docs/infrastructure/accounts/openid-connect#subject-keys) to learn how to customize the **Subject** value. By default, the role trust policy does not have any conditions on the subject identifier. To lock the role down to particular usages you need to modify the [trust policy conditions](https://oc.to/aws-iam-policy-conditions) and add a condition for the `sub`.
For example, to lock an identity role to a specific Octopus feed, you can update the conditions:

```json
"Condition": {
  "StringEquals": {
    "example.octopus.app:sub": "space:default:feed:feed-slug",
    "example.octopus.app:aud": "example.octopus.app"
  }
}
```

`default` and `feed-slug` are the slugs of their respective Octopus resources. AWS policy conditions also support complex matching with wildcards and `StringLike` expressions.

# Maven repositories

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/maven-repositories.md

This section provides instructions on how to set up a number of Maven repositories from third parties as external feeds for use within Octopus.

# ProGet Maven repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/maven-repositories/proget-maven-feed.md

ProGet from Inedo is a package repository technology which supports a number of different feed types. This guide provides instructions on how to create a private Maven repository in ProGet and connect it to Octopus Deploy as an External Feed.

## Configuring a ProGet Registry

From the ProGet web portal, click on **Feeds ➜ Create New Feed**

:::figure
![Create New Feed](/docs/img/packaging-applications/package-repositories/images/proget-create-feed.png)
:::

Select the **Maven Artifacts** option from the `Developer Libraries` category

:::figure
![Container Images](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/proget-new-maven-feed.png)
:::

Select **No Connectors (private artifacts only)** from the wizard

:::figure
![No Connectors](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/proget-maven-no-connectors.png)
:::

Enter a name for your Feed, e.g. ProGet-Maven, then click **Create Feed**

The next screen allows you to set optional features for your registry; configure these features or click **Close**.
Once the feed has been created, ProGet will display the `API endpoint URL` to push packages to. In this example it's `https://proget.octopusdemos.app/maven2/ProGet-Maven/`

:::figure
![API endpoint Url](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/proget-maven-api-endpoint.png)
:::

## Adding a ProGet Maven repository as an Octopus External Feed

Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and select the `Maven Feed` Feed type. Give the feed a name and in the URL field, enter the HTTP/HTTPS URL of the ProGet server: `https://your.proget.url/maven2/feedname/`

:::figure
![ProGet Maven Feed](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/proget-external-feed.png)
:::

Optionally add Credentials if they are required.

# GitHub NuGet repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/nuget-repositories/github-nuget-feed.md

GitHub projects come with a built-in NuGet package registry that can be configured as an External Feed for Octopus Deploy. The NuGet package registry is present by default and does not require any configuration on GitHub to be enabled.

:::figure
![GitHub Project Id](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/github-nuget-package-registry.png)
:::

:::div{.info}
Note: The **NuGet Feed** type discussed here is different from the [GitHub Feed](/docs/packaging-applications/package-repositories/github-feeds) type.
:::

## NuGet Package Registry Permissions

The GitHub Package Registry requires authentication in order to download packages, even if the repository is marked as Public.
To configure the External Feed, you will first need to create a GitHub Personal Access Token (PAT) with the `read:packages` permission.

:::figure
![GitHub Personal Access Token](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/github-pat-permissions.png)
:::

Once the token has been created, store it in a safe place.

## Adding a GitHub NuGet repository as an Octopus External Feed

Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and select the `NuGet Feed` Feed type. Give the feed a name and in the URL field, enter the URL of the feed for your GitHub NuGet Package Registry in the following format: `https://nuget.pkg.github.com/YourGitHubAccountOrOrganizationName/index.json`

Replace `YourGitHubAccountOrOrganizationName` with your GitHub account or Organization name.

:::figure
![GitHub NuGet Feed](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/github-octopus-add-nuget-feed.png)
:::

Enter the username for the access token you created and use the token itself as the password.

![GitHub NuGet Feed](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/github-octopus-feed-credentials.png)

# Health check step

Source: https://octopus.com/docs/projects/built-in-step-templates/health-check.md

Octopus periodically runs health checks on deployment targets and workers as part of a [machine policy](/docs/infrastructure/deployment-targets/machine-policies) to ensure that they are available and running the latest version of Calamari. However, it can often be useful to check the health of deployment targets when executing a runbook or deployment, particularly with [dynamic infrastructure](/docs/infrastructure/deployment-targets/dynamic-infrastructure/) and [transient deployment targets](/docs/deployments/patterns/elastic-and-transient-environments/deploying-to-transient-targets). This can be achieved using the _Health Check_ step.
:::figure
![Health check step search](/docs/img/projects/built-in-step-templates/images/health-check-step-search.png)
:::

This step allows a deployment target that was created in the currently executing deployment to be confirmed as healthy and then added to the running deployment for subsequent steps. Similarly, it allows you to confirm that the Tentacle service on a deployment target is running prior to attempting to perform an action against it.

## Configure a health check step

Health check steps are added to deployment and runbook processes in the same way as other steps:

1. Add a new *Health Check* step to your [project's deployment process](/docs/projects/steps). ![Health check step](/docs/img/projects/built-in-step-templates/images/health-check-step-select.png)
2. Select the [target tags](/docs/infrastructure/deployment-targets/target-tags) that match the deployment targets you want to run a health check against.
3. In the **Health check** section, select an option for **Health check type**:
   - Perform a full health check - this will run the [health check script](/docs/infrastructure/deployment-targets/machine-policies/#custom-health-check-scripts) defined by the machine policy.
   - Perform a connection-only test - this only checks the machine is available (connected).

   For **Health check errors**, select which action to take on a health check error:
   - Fail the deployment (default).
   - Skip deployment targets that are unavailable.
4. In the **Machine Selection** section, select which action to take for any new machines found as a result of the health check:
   - Ignore any newly available deployment targets (default)
   - Include new deployment targets in the deployment - This option is recommended in dynamic deployments that involve targets that are created as part of the _current deployment_.

## Maximum number of concurrent health checks

There is a limit to the number of concurrent health checks possible when running the health check step.
This ensures that the step doesn't adversely affect the performance of your Octopus Server. The number of concurrent health checks is double the Octopus Server's logical processor count, with a minimum of 2 and a maximum of 32.

## Health check for workers

While the built-in *Health check* step works for deployment targets, it was not designed for [workers](/docs/infrastructure/workers). To check the health of a worker in a deployment or runbook, there is a [Worker - Health check](https://library.octopus.com/step-templates/c6c23c7b-876d-4758-a908-511f066156d7/actiontemplate-worker-health-check) community step template.

# Steps

Source: https://octopus.com/docs/projects/steps.md

Steps contain the actions your deployment process will execute each time you create a release of your software to be deployed. Steps can contain multiple actions, and deployment processes can include multiple steps. Steps are executed in sequence by default, or you can configure [conditions](/docs/projects/steps/conditions) to control where and when steps run.

Octopus includes [built-in step templates](/docs/projects/built-in-step-templates/) that have been developed by the Octopus team to handle the most common deployment scenarios. In addition to the built-in step templates, there are also [Community Step Templates](/docs/projects/community-step-templates/) that have been contributed by the community. You can also use the built-in step templates as the base to create [custom step templates](/docs/projects/custom-step-templates) to use across your projects.

## Adding steps to your deployment processes

1. Navigate to your [project](/docs/projects).
2. Click the **Create process** button.
3. Find the step template you need and click **Add step**. At this point, you have the choice of choosing from the built-in **Installed Step Templates** or the [Community Contributed Step Templates](/docs/projects/community-step-templates).
If you're looking for example deployments, see the [deployment examples](/docs/deployments#getting-started-with-deployments).

4. Give the step a short memorable name.
5. The **Execution Location** tells the step where to run. Depending on the type of step you are configuring, the options will vary:
   - [Worker pool](/docs/infrastructure/workers/worker-pools)
   - Worker pool on behalf of roles
   - Deployment targets
6. If you are deploying to deployment targets or running the step on the server on behalf of deployment targets, you can deploy to all targets in parallel (default) or configure a rolling deployment. To configure a rolling deployment, click *Configure a rolling deployment* and specify the window size for the deployment. The window size controls how many deployment targets will be deployed to in parallel. Learn more about [rolling deployments](/docs/deployments/patterns/rolling-deployments-with-octopus).
7. The next section of the step is where you specify the actions for the step to take. If you are running a script or deploying a package, this is where you provide the details. This section will vary depending on the type of step you're configuring. If you're deploying packages you'll likely need to set your [configuration variables](/docs/projects/steps/configuration-features/xml-configuration-variables-feature).
8. After providing the actions the step takes, you can set the conditions for the step. You can set the following conditions:
   - Only run the step when deploying to specific environments.
   - Only run the step when deploying a release through a specific [channel](/docs/releases/channels).
   - Set the step to run depending on the status of the previous step.
   - Set when package acquisition should occur.
   - Set whether or not the step is required.

   Learn more about [conditions](/docs/projects/steps/conditions).
9. Add additional steps.
10. Save the deployment process.

With your deployment process configured, you're ready to create a [release](/docs/releases).
## Reordering steps

To reorder steps in a deployment or runbook process:

1. Click into a step in the process.
1. Click on the overflow menu (...) next to the **Filter by name** text box.
1. Select the **Reorder Steps** option. ![Reorder steps menu](/docs/img/projects/steps/images/overflow-reorder.png)
1. This will open a drag and drop pane to sort your steps in the desired order. ![Reorder steps pane](/docs/img/projects/steps/images/overflow-reorder-pane.png)

## Example: A simple deployment process

In the example shown below there are three steps that will be executed from top to bottom. The first is a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals/) step, which executes on the Octopus Server, pausing the deployment until someone intervenes and allows the deployment to continue. This step will only execute when targeting the Production [environment](/docs/infrastructure/environments/). The remaining steps both [deploy a package](/docs/deployments/packages/) and execute [custom scripts](/docs/deployments/custom-scripts/) on all [deployment targets](/docs/infrastructure) with the [tag](/docs/infrastructure/deployment-targets/target-tags) **web-server**.

:::figure
![A simple deployment process](/docs/img/projects/steps/images/simple-process.png)
:::

## Example: A rolling deployment process

Let's consider a more complex example like the one shown below. In this example we have configured Octopus to deploy a web application across one or more servers in a web farm behind a load balancer. This process has a single step and three actions which form a [rolling deployment](/docs/deployments/patterns/rolling-deployments-with-octopus).
![A Rolling Deployment](/docs/img/projects/steps/images/rolling-process.png)

# Environment specific .NET configuration transforms with sensitive values

Source: https://octopus.com/docs/projects/steps/configuration-features/configuration-transforms/environment-specific-transforms-with-sensitive-values.md

It is possible to combine the configuration features that you use in your deployments. One scenario where this is useful is if you need to provide environment-specific configuration that includes sensitive values. This can be achieved using both the [Substitute Variables in Templates](/docs/projects/steps/configuration-features/substitute-variables-in-templates/) feature and the [.NET Configuration Transforms](/docs/projects/steps/configuration-features/configuration-transforms) feature.

## One transform and variable replacement

For example, let's assume we have a web application that's being deployed to Development, Staging, and Production environments, and you want to change your `Web.Config` file to reflect environment-specific values. To achieve this you would have a single configuration transformation file in your project. If it's named `Web.Release.Config`, the transformation will be applied to your `Web.Config` file automatically; however, you can use your own filename and apply it to any config file you like. This transform file can contain `#{variable}` values. Because your config will only get transformed on deployment, you can safely work with your `Web.Config` file during development, and you can keep sensitive variables like production passwords out of source control.

### The process

It's important to note that variable substitution occurs before the configuration transformation. This means you need to target your transform files for variable substitution by adding them to the **Target files** setting.

For example, let's assume our `Web.Config` file has a `MyDatabaseConnection` connection string and a special `MyCustomSettingsSection` element.
Something like this:

```xml
<configuration>
  <connectionStrings>
    <add name="MyDatabaseConnection" connectionString="server=(local)\SQLEXPRESS;Database=OctoFX-Development;Trusted_connection=SSPI" />
  </connectionStrings>
  <MyCustomSettingsSection>
    <TestMode>True</TestMode>
  </MyCustomSettingsSection>
</configuration>
```

We also have a `Web.Release.Config` transform file with the following contents:

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDatabaseConnection" connectionString="#{OctoFXDatabase}" xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <MyCustomSettingsSection>
    <TestMode xdt:Transform="Replace">#{RunTestMode}</TestMode>
  </MyCustomSettingsSection>
</configuration>
```

Finally, we have the following [variables](/docs/projects/variables) configured in Octopus:

| Name | Value | Scope |
| ------------- | ------- | ------ |
| OctoFXDatabase | server=staging-server;Database=OctoFX;Trusted_connection=SSPI | Staging |
| OctoFXDatabase | server=(local)\SQLEXPRESS;Database=OctoFX-Development;Trusted_connection=SSPI | Development |
| OctoFXDatabase | server=prod-server;Database=OctoFx;Trusted_connection=SSPI | Production |
| RunTestMode | False | Production, Staging |
| RunTestMode | True | Development |

On deployment to your Staging environment, your process would go like this:

1. Your package, complete with your original `Web.Config` and your `Web.Release.Config` transform file, will be extracted to the target.
2. Variable substitution will run against your `Web.Release.Config` file (assuming it's been listed in the **Target files** setting). This will change the `#{OctoFXDatabase}` string to the Staging connection string, and will insert `False` into the `TestMode` element.
3. Then, the .NET configuration transformation feature will run and apply this new transform file to your `Web.Config`.

The end result is a correctly transformed configuration for your staging environment, all without a specific Staging transform file, and while keeping your `Web.Config` file clean for development.

# Execution containers for workers

Source: https://octopus.com/docs/projects/steps/execution-containers-for-workers.md

For a [step](/docs/projects/steps/) running on a [worker](/docs/infrastructure/workers/) or on the [Octopus Server](/docs/infrastructure/workers/built-in-worker), you can select a Docker image to execute the step inside of.
When an execution container is configured for a step, Octopus will still connect to the worker machine via a [Tentacle or SSH](/docs/infrastructure/workers/#register-a-worker-as-a-listening-tentacle). The difference is that the specified image will be run as a container and the step will be executed inside the container. See the [blog post](https://octopus.com/blog/execution-containers) announcing this feature for some added context.

## Requirements

To use execution containers for workers, you need Docker installed and running on the [worker](/docs/infrastructure/workers/) or, for the [built-in worker](/docs/infrastructure/workers/built-in-worker), on the Octopus Server.

### Octopus cloud dynamic worker pools

[Octopus Cloud dynamic workers](/docs/infrastructure/workers/dynamic-worker-pools) have Docker pre-installed and support execution containers.

:::figure
![](/docs/img/projects/steps/execution-containers-for-workers/images/hosted-worker-pools-execution-containers.png)
:::

## How to use execution containers for workers

- Configure a [feed](/docs/packaging-applications/package-repositories/docker-registries) in Octopus Deploy for a Docker registry.
- [Add Docker Hub as an external feed](https://octopus.com/blog/build-a-real-world-docker-cicd-pipeline#add-docker-hub-as-an-external-feed).
- Add a project and define a deployment process (or add a [runbook](/docs/runbooks)).
- Set the **Execution Location** for your step to **Run on a worker**.
- In **Container Image** select **Run inside a container on a worker** and then **Pull container from a registry**.
- Choose the previously added container registry.
- Enter the name of the image (execution container) you want your step to run in (e.g. `octopusdeploy/worker-tools:ubuntu.22.04`).
- Click **Save**.
- Click **Create release & deploy**.
:::figure
![](/docs/img/projects/steps/execution-containers-for-workers/images/container-selector-image.png)
:::

## First deployment on a Docker container

:::div{.hint}
Pre-pulling your chosen image will save you time during deployments.
:::

When you choose to run one or more of your deployment steps in a container, your deployment process will `docker pull` the image you provide at the start of each deployment during package acquisition. For your first deployment this may take a while since your Docker image won't be cached. You can pre-pull the desired Docker image on your worker before your first deployment to avoid any delays.

## Which Docker images can I use? \{#which-image}

:::div{.hint}
The easiest way to get started is to use the [worker-tools](#worker-tools-images) images built by Octopus Deploy.
:::

When a step is configured to use an execution container, you can choose from:

- One of the [worker-tools](#worker-tools-images) images built by Octopus Deploy.
- A [custom Docker image](#custom-docker-images) you build.

If you run into issues with the provided [worker-tools](#worker-tools-images) images or they don't meet your needs, you will have to create a custom image. Take a look at the [custom Docker image](#custom-docker-images) section or the [blog post on extending execution containers](https://octopus.com/blog/extending-octopus-execution-container) to learn how to create a custom image.

### The octopusdeploy/worker-tools Docker images \{#worker-tools-images}

For convenience, we provide some images on Docker Hub [octopusdeploy/worker-tools](https://hub.docker.com/r/octopusdeploy/worker-tools) which include common tools used in deployments.

:::div{.hint}
We recommend using our `worker-tools` image as a starting point for your own custom image to run on a worker.
:::

The canonical source for what is contained in the `octopusdeploy/worker-tools` images is the `Dockerfile`s in the [GitHub repo](https://github.com/OctopusDeploy/WorkerTools).
For example:

- The [Ubuntu 22.04 Dockerfile](https://github.com/OctopusDeploy/WorkerTools/blob/master/ubuntu.22.04/Dockerfile)
- The [Windows 2022 Dockerfile](https://github.com/OctopusDeploy/WorkerTools/blob/master/windows.ltsc2022/Dockerfile)

Some of the tools included are:

- Octopus Deploy CLI and .NET client library
- .NET Core
- Java (JDK)
- NodeJS
- Azure CLI
- AWS CLI
- Google Cloud CLI
- kubectl
- Helm
- Terraform
- Python

### Custom Docker images \{#custom-docker-images}

It can be beneficial to build your own custom Docker image when using execution containers, particularly when you wish the image size to be as small as possible.

#### Supported Windows base images

For Windows images, we recommend using a base image no older than the `ltsc2022` image. Octopus does not support images that are older than `ltsc2022`, and while containers based on these images can still run steps, you may run into unexpected issues.

:::div{.hint}
If your containers are based on an earlier image of Windows, we strongly recommend upgrading your workers to Windows 2022 and rebasing your Docker containers to use a 2022 base image.
:::

#### Supported Linux distributions

It's important to understand there are some limits to which Linux Docker images can be used as a container image. The Docker image must be based on a Linux distribution using the GNU C library, or **glibc**. This includes operating systems like Ubuntu, Debian, and Fedora.

:::div{.warning}
Linux distributions built on **musl**, most notably Alpine, do not support Calamari, and cannot be used as a container image. This is due to Calamari currently only being compiled against **glibc** and not **musl**.
:::

You can usually find the base operating system of a Linux Docker image by running the following command:

```bash
docker run --entrypoint='' [image name] /bin/cat /etc/os-release
```

For example, for the `octopusdeploy/worker-tools:5.0.0-ubuntu.22.04` image, you'd run:

```bash
docker run --entrypoint='' octopusdeploy/worker-tools:5.0.0-ubuntu.22.04 /bin/cat /etc/os-release
```

#### Required OS dependencies

The operating system must also include a number of dependencies required to support .NET Core applications. When a step is configured to use an execution container, [Calamari](/docs/octopus-rest-api/calamari) (the Octopus deployment utility) is executed inside the specified container. Since Calamari is a .NET Core self-contained executable, any custom Docker image needs to include the dependencies required to execute a .NET self-contained executable. The Microsoft [.NET Core documentation](https://docs.microsoft.com/en-us/dotnet/core/install/linux) lists the dependencies required for a .NET Core application with popular Linux distributions.

:::div{.hint}
If a third-party container is missing a library, it is usually the **libicu** library. The error **Couldn't find a valid ICU package installed on the system** indicates the ICU library is missing.
:::

If your chosen Docker image does not have these prerequisites, the easiest solution is to create a custom Docker image based on the image you wish to use, install the required libraries, push the image to a repository like DockerHub, and select your custom image as the container image. Microsoft also provides [base images that include these dependencies](https://hub.docker.com/r/microsoft/dotnet-runtime-deps).
#### Custom Docker image example

The following example is a basic Dockerfile that (when built) can run Calamari and PowerShell scripts:

```docker
FROM ubuntu:20.04

ARG POWERSHELL_VERSION=7.1.3*

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    curl \
    unzip \
    apt-transport-https \
    software-properties-common && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# PowerShell Core
# https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.1#ubuntu-2004
RUN curl -LO -k "https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb" && \
    dpkg -i packages-microsoft-prod.deb && \
    apt-get update && \
    add-apt-repository universe && \
    apt-get install -y --no-install-recommends \
    powershell=${POWERSHELL_VERSION} && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    rm -f packages-microsoft-prod.deb
```

To learn more about creating a custom Docker image, we have a [detailed blog post](https://octopus.com/blog/extending-octopus-execution-container) that describes how to get started and the minimum set of dependencies you would need.

#### Inline execution containers

:::div{.warning}
Support for inline execution containers requires Octopus version 2024.1
:::

To improve the experience of using custom Docker images on steps in Octopus, we're expanding our execution containers feature to include options to provide the Dockerfile from a [Git URL](#docker-image-from-git-url) or an [Inline Dockerfile](#docker-image-from-inline-dockerfile) that will be built and used as the container that the step is executed on. This will make it easier and quicker to iterate on building a Docker image best suited to different scenarios.

:::div{.hint}
For your first deployment this may take a while since we're building the Docker container image for the first time; subsequent runs will be quicker as the image is cached (as long as it runs on the same worker).
:::

##### Docker image from a Git URL \{#docker-image-from-git-url}

:::figure
![](/docs/img/projects/steps/execution-containers-for-workers/images/container-selector-git-url.png)
:::

The **Docker image from a Git URL** option allows you to use a Dockerfile stored in a Git repository to build the image that the step in Octopus will run on. See the [Docker documentation](https://docs.docker.com/engine/reference/commandline/image_build/#git-repositories) for information about what context configuration is accepted in the Git URL.

##### Docker image from an inline Dockerfile \{#docker-image-from-inline-dockerfile}

:::figure
![](/docs/img/projects/steps/execution-containers-for-workers/images/container-selector-docker-file.png)
:::

The **Docker image from an inline Dockerfile** option allows you to write the container specification used to build the image directly on the step in your deployment process. This option allows you to iterate quickly on building a container specification that will work for the step.

#### Tool paths

Because Calamari is executed directly inside the specified container, execution containers on workers are run in **non-interactive** mode. Since the execution container is not running interactively, it does not process your `.bashrc` file. If the tool you have installed relies on `.bashrc` to modify the path (e.g. `nvm`) to include a non-standard folder, you will need to manually define the additional directories in the `$PATH` variable in your Dockerfile using the `ENV PATH` directive.
For example, if you install Node.js via nvm, you will need to amend the `$PATH` variable in your image with the location Node.js is installed to, using the following directive in your Dockerfile:

```bash
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin:$PATH"
```

#### CMD and ENTRYPOINT directives

Docker images used with the execution containers feature require that no `CMD` or `ENTRYPOINT` directives be defined in your Dockerfile. Including one of these directives will result in the step failing.

## Collecting artifacts with execution containers

You can collect Octopus [artifacts](/docs/projects/deployment-process/artifacts) from steps used with the execution containers feature. The source file for the artifact must be saved and collected from the **fully qualified path** of one of the directories (or subdirectories) mapped into the execution container as a volume. The recommended volume to use is the temporary directory created within the `/Work` workspace, for example, `/etc/octopus/Tentacle/Work/20221128114036-119427-56`. The path of this directory can be found in the `PWD` environment variable. Once the step has been executed, the directory and its contents will be removed.

The following script would collect an artifact called `foo.txt` from the temporary working directory using the `$PWD` environment variable:
Bash

```bash
echo "Hello" > $PWD/foo.txt
new_octopusartifact $PWD/foo.txt
```
PowerShell

```powershell
"Hello" > "$($PWD)/foo.txt"
New-OctopusArtifact "$($PWD)/foo.txt"
```
## Supplying arguments to the container runtime

Specifying a variable named `Octopus.Action.Container.Options` allows you to configure the options supplied to the container runtime (i.e. Docker). For example:

| Name | Value |
| --- | --- |
| Octopus.Action.Container.Options | `--security-opt label:level:TopSecret` |

For the list of available options, see the [Docker run options](https://docs.docker.com/engine/reference/commandline/run/#options). The `Octopus.Action.Container.Options` variable can be [scoped](/docs/projects/variables/getting-started/#scoping-variables). Support for `Octopus.Action.Container.Options` was introduced in Octopus version **2023.2.13224**.

# Output variables

Source: https://octopus.com/docs/projects/variables/output-variables.md

As you work with [variables](/docs/projects/variables) in Octopus, there will be times when you want to use dynamic variables, for example, when the value of a variable is the result of a calculation, or the output from running a command. For these scenarios, Octopus supports **output variables**.

Output variables can be set anywhere that Octopus runs scripts - for example, the [Script Console](/docs/administration/managing-infrastructure/script-console/), or [package scripts and script steps](/docs/deployments/custom-scripts) in a deployment. *See below for examples of setting output variables in each of the different scripting languages supported by Octopus.*

For example, you might have a standalone [PowerShell script step](/docs/deployments/custom-scripts) called **StepA** that does something like this:
PowerShell ```powershell Set-OctopusVariable -name "TestResult" -value "Passed" ```
C# :::div{.warning} On 30 September, 2022 it was [announced](https://github.com/scriptcs/scriptcs/issues/1323) that ScriptCS would no longer be maintained. As of `2024.4` ScriptCS is being deprecated in Octopus. This has been replaced with [dotnet-script](https://github.com/dotnet-script/dotnet-script). See our post on migrating from [scriptcs to dotnet-script](https://g.octopushq.com/ScriptCSDeprecation). All support for ScriptCS will be removed in `2025.3`. ::: ```csharp SetVariable("TestResult", "Passed"); ```
Bash ```bash set_octopusvariable "TestResult" "Passed" ```
F# ```fsharp Octopus.setVariable "TestResult" "Passed" ```
Python3 ```python set_octopusvariable("TestResult", "Passed") ```
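Under the hood, each of these functions communicates the variable back to the Octopus Server by writing a specially formatted service message to standard output, with the name and value Base64-encoded. A minimal Bash sketch of the idea follows; the `set_octopusvariable_sketch` function is hypothetical, and the exact message format is a Calamari implementation detail that may change between versions:

```shell
# Sketch only: the bootstrapped set_octopusvariable conceptually emits a
# service message like this one, which the server parses from the task log.
set_octopusvariable_sketch() {
  name_b64=$(printf '%s' "$1" | base64)
  value_b64=$(printf '%s' "$2" | base64)
  echo "##octopus[setVariable name='${name_b64}' value='${value_b64}']"
}

set_octopusvariable_sketch "TestResult" "Passed"
```

Because the mechanism is plain standard output, it works identically across all the scripting languages shown above.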
You can then use the variable from other steps, either in [variable binding syntax](/docs/projects/variables/variable-substitutions): ```powershell #{Octopus.Action[StepA].Output.TestResult} ``` Or in other scripts:
PowerShell ```powershell $TestResult = $OctopusParameters["Octopus.Action[StepA].Output.TestResult"] ```
C# ```csharp var testResult = OctopusParameters["Octopus.Action[StepA].Output.TestResult"]; ```
Bash ```bash testResult=$(get_octopusvariable "Octopus.Action[StepA].Output.TestResult") ```
F# ```fsharp let testResult = Octopus.findVariable "Octopus.Action[StepA].Output.TestResult" ```
Python3 ```python testResult = get_octopusvariable("Octopus.Action[StepA].Output.TestResult") ```
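All of these read functions resolve the fully qualified name against the variable dictionary Octopus supplies to the step. The following self-contained Bash sketch simulates that lookup with a hard-coded stub; the real `get_octopusvariable` is provided by Calamari's bootstrapping and reads the actual deployment variables:

```shell
# Hypothetical stand-in for the bootstrapped get_octopusvariable function,
# backed by a hard-coded dictionary for illustration only.
get_octopusvariable() {
  case "$1" in
    "Octopus.Action[StepA].Output.TestResult") echo "Passed" ;;
    *) echo "" ;;
  esac
}

# A later step reads the value StepA published.
testResult=$(get_octopusvariable "Octopus.Action[StepA].Output.TestResult")
echo "StepA reported: ${testResult}"
```

The key point is that the full name, including the `Octopus.Action[StepA].Output.` prefix, is the dictionary key; a misspelled step name simply resolves to nothing.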
## Sensitive output variables Output variables can also be created as sensitive values, so they are treated like other sensitive variables and masked in task logs:
PowerShell ```powershell Set-OctopusVariable -name "Password" -value "correct horse battery staple" -sensitive ```
C# ```csharp SetVariable("Password", "correct horse battery staple", true); ```
Bash ```bash set_octopusvariable "Password" "correct horse battery staple" -sensitive ```
F# ```fsharp Octopus.setSensitiveVariable "Password" "correct horse battery staple" ```
Python3 ```python set_octopusvariable("Password", "correct horse battery staple", True) ```
## System output variables \{#system-output-variables} After a step runs, Octopus captures the output variables, and keeps them for use in subsequent steps. In addition to variables that you create yourself using `Set-OctopusVariable`, Octopus also makes a number of built-in variables available. Here are some examples of commonly used built-in output variables: - For NuGet package steps: - `Octopus.Action[StepName].Output.Package.InstallationDirectoryPath` - the path that the package was deployed to - For manual intervention steps: - `Octopus.Action[StepName].Output.Manual.Notes` - notes entered in response to the manual step - `Octopus.Action[StepName].Output.Manual.ResponsibleUser.Id` - `Octopus.Action[StepName].Output.Manual.ResponsibleUser.Username` - `Octopus.Action[StepName].Output.Manual.ResponsibleUser.DisplayName` - `Octopus.Action[StepName].Output.Manual.ResponsibleUser.EmailAddress` ## Output from multiple deployment targets \{#multiple-target-output} Output variables become more complex when multiple deployment targets are involved, but they can still be used. Imagine that an output variable was set by a script which ran on two deployment targets (Web01 and Web02) in parallel, and that both set it to a different value. Which value should be used in subsequent steps? 
In this scenario, the following output variables would be captured: | Name | Value | Scope | | ---------------------------------------- | -------- | -------------- | | `Octopus.Action[StepA].Output[Web01].TestResult` | `Passed` | | | `Octopus.Action[StepA].Output[Web02].TestResult` | `Failed` | | | `Octopus.Action[StepA].Output.TestResult` | `Passed` | Deployment Target: Web01 | | `Octopus.Action[StepA].Output.TestResult` | `Failed` | Deployment Target: Web02 | | `Octopus.Action[StepA].Output.TestResult` | `Passed` | | | `Octopus.Action[StepA].Output.TestResult` | `Failed` | | Note that for each output variable/deployment target combination: - A variable is created with the deployment target name contained in the variable name: this allows you to reference output variables set by one deployment target from another deployment target. - A variable is created that is [scoped](/docs/projects/variables/getting-started/#scoping-variables) to the deployment target. This way Web01 will always get the value Web01 set, and Web02 will get the value Web02 set. - A variable is created with no scope, and no differentiator in the name. When referencing this value, the result will be non-deterministic, but it allows scripts to use the value without knowing which deployment target set it. For some practical examples of using output variables, and how scoping rules are applied, see the following blog posts: - [Fun with output variables](https://octopus.com/blog/fun-with-output-variables) - [Changing website ports using output variables](https://octopus.com/blog/changing-website-port-on-each-deployment) ## Output from a Deploy a Release step \{#deploy-release-output} Output variables from deployments triggered by a _Deploy a Release_ step are captured and exposed as output variables on the _Deploy a Release_ step. To get the value of an output variable from a _Deploy a Release_ step, use the `Output.Deployment` variable on the _Deploy a Release_ step. 
For example, if your _Deploy a Release_ step is named "Deploy Web Project", the target step in the child project is named "Update IP Address", and the variable name is "IPAddress", you would use the following variable to access it in the parent project: `Octopus.Action[Deploy Web Project].Output.Deployment[Update IP Address].IPAddress`. ## Setting output variables using scripts \{#output-variables-in-scripts} You can set output variables using any of the scripting languages supported by Octopus. In each case we make special functions available to your scripts by bootstrapping them with a template defined in the [open-source Calamari project](https://github.com/OctopusDeploy/Calamari). ### PowerShell \{#powershell} [PowerShell Bootstrapping](https://github.com/OctopusDeploy/Calamari/tree/main/source/Calamari.Common/Features/Scripting/WindowsPowerShell/) From a PowerShell script, you can use the PowerShell CmdLet `Set-OctopusVariable` to set the name and value of an output variable. The CmdLet takes two parameters: - `[string]$name` - the name you want to give the output variable following the same naming conventions used for input [variables](/docs/projects/variables). - `[string]$value` - the value you want to give the output variable. For example: **PowerShell** ```powershell Set-OctopusVariable -name "TestResult" -value "Passed" ``` ### C# \{#csharp} :::div{.warning} On 30 September, 2022 it was [announced](https://github.com/scriptcs/scriptcs/issues/1323) that ScriptCS would no longer be maintained. As of `2024.4` ScriptCS is being deprecated in Octopus. This has been replaced with [dotnet-script](https://github.com/dotnet-script/dotnet-script). See our post on migrating from [scriptcs to dotnet-script](https://g.octopushq.com/ScriptCSDeprecation). All support for ScriptCS will be removed in `2025.3`. 
::: [Dotnet Script Bootstrapping](https://github.com/OctopusDeploy/Calamari/tree/main/source/Calamari.Common/Features/Scripting/DotnetScript) From a C# script, you can use the `public static void SetVariable(string name, string value)` method to set the name and value of an output variable. **C#** ```csharp SetVariable("TestResult", "Passed"); ``` ### Bash \{#bash} [Bash Bootstrapping](https://github.com/OctopusDeploy/Calamari/tree/main/source/Calamari.Common/Features/Scripting/Bash) In a Bash script you can use the `set_octopusvariable` function to set the name and value of an output variable. This function takes two positional parameters with the same purpose as the PowerShell CmdLet. **Bash** ```bash set_octopusvariable "TestResult" "Passed" ``` ### F# \{#fsharp} [FSharp Bootstrapping](https://github.com/OctopusDeploy/Calamari/tree/main/source/Calamari.Common/Features/Scripting/FSharp) From an F# script, you can use the `setVariable : name:string -> value:string -> unit` function to set the name and value of an output variable. The function takes two parameters with the same purpose as the PowerShell CmdLet. **F#** ```fsharp Octopus.setVariable "TestResult" "Passed" ``` ### Python3 \{#python3} From a Python3 script, you can use the `set_octopusvariable` function to set the name and value of an output variable. **Python3** ```python set_octopusvariable("TestResult", "Passed") ``` ## Best practice If you have multiple steps which depend on an output variable created by a previous step in your deployment process, it can be cumbersome to use the full variable name everywhere, e.g. `Octopus.Action[StepA].Output.TestResult`. A useful pattern is to create a project variable which evaluates to the output variable, e.g. | Variable name | Value | | ---------------------------------------- | -------- | | `TestResult` | `#{Octopus.Action[StepA].Output.TestResult}` | This allows using `TestResult` as the variable name in dependent steps, rather than the full output variable name. In the case of the step name changing (e.g.
`StepA` -> `StepX`), this also reduces the number of places where the step name in the output variable expression needs to be changed. ## Learn more - [Variable blog posts](https://octopus.com/blog/tag/variables/1) # Editing a project with version control enabled Source: https://octopus.com/docs/projects/version-control/editing-a-project-with-version-control-enabled.md Once an Octopus Project is configured to be version-controlled, your experience making changes to a project will change. With the configuration as code feature, you can edit the resources either via the Octopus Deploy UI or with your favorite file editing tool. This page will walk through what to expect. ## Editing via the Octopus UI Editing via the Octopus Deploy UI works the same whether you are saving to a git repository or to SQL Server. You can add steps, update processes, and remove steps, just like before. When you enable version control on a project, you get additional functionality. ### Branch switcher The first difference is the addition of a branch-switcher. When editing the deployment process via the Octopus UI, the branch is selected in the branch-switcher at the top of the page. :::figure ![Branch-switcher user-interface](/docs/img/projects/version-control/branch-switcher-ui.png) ::: You can only switch branches on a version-controlled page (Process, Variables, etc.). If you have set up Protected branches in the Version Control Settings, you will see a padlock 🔒 next to the relevant branch in the Branch switcher. :::figure ![branch-switcher protected-branches user-interface](/docs/img/projects/version-control/branch-switcher-protected-branches.png) ::: Each branch can have a different deployment process. For example, if you decide to move from running web applications on VMs to PaaS, you could create a branch to hold all your code changes and your deployment process changes. You can make the necessary updates to your deployment process to deploy to a PaaS target.
The deployment process currently being used to deploy to **Production** can be left alone until you are ready to merge in both the code changes and the deployment process changes. ### Commits Before enabling version control on the project, clicking save updated a record in the SQL Server database. That record would be overwritten with each change. Once version control is enabled, that is no longer the case. You'll be generating commits each time you "save." As such, the save experience has been updated. The **Save** button has been replaced with **Commit**, and clicking on that will allow you to enter a commit message before saving. Next to the **Commit** button is a quick save, useful when you make a minor change. Below the **Commit** button, you can see the branch you are committing to. :::figure ![committing a change to version control](/docs/img/projects/version-control/commit-process.png) ::: ### Commits to protected branches If you are making changes on a protected branch, the quick save option will be disabled. When you click the **Commit** button, you will always be asked to Commit to a new branch. The option to commit to this branch will be disabled. :::figure ![committing a change on a protected branch](/docs/img/projects/version-control/commit-process-protected.png) ::: ### Viewing and editing OCL Enabling version control also enables you to edit the OCL (Octopus Configuration Language) file directly. We suggest using your favorite text editor or IDE to make changes, commit, and push them just as you would any other code change. :::div{.hint} We have a [Visual Studio Code Plug-in](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.vscode-octopusdeploy) that will add syntax highlighting, OCL snippets, and an integrated tree view for navigating nodes in an HCL file. ::: Any changes made to an OCL file via a text editor will not be reflected in the Octopus UI right away. The process for Octopus to reflect any changes is: 1. Commit the file. 
1. Push any changes to the remote git repo. Octopus will periodically fetch from the remote, so you might have to wait a short time for the changes to appear. To see the changes immediately, simply reload the page. :::div{.hint} The Octopus Deploy Web Portal will only add non-default properties to the OCL files. For example, if a step isn't scoped to run for specific environments, that property will not show up when you view the deployment process there. ::: ### OCL versus Octopus Terraform Provider While OCL is similar to HCL, it is not the same. In addition, there is not a 1:1 match between the resources generated for OCL and the resources for the [Octopus Terraform Provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest/docs). That means you cannot copy resources between OCL files and TF files. ## Version control features Storing the deployment process in the same repository as your source code has many benefits. But don't forget to take advantage of all the version control features, including: - Branching a deployment process when you need to make changes. - Leveraging Pull Requests for approvals to any changes. - Reverting changes if something doesn't work right. If you make any changes outside of the Octopus UI (merging a branch, reverting a change, etc.), you'll need to either wait for Octopus to fetch from the remote repo, or reload the page for those changes to be reflected in the Octopus Web Portal. # GitHub integration Source: https://octopus.com/docs/projects/version-control/github.md The Octopus Deploy GitHub App provides seamless integration between Octopus Deploy and GitHub. :::div{.hint} The Octopus Deploy GitHub App is only supported on Octopus Cloud instances. ::: To get started, go to the GitHub Connections page in the Deploy -> Manage section of your Octopus cloud instance, and follow the prompts.
## GitHub App Connections GitHub Connections is the recommended way to connect Octopus to your GitHub accounts (organizations or users). It provides a seamless and secure connection via the Octopus GitHub App, without using personal access tokens. ### Connecting a GitHub account Before you can use a GitHub account in Octopus Deploy, you need to connect the account to the Space. :::figure ![Screenshot of Octopus Deploy GitHub Connections screen showing OctopusPetShop organization connected and OctopusDeploy organization not connected](/docs/img/api-and-integration/github/github-connections-screen.png) ::: To connect a new account, select any currently disconnected account to go to the new connection screen where you can select the repositories and complete the connection. You can only connect each GitHub account once per Space. Once connected, the account will show at the top of the list with a Connected label. If you don't see an account that you're expecting in this list, the app probably hasn't been installed (Octopus cannot see accounts that don't have the app installed). To install the Octopus GitHub App in a new account, select the link at the bottom of the screen to go to GitHub and complete the installation process. ### Editing GitHub Connections When you first open the GitHub connection page, you will be in view mode. This will show the connection details and the currently connected repositories. To edit the connection, click the edit button at the top of the screen. This will put the connection in edit mode, and load the GitHub repositories that you are able to connect. You will not be able to save the connection unless you have at least one repository selected. To remove all repositories, disconnect the account completely using the Disconnect button in the overflow menu.
:::figure ![Screenshot of Octopus Deploy GitHub Connections screen showing OctopusPetShop connection with overflow menu expanded showing disconnect button](/docs/img/api-and-integration/github/github-connection-disconnect.png) ::: ### Selecting repositories on the GitHub Connection Each GitHub Connection defines its own set of repositories (this is on top of the list of repositories configured on the installation in GitHub). GitHub accounts can only have a single GitHub App installation, so this installation is shared by all Octopus instances connected to that account. By requiring that repositories also be set for each connection, you can fine-tune the GitHub resources that each connection in each Space can access. If you ever add more repositories to the installation in GitHub, you can be confident that existing connections cannot access them until you explicitly add them to those connections. This does add an extra step every time you want to add a new repository, but we believe this is worth it for the extra security this provides. #### If you can't see a repository Octopus can only see repositories that are available to the app installation and the current user. If you can't see a repository that you expect to see on this screen, it may not be accessible to either you or the installation. To configure more repositories on a connection, follow the link at the bottom of the repository selection screen to configure more repositories on GitHub. :::figure ![Screenshot of Octopus Deploy GitHub Connections screen for OctopusPetShow in edit mode showing PetShop and ProductAPI repositories selected and UserAPI deselected. Configure repository access in the Octopus Deploy app on GitHub link is shown at the bottom of the page](/docs/img/api-and-integration/github/github-connection-edit.png) ::: #### Only repository administrators can connect repositories To connect a repository, you must be an administrator of the repository on GitHub.
If you're not an administrator (but can view the repository), you will still see the repository in the list, but will not be able to select it. ### Recovering a GitHub App connection Octopus can help you recover your GitHub App connection automatically if you lose the connection in the following cases: - The GitHub App was disconnected - The registration is broken (for example, your instance URL was changed) - The app was uninstalled on GitHub - The app was suspended on GitHub Simply follow the on-screen prompts to reconnect the account and select the same repositories as before. ## Using GitHub App Connections You can currently use GitHub App Connections to connect to Configuration as Code projects. This removes the need for using Personal Access Tokens to connect to GitHub repositories, and allows users to commit as their GitHub users (rather than using a shared account). You can also define GitHub Connections in [Platform Hub](/docs/platform-hub). GitHub Connections defined in Platform Hub can only be used to configure Platform Hub's version control settings and can't be used in spaces. ## Requested Permissions There are specific GitHub permissions that the Octopus GitHub App requests in order to perform its tasks. - **Repository Permissions** - **Contents: Read and Write** Allows Octopus to access the files in the approved repositories for usage such as [Config As Code](https://octopus.com/docs/projects/version-control) projects, [Git Resources in deployments](https://octopus.com/blog/git-resources-in-deployments), or during some steps such as [Argo CD](https://octopus.com/docs/argo-cd). - **Metadata: Read-only** Default permission required by all GitHub Apps in order to load basic repository information. - **Pull Requests: Read and Write** Used by Octopus when executing some steps, for example supporting pull requests for Argo CD deployments.
- **Account Permissions** - **Email addresses (Read-only):** Required so that Octopus can attempt to obtain the correct email address to use when committing the author information to a commit. Whenever possible, Octopus uses a token scoped down to minimal permissions in accordance with the principle of least privilege. ## GitHub Allow List The Octopus Deploy GitHub App can be used with GitHub's allow list feature. To include the app in your allow list, manually add the IP address `172.182.208.68`. Information about adding IP addresses to GitHub's allow list can be found in [GitHub's Documentation](https://docs.github.com/en/enterprise-cloud@latest/organizations/keeping-your-organization-secure/managing-security-settings-for-your-organization/managing-allowed-ip-addresses-for-your-organization#adding-an-allowed-ip-address). :::div{.hint} **Note:** In order to use Octopus Deploy with GitHub allow lists, the IP address of your Octopus Deploy instance and any workers that require GitHub access will also need to be added. If you are using an Octopus Cloud instance of Octopus Deploy, you can obtain your [static IP](/docs/octopus-cloud/static-ip) via the Control Center. ::: Due to a limitation in the way that GitHub supports inheritance of IP addresses when performing actions on behalf of a user, the IP address for the GitHub App needs to be configured manually and cannot be inherited from the app settings. For more information, please refer to [GitHub's Documentation](https://docs.github.com/en/enterprise-cloud@latest/apps/maintaining-github-apps/managing-allowed-ip-addresses-for-a-github-app#about-ip-address-allow-lists-for-github-apps). ## More information on installing and authorizing the Octopus GitHub App You install the Octopus GitHub App on an account (organization or user) to give it access to the repositories or other content within that account. Authorizing gives the Octopus GitHub App permission to act on your behalf in any account that has the app installed.
Installing and authorizing are both GitHub concepts. If you want to find out more about what installing and authorizing a GitHub App involves, and how to manage these installations and authorizations, refer to the GitHub documentation: - [GitHub Apps documentation](https://docs.github.com/en/apps/using-github-apps/about-using-github-apps) - [Installing GitHub apps documentation](https://docs.github.com/en/apps/using-github-apps/installing-a-github-app-from-a-third-party) - [Authorizing GitHub apps documentation](https://docs.github.com/en/apps/using-github-apps/authorizing-github-apps) ## Older versions - Prior to version 2024.3.12703, when the new UI navigation was introduced, the GitHub Connections page was located in the Library section of Octopus. # GitHub issue tracking integration Source: https://octopus.com/docs/releases/issue-tracking/github.md Octopus integrates with GitHub issues. The integration includes the ability to: - Automatically add links to GitHub issues from releases and deployments in Octopus. - Retrieve release notes from GitHub for automatic release note generation. ## How GitHub integration works :::figure ![Octopus GitHub integration - how it works diagram](/docs/img/releases/issue-tracking/images/octo-github-how-it-works.png) ::: 1. When you commit code, add a commit message containing one or more [GitHub issue references](#commit-messages). 2. The Octopus Deploy [plugin](/docs/packaging-applications/build-servers) for your build server [pushes the commits to Octopus](/docs/packaging-applications/build-servers/build-information/#passing-build-information-to-octopus). These are associated with a package ID and version (the package can be in the built-in Octopus repository or an external repository). 3. The GitHub issue-tracker extension in Octopus parses the commit messages and recognizes the issue references. 4. When creating the release which contains the package version, the issues are associated with the release.
These are available for use in [release notes](/docs/packaging-applications/build-servers/build-information/#build-info-in-release-notes), and will be visible on [deployments](/docs/releases/deployment-changes). :::figure ![Octopus release with GitHub issues](/docs/img/releases/issue-tracking/images/octo-github-release-details.png) ::: ![Octopus deployment with generated release notes](/docs/img/releases/issue-tracking/images/octo-github-release-notes.png) ### Availability {#availability} The ability to push the build information to Octopus, which is required for GitHub integration, is currently only available in the official Octopus plugins: - [JetBrains TeamCity](https://plugins.jetbrains.com/plugin/9038-octopus-deploy-integration) - [Atlassian Bamboo](https://marketplace.atlassian.com/apps/1217235/octopus-deploy-bamboo-add-on?hosting=server&tab=overview) - [Azure DevOps](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) - [Jenkins Octopus Deploy Plugin](https://plugins.jenkins.io/octopusdeploy/) - [GitHub Actions](https://github.com/marketplace/actions/push-build-information-to-octopus-deploy) ## Configuring GitHub integration The following steps explain how to integrate Octopus with GitHub issues: 1. [Configure your build server to push build information to Octopus.](#configure-your-build-server) This is required to allow Octopus to know which issues are associated with a release. 2. [Configure the GitHub connection in Octopus Deploy.](#connect-octopus-to-github) ## Configure your build server to push build information to Octopus {#configure-your-build-server} To integrate with GitHub issues, Octopus needs to understand which issues are associated with a [release](/docs/releases). Octopus does this by inspecting commit messages associated with any packages contained in the release. To supply the commit messages: 1. 
Install one of our official [build server plugins](#availability) with support for our build information step. 2. Update your build process to add and configure the [Octopus Build Information step](/docs/packaging-applications/build-servers/build-information/#build-information-step). ## Connect Octopus to GitHub {#connect-octopus-to-github} 1. Configure the GitHub extension. In the Octopus Web Portal, navigate to **Configuration ➜ Settings ➜ GitHub Issue Tracker** and set the **GitHub Base URL**. This is required when resolving issue references that cross repository boundaries. For example, you might have a commit message with the following content: ``` Fix bug with X Resolves MyOrg/SomeOtherRepo#1234 ``` `MyOrg/SomeOtherRepo#1234` refers to issue \#1234 in the `SomeOtherRepo` repository belonging to the `MyOrg` organization. While not all that common, this syntax is used when issues are tracked in a separate repo from the commit that resolves the issue. Ensure the **Is Enabled** property is set as well. 2. Configure the Release Note Options (optional). - **Username/password**: Set these values to allow Octopus to connect to GitHub and retrieve issue (work item) details from _private repositories_ when viewing packages or creating releases. If these are not provided, just the raw work item references will be used as the work item link descriptions. If they are provided, the work item's title will be used as the work item link's description. The password should be a personal access token, rather than an actual password. You can create a token in your GitHub account settings in the 'Developer settings' area. - **Release Note Prefix**: If specified, Octopus will look for a comment that starts with the given prefix text and use whatever text appears after the prefix as the release note, which will be available in the [build information](/docs/packaging-applications/build-servers/build-information) as the issue's description.
If no comment is found with the prefix, Octopus will default back to using the title for that issue. For example, a prefix of `Release note:` can be used to identify a customer-friendly issue title vs. a technical feature or bug fix title. When configured, this integration will retrieve GitHub issue details and add them to your releases and deployments. ## Commit messages {#commit-messages} The parsing of the commit messages is based on the GitHub concepts around [closing issues using keywords](https://help.github.com/en/articles/closing-issues-using-keywords). The Octopus extension looks for these same keywords, and ignores issue references where the keywords are not also present. ## Learn more - [Build information](/docs/packaging-applications/build-servers/build-information). # Delete an AWS CloudFormation stack Source: https://octopus.com/docs/runbooks/runbook-examples/aws/delete-stack.md In addition to automating the creation of AWS resources, CloudFormation provides a simple method for deleting the resources it created as part of a stack. Using a runbook, you can automate tearing down environments when they're no longer needed. Octopus supports the deletion of an existing AWS CloudFormation stack through the **Delete an AWS CloudFormation stack** step. This step deletes a CloudFormation stack using AWS credentials managed by Octopus. The following instructions can be followed to configure the Delete an AWS CloudFormation stack step. ## Create the runbook 1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 1. Give the runbook a name and click **SAVE**. 1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 1. Choose the **Delete an AWS CloudFormation stack** step: :::figure ![Delete Stack](/docs/img/runbooks/runbook-examples/aws/images/deploy-cloudformation-step.png) ::: 5.
Fill in the parameters for the step: | Parameter | Description | Example | | ------------- | ------------- | ------------- | | Region | The region your resources are located in | us-west-1 | | CloudFormation stack name | Name of the existing stack | MySuperStack | ### AWS section Select the variable that references the **Amazon Web Services Account** under the **AWS Account** section, or choose to execute using a service role assigned to the EC2 instance. If you don't have an **AWS Account Variable** yet, check our [documentation on how to create one](/docs/projects/variables/aws-account-variables). :::figure ![AWS Account](/docs/img/runbooks/runbook-examples/aws/images/step-aws-account.png) ::: The supplied account can optionally be used to assume a different AWS service role. This can be used to run the AWS commands with a role that limits the services that can be affected. :::figure ![AWS Role](/docs/img/runbooks/runbook-examples/aws/images/step-aws-role.png) ::: :::div{.hint} If you select **Yes** to **Execute using the AWS service role for an EC2 instance**, you do not need an AWS account or account variable. Instead, the AWS service role for the EC2 instance executing the deployment will be used. See the [AWS documentation](https://oc.to/AwsDocsRolesTermsAndConcepts) for more information on service roles. ::: ### CloudFormation section Under the **CloudFormation** section, the AWS region and stack name need to be defined. :::div{.hint} If the stack does not exist, this step will succeed without attempting to delete it. ::: You can also optionally wait for the stack to be deleted completely before finishing the step by selecting the **Wait for completion** check-box. :::div{.hint} Unselecting the **Wait for completion** check-box will allow the step to complete once the CloudFormation deletion has been initiated. However, unselecting the option means that the step will not fail if the CloudFormation stack deletion fails.
:::

:::figure
![AWS Region](/docs/img/runbooks/runbook-examples/aws/images/step-aws-region.png)
:::

In a single step, you can delete all the resources created within a CloudFormation stack.

## Samples

We have a [Target - PostgreSQL](https://oc.to/TargetPostgreSQLSampleSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `Space Infrastructure` project.

# Manage DNS records in Azure

Source: https://octopus.com/docs/runbooks/runbook-examples/azure/manage-dns.md

[Azure DNS](https://docs.microsoft.com/en-us/azure/dns/dns-overview) is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. Using a runbook, Octopus makes it easy to manage DNS records hosted in Azure DNS.

The next sections show how you can create runbooks to manage DNS records:

- [Create DNS record runbook](#create-dns-record)
- [Delete DNS record runbook](#delete-dns-record)
- [Samples](#samples)

## Create DNS record runbook {#create-dns-record}

1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**.
1. Give the runbook a name and click **SAVE**.
1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**.
1. Add a **Run an Azure Script** step, and give the step a name.
1. Choose the **Execution Location** on which to run this step.
1. Choose whether to use the bundled **Azure Tools** or the ones pre-installed on the worker.
1. Choose the **Azure Account** to use:
1. In the **Azure** section, select the variable that references the **Account**. If you don't have an **Azure Account** yet, check our [documentation on how to create one](/docs/infrastructure/accounts/azure).
:::figure
![Azure Account variable](/docs/img/runbooks/runbook-examples/azure/images/azure-account-variable.png)
:::

:::div{.hint}
[Azure accounts](/docs/infrastructure/accounts/azure/) can be referenced in a project through a project [variable](/docs/projects/variables) of the type **Azure account**. The [Azure Run a Script](/docs/deployments/azure/running-azure-powershell) step will allow you to bind the account to an **Azure account** variable, using the [binding syntax](/docs/projects/variables/#use-variables-in-step-definitions). By using a variable for the account, you can have different accounts used across different environments or regions using [scoping](/docs/projects/variables/#use-variables-in-step-definitions).
:::

9. In the **Inline source code** section, add the following code as a **PowerShell** script:

:::div{.hint}
Note the use of Octopus project variables; you will need to create these for this example to work. You will also see the use of an output variable for the IP address created in a step not shown here.
::: ```powershell # Provide the IPv4 address to use for the A record $IPAddress = $OctopusParameters["Octopus.Action[Get IPv4 Address].Output.IPAddress"] # The resource group associated with the DNS Zone $resourceGroup = $OctopusParameters["Global.Azure.DNS.ResourceGroup"] # The DNS Zone name in Azure $zoneName = $OctopusParameters["Global.Azure.DNS.Samples.ZoneName"] # The DNS A record name you wish to create $recordSetName = $OctopusParameters["Project.Azure.DNS.Name"] Write-Host "Checking for existing DNS A-record for $recordSetName" $existingRecordSetName = (az network dns record-set a list --resource-group=$resourceGroup --zone-name $zoneName --query "[?name=='$recordSetName'].name | [0]" -o json) if( -not ([string]::IsNullOrEmpty($existingRecordSetName))) { Write-Highlight "Skipping DNS creation as $recordSetName already exists" } else { Write-Highlight "Creating DNS A record for $recordSetName pointing at $IPAddress" az network dns record-set a add-record --resource-group $resourceGroup --zone-name $zoneName --record-set-name $recordSetName --ipv4-address $IPAddress } ``` The script will check to see if the DNS A record specified in the `Project.Azure.DNS.Name` variable exists. If it does, it skips creation of the record. If not, it will create the record using the Azure CLI `add-record` command. Configure any other settings for the step and click **Save**, and in just a few steps, we've created a runbook to automate the creation of a DNS A record hosted in Azure. ## Delete DNS record runbook {#delete-dns-record} 1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 1. Give the runbook a name and click **SAVE**. 1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 1. Add a **Run an Azure Script** step, and give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. Choose whether to use the bundled **Azure Tools**, or the ones pre-installed on the worker. 1. 
Choose the **Azure Account** to use:
1. In the **Azure** section, select the variable that references the **Account**. If you don't have an **Azure Account Variable** yet, check our [documentation on how to create one](/docs/infrastructure/accounts/azure).
1. In the **Inline source code** section, add the following code as a **PowerShell** script:

:::div{.hint}
Note the use of Octopus project variables; you will need to create these for this example to work. You will also see the use of an output variable for the IP address obtained in a step not shown here.
:::

```powershell
# Provide the IPv4 address of the associated A record to be deleted.
$IPAddress = $OctopusParameters["Octopus.Action[Get IPv4 Address].Output.IPAddress"]

# The resource group associated with the DNS Zone
$resourceGroup = $OctopusParameters["Global.Azure.DNS.ResourceGroup"]

# The DNS Zone name in Azure
$zoneName = $OctopusParameters["Global.Azure.DNS.Samples.ZoneName"]

# The DNS A record search pattern to look for in Azure DNS to remove.
$dnsTag = $OctopusParameters["Project.Azure.DNS.Tag"]

Write-Host "Checking for existing DNS A-records for $dnsTag"

$allDnsRecords = (az network dns record-set a list --resource-group=$resourceGroup --zone-name $zoneName --query "[*].name" -o json | ConvertFrom-Json)
$matchingRecords = $allDnsRecords.Where{ $_ -like "$dnsTag*"}
$recordCount = $allDnsRecords.Count
$matchingCount = $matchingRecords.Count

Write-Highlight "Found $matchingCount (out of $recordCount) DNS A-records matching $dnsTag"

if($matchingCount -gt 0) {
    if(([string]::IsNullOrEmpty($IPAddress))) {
        Write-Warning "Skipping DNS deletion as IP Address is not set/found"
    }
    else {
        foreach($dnsRecord in $matchingRecords) {
            Write-Host "Deleting DNS Record: $dnsRecord pointing at $IPAddress"
            az network dns record-set a remove-record --resource-group $resourceGroup --zone-name $zoneName --record-set-name $dnsRecord --ipv4-address $IPAddress
        }
    }
}
else {
    Write-Highlight "Skipping DNS deletion as no records found"
}

Write-Host "Completed deletion of existing DNS for $dnsTag"
```

The script checks whether any DNS A records matching the pattern in the `Project.Azure.DNS.Tag` variable exist. If any are found, it deletes each matching record using the Azure CLI `remove-record` command. If none are found, it skips the deletion.

Configure any other settings for the step and click **Save**, and in just a few steps, we've created a runbook to automate the deletion of a DNS A record hosted in Azure.

## Samples

We have a [Pattern - Rolling](https://oc.to/PatternRollingSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at these examples in the `PetClinic Infrastructure` project:

- The Create DNS record step is located in the `Spin up GCP PetClinic Project Infrastructure` runbook.
- The Delete DNS record step is located in the `Destroy the GCP Kraken` runbook.
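The wildcard comparison in the delete script (`$_ -like "$dnsTag*"`) is a simple prefix match over the record names returned from the zone. As a minimal sketch, the same matching can be illustrated in plain shell (the record names and tag below are hypothetical):

```shell
# Hypothetical record names returned from the DNS zone
records="petclinic-web petclinic-api inventory-web"
tag="petclinic"

# Keep only the records whose names start with the tag,
# mirroring the `-like "$dnsTag*"` comparison in the PowerShell script
for r in $records; do
  case "$r" in
    "$tag"*) echo "$r" ;;
  esac
done
```

Here only `petclinic-web` and `petclinic-api` would match and be deleted; `inventory-web` would be left alone.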
# Database examples

Source: https://octopus.com/docs/runbooks/runbook-examples/databases.md

Octopus is great for automating your database deployments, but databases also need routine maintenance, and Runbooks can be used to automate this without creating new deployment releases.

## Learn more

Typical database tasks could include:

- [Backup SQL database](/docs/runbooks/runbook-examples/databases/backup-mssql-database)
- [Restore SQL database](/docs/runbooks/runbook-examples/databases/restore-mssql-database)
- [Restore SQL database to another environment](/docs/runbooks/runbook-examples/databases/restore-mssql-database-to-environment)
- [Create MySQL database](/docs/runbooks/runbook-examples/databases/create-mysql-database)
- [Create PaaS MySQL database server](/docs/runbooks/runbook-examples/databases/create-mysql-paas-server)
- [Backup MySQL database](/docs/runbooks/runbook-examples/databases/backup-mysql-database)
- [Backup RDS SQL database to S3](/docs/runbooks/runbook-examples/databases/backup-rds-mssql-s3-database)
- [Restore RDS SQL database from S3](/docs/runbooks/runbook-examples/databases/restore-rds-mssql-s3-database)

# Restore SQL database to another environment

Source: https://octopus.com/docs/runbooks/runbook-examples/databases/restore-mssql-database-to-environment.md

To restore a SQL database with a runbook see [restore SQL database](/docs/runbooks/runbook-examples/databases/restore-mssql-database). This section shows you how to restore a database to a different environment, for instance restoring from production down to test.

Using a runbook, you can create a self-service method for developers to restore the production database to a lower level environment to test bugs, fixes, and even the deployment process itself. Using the runbook means developers don't need any extra permissions on the database server itself, eliminating the time normally spent filling out a support ticket or tracking down a DBA to perform the restore.

## Create the runbook

1.
To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**.
2. Give the runbook a name and click **SAVE**.
3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**.
4. Add a new step template from the community library called **SQL - Restore Database**.
5. Fill out all the parameters in the step. We recommend using [variables](/docs/projects/variables) rather than entering the values directly in the step parameters.

| Parameter | Description | Example |
| ------------- | ------------- | ------------- |
| Server | Name of the database server | SQLserver1 |
| Database | Name of the database to restore | MyDatabase |
| Backup Directory | Location where the backup file resides | `\\mybackupserver\backupfolder` |
| SQL login | Name of the SQL Account to use (leave blank for Integrated Authentication) | MySqlLogin |
| SQL password | Password for the SQL Account | MyPassword |
| Compression Option | Use compression for this backup | Enabled |
| Devices | The number of backup devices to use for the backup | 1 |
| Backup file suffix | Specify a suffix to add to the backup file names. If left blank, the current date, in the format given by the DateFormat parameter, is used | ProdRestore |
| Separator | Separator used between database name and suffix | _ |
| Date Format | Date format to use if backup is suffixed with a date stamp (e.g. yyyy-MM-dd) | yyyy-MM-dd |

6. Add a new step template from the community library called **SQL - Fix Orphaned User**. This is needed because the SID associated with the login for the database will be different and needs to be re-associated.
7. Fill out all the parameters in the step.
| Parameter | Description | Example | | ------------- | ------------- | ------------- | | SQL Server | Name of the server | SQLserver1 | | SQL Login | Name of the SQL Account to use (leave blank for Integrated Authentication) | MySqlLogin | | SQL Password | Password for the SQL Account | MyPassword | | Database Name | Name of the database for the account | MyDatabase | | SQL Login | Name of the account to be fixed | MyOrphanedAccount | After adding all required parameters, click **Save**, and you have a runbook to restore your SQL database to another environment and fix the orphaned user accounts. You can also add additional steps to add security to your runbooks, such as a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) step for business approvals. ## Samples We have a [Target - Windows](https://oc.to/TargetWindowsSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `OctoFX` project. ## Learn more - [SQL Backup - Community Step template](https://library.octopus.com/step-templates/34b4fa10-329f-4c50-ab7c-d6b047264b83/actiontemplate-sql-backup-database) - [SQL Fix Orphaned User - Community Step Template](https://library.octopus.com/step-templates/e56e9b28-1cf2-4646-af70-93e31bcdb86b/actiontemplate-sql-fix-orphaned-user) # Restart server runbook in Octopus Source: https://octopus.com/docs/runbooks/runbook-examples/emergency/restart-server.md Restarting servers is typically the responsibility of the Operations team as restarting a machine requires elevated permissions. With a runbook, you can give developers the ability to restart a machine whenever they need. The auditing feature of Octopus Deploy records the identity of the person who initiated the run of the runbook so you can always see who did what and when. 
Unlike most other runbooks, this type of operation needs to run on a [worker](/docs/infrastructure/workers) machine instead of the machine that needs to be restarted. This ensures communication with the Tentacle isn't interrupted, as it would be when the machine restarts, which would result in a failed run of the runbook.

## Create the runbook

1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**.
2. Give the runbook a name and click **SAVE**.
3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**.
4. Add a **Run a script** step.
5. Change the Execution Location to `Run on a worker on behalf of each deployment target`.
6. Select the role from the `On Targets in Roles` drop-down list.
7. Select the radio button that corresponds with the language you're using and enter the inline source code:
PowerShell ```powershell Invoke-Command -ScriptBlock { Restart-Computer } -ComputerName #{Octopus.Machine.Name} ```
Bash ```bash ssh #{Octopus.Machine.Name} sudo reboot ```
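Octopus substitutes the `#{Octopus.Machine.Name}` [variable](/docs/projects/variables) in these scripts before they run on the worker. As a minimal sketch (using `web-01` as a hypothetical target name), the rendered Bash command looks like this:

```shell
# Hypothetical target name standing in for #{Octopus.Machine.Name}
machine_name="web-01"

# The command the worker would run after Octopus substitutes the variable
echo "ssh ${machine_name} sudo reboot"
```

The worker also needs connectivity to the target for the command itself to succeed: SSH access for the Bash version, and PowerShell remoting (WinRM) rights for the `Invoke-Command` version.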
# Install software with Chocolatey

Source: https://octopus.com/docs/runbooks/runbook-examples/routine/installing-software-chocolatey.md

[Chocolatey](https://chocolatey.org/) is a popular package manager for Windows. It allows you to automate the installation of software used by the machines where you deploy your software, for example, systems running [.NET](https://dotnet.microsoft.com/). With Runbooks, you can create a runbook as part of a routine operations task to install software via Chocolatey that is required for your [deployment targets](/docs/infrastructure/deployment-targets/tentacle/windows/) or [Workers](/docs/infrastructure/workers).

## Create the runbook

To create a runbook to install software with Chocolatey:

1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**.
1. Give the runbook a name and click **SAVE**.

Next, you need to ensure Chocolatey is installed.

### Install Chocolatey

Before you can use Chocolatey, it must be installed. To do this, you can use an existing step template from our [community library](/docs/projects/community-step-templates) called [Chocolatey - Ensure Installed](https://library.octopus.com/step-templates/c364b0a5-a0b7-48f8-a1a4-35e9f54a82d3/actiontemplate-chocolatey-ensure-installed).

To add this step to a runbook:

1. Add the community step template called **Chocolatey - Ensure Installed**, and give the step a name.
1. Choose the **Execution Location** on which to run this step.
1. *Optionally*, configure any [conditions](/docs/projects/steps/conditions) for the step, and click **Save**.

You can now use this step in conjunction with other runbook steps to install your software with Chocolatey.

## Common packages

There are plenty of different types of software you can install with Chocolatey. The next few sections outline some of the common ones you can install with a runbook using the [Run a script](/docs/deployments/custom-scripts/run-a-script-step) step.
### Test for installed Chocolatey package

A helper PowerShell function called `Test-ChocolateyPackageInstalled` is used by the examples to check if the package to be installed is already present on the target machine:

```ps
function Test-ChocolateyPackageInstalled {
    Param (
        [ValidateNotNullOrEmpty()]
        [string]
        $Package
    )

    Process {
        $packageInstalled = $false

        if (Test-Path -Path $env:ChocolateyInstall) {
            $packageInstalled = Test-Path -Path $env:ChocolateyInstall\lib\$Package
        }
        else {
            Write-Host "Can't find a chocolatey install directory..."
        }

        return $packageInstalled
    }
}
```

The script checks to see if the package specified in `$Package` is already installed by testing the path specified in `$env:ChocolateyInstall\lib\$Package`, and returning the result in `$packageInstalled`. The variable is initialized to `$false` so the function still returns a usable value when the Chocolatey install directory can't be found.

### .NET Framework

There are different versions of the .NET Framework you can install using Chocolatey. This example will use the `dotnetfx` package from Chocolatey. To add this to a runbook:

1. Click **Script**, and then select the **Run a Script** step.
1. Give the step a name.
1. Choose the **Execution Location** on which to run this step.
1. In the **Inline source code** section, add the following code as a **PowerShell** script:

```ps
# function Test-ChocolateyPackageInstalled omitted here.

$package = "dotnetfx"

if (Test-ChocolateyPackageInstalled -Package $package) {
    Write-Host "$package is already installed"
}
else {
    choco install $package -confirm
}
```

The script will run the `choco install` command if the `dotnetfx` package isn't already installed.

### .NET Core

To install .NET Core, we use the `dotnetcore` Chocolatey package. To add this to a runbook:

1. Click **Script**, and then select the **Run a Script** step.
1. Give the step a name.
1. Choose the **Execution Location** on which to run this step.
1. In the **Inline source code** section, add the following code as a **PowerShell** script:

```ps
# function Test-ChocolateyPackageInstalled omitted here.
$package = "dotnetcore"

if (Test-ChocolateyPackageInstalled -Package $package) {
    Write-Host "$package is already installed"
}
else {
    choco install $package -confirm
}
```

The script will run the `choco install` command if the `dotnetcore` package isn't already installed.

Configure any other settings for the step and click **Save**, and you have a runbook step to install .NET Core.

### Windows features

Chocolatey can also be used to install Windows features by leveraging [DISM (Deployment Image Servicing and Management)](https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/what-is-dism).

:::div{.hint}
To find out what features are available to install on the machine, you can run the command:

```
Dism /online /Get-Features
```
:::

The command to install DISM features through Chocolatey is:

```
choco install [Feature Name] /y /source windowsfeatures
```

Where `[Feature Name]` is the name of the Windows feature you wish to install.

To add this to a runbook step to install multiple features:

1. Click **Script**, and then select the **Run a Script** step.
1. Give the step a name.
1. Choose the **Execution Location** on which to run this step.
1. In the **Inline source code** section, add the following code as a **PowerShell** script:

```ps
# Octopus Variable with a comma separated list of features to install
$dismAppList = "#{Project.Chocolatey.DISM.RequiredFeatures}"

if ([string]::IsNullOrWhiteSpace($dismAppList) -eq $false){
    Write-Host "DISM Features Specified"

    $appsToInstall = $dismAppList -split "," | foreach { "$($_.Trim())" }

    foreach ($app in $appsToInstall) {
        Write-Host "Installing $app"
        & choco install $app /y /source windowsfeatures | Write-Output
    }
}
```

5. Add a project [variable](/docs/projects/variables) called `Project.Chocolatey.DISM.RequiredFeatures` and include the features you wish to install.
For example, the following variable will install three Windows features:

:::figure
![Chocolatey DISM variable](/docs/img/runbooks/runbook-examples/routine/images/install-chocolatey-dism-variable.png)
:::

The features which will be installed are:

- IIS-WindowsAuthentication
- NetFx4Extended-ASPNET45
- IIS-Security

Configure any other settings for the step and click **Save**, and you have a runbook step to install Windows features.

## Automating Tentacle installation with Chocolatey packages

The Tentacle agent can be automatically installed from the command-line. This is very useful if you're deploying to a large number of servers or you're provisioning servers automatically. In addition, it's also possible to automate the installation of Chocolatey packages at the same time as the Tentacle installation.

We have a number of bootstrap scripts available in our OctopusSamples [Infrastructure as Code (IaC)](https://github.com/OctopusSamples/IaC/) GitHub repository. The following scripts are available to support Chocolatey package installation as part of a Tentacle installation:

- [BootstrapTentacleAndRunChoco.ps1](https://github.com/OctopusSamples/IaC/blob/master/azure/bootstrap/BootstrapTentacleAndRunChoco.ps1) - This script installs a [Listening Tentacle](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended) and will install any Chocolatey packages specified in the `$chocolateyAppList` parameter.
- [BootstrapTentacleAndRunChocoPolling.ps1](https://github.com/OctopusSamples/IaC/blob/master/azure/bootstrap/BootstrapTentacleAndRunChocoPolling.ps1) - This script installs a [Polling Tentacle](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) and will install any Chocolatey packages specified in the `$chocolateyAppList` parameter.

These scripts support both standard Chocolatey packages and ones sourced through DISM.
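Both the `Project.Chocolatey.DISM.RequiredFeatures` variable and the `$chocolateyAppList` parameter in these bootstrap scripts take a comma-separated list. As a minimal sketch in plain shell, this is the split-and-trim that the PowerShell snippet above (`-split "," | foreach { $_.Trim() }`) performs on such a value, using the feature names shown earlier:

```shell
# Comma-separated value, as it might be entered in the Octopus variable
dismAppList="IIS-WindowsAuthentication, NetFx4Extended-ASPNET45, IIS-Security"

# Split on commas and trim leading spaces, yielding one feature name per line
echo "$dismAppList" | tr ',' '\n' | sed 's/^ *//'
```

Each resulting line is then passed to an individual `choco install` invocation.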
## Samples

We have a [Target - Windows](https://oc.to/TargetWindowsSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `OctoFX` project.

## Learn more

- [Automating developer machine setup with Chocolatey blog post](https://octopus.com/blog/automate-developer-machine-setup-with-chocolatey).

# Runbooks publishing

Source: https://octopus.com/docs/runbooks/runbook-publishing.md

:::div{.success}
Config-as-code runbooks use branches instead of publishing. If your project uses config-as-code runbooks, read about [managing runbooks permissions using branches](/docs/runbooks/config-as-code-runbooks#permissions-by-branch) instead.
:::

Runbooks and deployments define their processes in exactly the same way. However, where a deployment has a [release](/docs/releases), a runbook has what is called a Snapshot.

## Snapshots

For a given runbook, you can have two snapshots:

- Draft
- Published

Similar to releases, the versions of any packages that are used in the runbook are also snapshotted. This means if a newer version of the package is uploaded, and you wish to use it in your runbook, you need to create a new snapshot of the runbook.

## Draft snapshot

A draft snapshot of a runbook is exactly what it sounds like: a draft version of the currently published version. Drafts are meant to give you a place to work and save a runbook that is a work in progress or has not yet been approved for general use.

:::div{.hint}
Draft snapshots can't be used to create a [scheduled runbook trigger](/docs/runbooks/scheduled-runbook-trigger), only published snapshots can.
:::

## Published snapshot

The concept of a published snapshot is designed to help avoid confusion when selecting a version of the runbook you're supposed to run if you're not the author. You can think of it as the "Production" ready version of the runbook, which has been approved for general use.
Publishing makes a runbook available to scheduled triggers and consumers (anyone with an appropriately scoped `RunbookRunCreate` permission, but without the `RunbookEdit` permission). Triggers and consumers will always execute the published snapshot. The published snapshot contains the process, variables, and packages. This allows editing and testing the runbook without impacting the published version.

### Publishing a snapshot

To publish a snapshot, click the publish button on the task page after executing a runbook, or click publish on the runbook's process page.

Publish from completed task:

:::figure
![Publish runbook from task page](/docs/img/runbooks/runbook-publishing/runbook-publish-task.png)
:::

Publish from process:

:::figure
![Publish runbook from process page](/docs/img/runbooks/runbook-publishing/runbook-publish-process.png)
:::

When a producer (anyone with an appropriately scoped `RunbookEdit` permission) executes a runbook, they can choose between executing the published version and the current draft. Running the current draft allows testing changes before publishing. The latest version of the process and variables will be used and package versions will be prompted for.

![Run current draft](/docs/img/runbooks/runbook-publishing/runbook-run-draft.png)

# Troubleshooting Active Directory integration

Source: https://octopus.com/docs/security/authentication/active-directory/troubleshooting-active-directory-integration.md

It's common for companies to integrate Octopus with Active Directory to manage their users and teams. Active Directory is very flexible and can have complex configurations, so we've put together this troubleshooting guide to help people troubleshoot and resolve authentication issues.

:::div{.hint}
This information is provided as a guide to help teams troubleshoot Octopus authentication issues with Active Directory.
This, combined with a solid working knowledge of your infrastructure and some perseverance, should help resolve most issues.
:::

- Configuring Active Directory users
- Verifying configuration values
- Logging
- Read-Only Domain Controllers are not supported
- Run as a different user not working

Octopus integrates with Active Directory to authenticate users as well as authorize what actions they can perform. Our [Active Directory authentication](/docs/security/authentication/active-directory) page provides more information on how to set up Octopus to work with Active Directory and some details on how it's technically implemented.

Essentially, Octopus interacts with Active Directory in two ways:

1. First, we authenticate that a user's credentials are valid by invoking the Windows API `LogonUser()` function.
2. If that is successful, Octopus will then query Active Directory for information about the user.

In this second interaction, we retrieve the groups a user is a member of and use them to determine what teams they belong to.

:::div{.hint}
**Teams are not Distribution Groups**

While you might have a team that you would think maps to a Distribution Group, this does not mean that [subscriptions](/docs/administration/managing-infrastructure/subscriptions) will send emails to the DG email address configured in Active Directory. Teams in Octopus are more synonymous with Security Groups and are used to determine accessibility. To send subscription emails to a Distribution Group's email address, you will need to set up a user with that email address and assign them to the appropriate Octopus team.
:::

## How Active Directory authentication works

Before troubleshooting Active Directory within Octopus Deploy, it is critical to understand how that integration works. Octopus Deploy uses .dlls provided by Microsoft to interact with Active Directory.
Specifically:

- System.DirectoryServices.AccountManagement
- System.DirectoryServices
- System.DirectoryServices.ActiveDirectory

The code will use the Windows API `LogonUser()` function to authenticate the user's credentials. Assuming the login is successful, Octopus Deploy will create a [System.DirectoryServices.AccountManagement.UserPrincipal](https://docs.microsoft.com/en-us/dotnet/api/system.directoryservices.accountmanagement.userprincipal) object to query group membership.

Group membership is queried in this order of operations:

1. First call [GetAuthorizationGroups](https://docs.microsoft.com/en-us/dotnet/api/system.directoryservices.accountmanagement.userprincipal.getauthorizationgroups) as that does a recursive search and returns security groups only.
2. If `GetAuthorizationGroups()` fails (for a variety of reasons), then run [GetGroups](https://docs.microsoft.com/en-us/dotnet/api/system.directoryservices.accountmanagement.principal.getgroups). The downside of `GetGroups()` is it only returns groups a user is a direct member of and includes distribution groups. Octopus Deploy ignores distribution groups.

When a cross-domain trust is configured, both `GetAuthorizationGroups()` and `GetGroups()` methods will include groups in the trusted domains of the user. Octopus Deploy relies on what those methods return to determine group membership.

We've found that, the vast majority of the time, Active Directory issues are caused by a misconfiguration within Active Directory itself. We've provided scripts below where you can take Octopus Deploy out of the equation and test your configuration directly.

## Configuring Active Directory users

Octopus relies on Active Directory users being configured with enough information to distinguish them. We recommend making sure each Active Directory user you want to use with Octopus has been configured with:

1. samAccountName (pre-Windows 2000 Logon Name)
2. UPN (User Principal Name)
3.
Email Address

:::figure
![Active Directory user properties showing the Account tab](/docs/img/security/authentication/active-directory/images/5866202.png)
:::

![Active Directory user properties showing additional fields](/docs/img/security/authentication/active-directory/images/5866203.png)

These values can be used by Octopus to uniquely identify which Octopus User Account should be associated with each Active Directory User.

## Verifying configuration values

Most errors we've seen are due to a lack of permissions or various Active Directory configuration issues. Additionally, errors are generally found when trying to retrieve a user's groups. The following are some examples.

- `System.Runtime.InteropServices.COMException (0x8007054B): The specified domain either does not exist or could not be contacted.`
- `System.DirectoryServices.ActiveDirectory.ActiveDirectoryServerDownException: The server is not operational.`

The best way we've found to troubleshoot Active Directory issues is by taking Octopus Deploy out of the equation and running the PowerShell script below. This script duplicates the exact logic we use to retrieve a user's groups from Active Directory. The benefit of this script is that you can try different settings and get immediate feedback, whereas it's much slower and more disruptive to do the same with the Octopus Server service.

:::div{.hint}
Run the scripts on the same VM hosting Octopus Deploy. If you are running the Octopus Deploy Windows Service as a specific Active Directory account, then run those scripts as that account. Running the script on your work computer under your account can result in inaccurate results.
:::

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("System.DirectoryServices.AccountManagement")
[System.Reflection.Assembly]::LoadWithPartialName("System.DirectoryServices")
[System.Reflection.Assembly]::LoadWithPartialName("System.DirectoryServices.ActiveDirectory")

# Only uncomment the remainder of this line if Octopus is scoped to a specific container.
$principalContext = new-object -TypeName System.DirectoryServices.AccountManagement.PrincipalContext "Domain", "acme.local"#, "CN=Users, DC=acme, DC=local"

$principal = [System.DirectoryServices.AccountManagement.UserPrincipal]::FindByIdentity($principalContext, "ExampleUser")

# Get Authorized Users Groups. This reads inherited groups but fails in some situations based on security and configuration
$groups = $principal.GetAuthorizationGroups()
Write-Output $groups

# Try number two. Reads just the groups they are a member of - more reliable but not ideal
$groups = $principal.GetGroups()
Write-Output $groups

$principalContext.Dispose()
```

Notes:

- Ensure you replace the domain name ``acme.local`` with the appropriate value for your network.
- Ensure you replace the domain username ``ExampleUser`` with a sample Octopus username who would normally log into the system.
- It's recommended that you run this script as the same user you're running the Octopus service under and on the same server so it reproduces the problem accurately.

If specifying a container:

- Ensure you replace the Active Directory container string ``CN=Users, DC=acme, DC=local`` with the appropriate value for your network. If you're not sure of this value, we suggest talking to your network team (Active Directory expert) or trying different values and testing it with the script. For additional help on building/finding your container string, this Server Fault answer is excellent.
[http://serverfault.com/a/130556](http://serverfault.com/a/130556) See the following documentation page for further information on configuring Octopus to use a [specific Active Directory container](/docs/security/authentication/active-directory/custom-containers-for-ad-authentication). Similarly, the following script duplicates the logic we use to search for groups (when you're trying to find one to add to a Team).

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("System.DirectoryServices.AccountManagement")
[System.Reflection.Assembly]::LoadWithPartialName("System.DirectoryServices")
[System.Reflection.Assembly]::LoadWithPartialName("System.DirectoryServices.ActiveDirectory")

$principalContext = new-object -TypeName System.DirectoryServices.AccountManagement.PrincipalContext "Domain", "acme"

$principal = new-object -TypeName System.DirectoryServices.AccountManagement.GroupPrincipal $principalContext
$principal.Name = "SomeGroup*"

$searcher = new-object -TypeName System.DirectoryServices.AccountManagement.PrincipalSearcher
$searcher.QueryFilter = $principal

$groups = $searcher.FindAll().GetEnumerator()
foreach ($group in $groups) {
    Write-Output $group
}

$principalContext.Dispose()
```

Notes:
- Ensure you replace the domain name `acme` with the domain to be searched on your network. This may be the current domain or any trusted domain.
- Ensure you replace the sample partial group name `SomeGroup` with text that matches the start of a group name in the domain.
- As with the previous script, it's recommended that you run this script as the same user the Octopus service runs under.

:::div{.hint} Octopus only uses Security Groups for controlling access permissions. When searching for groups to add to an Octopus team, Distribution Groups will be filtered out. ::: ## Logging If problems persist, we suggest turning on Active Directory diagnostic logging and then executing the PowerShell script above to test changes based on the results.
We've found the best way to get actionable details out of the logs is to set the following registry settings on the server running Active Directory Domain Services (i.e. your relevant domain controller). :::div{.problem} It's recommended that you back up any registry entries before making changes. :::

```text
Path: HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics\15 Field Engineering
Type: DWORD
Value: 5

Path: HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\Expensive Search Results Threshold
Type: DWORD
Value: 1
```

Full credit to this Server Fault answer for the tip. [http://serverfault.com/a/454362](http://serverfault.com/a/454362) For more information on diagnostic logging, see the following Microsoft TechNet article. Note that we're setting the 'Field Engineering' registry entry mentioned in this article. [https://technet.microsoft.com/en-us/library/cc961809.aspx](https://technet.microsoft.com/en-us/library/cc961809.aspx) The diagnostic logs can be viewed in the Event Viewer. :::figure ![Event Viewer showing Active Directory diagnostic logs](/docs/img/security/authentication/active-directory/images/5865632.png) ::: :::div{.hint} Remember to reset the registry values once you're finished troubleshooting. ::: ## Read-only domain controllers are not supported Read-only Domain Controllers (RODCs) are not currently supported by Octopus; the .NET API we're using ignores them. Our [AD/Directory Services authentication provider](https://github.com/OctopusDeploy/DirectoryServicesAuthenticationProvider) is open source (for **Octopus 3.5**+), so if you wish to "roll your own" AD provider that includes support for RODCs, feel free to check out the current implementation and share it with the Octopus community.
## Run as a different user not working If you are signed into your Windows AD account and wish to sign in to Octopus as a different AD user, you need to do so via forms-based authentication and log in with a fully qualified domain username (e.g. `domain\user`). You **cannot** right-click and launch your browser as a different AD user. If forms-based authentication has previously been disabled, you may re-enable it by running the following commands:

```powershell
Octopus.Server.exe service --stop
Octopus.Server.exe configure --allowFormsAuthenticationForDomainUsers=true
Octopus.Server.exe service --start
```

## Domain Groups not loading across multiple domains In scenarios where you have to cross domain boundaries, issues can easily arise due to service account permissions. One such issue can occur when you have users who are members of groups from multiple domains. In this scenario, you may find that Octopus can only determine the groups in the same domain as the user, and as such the user won't be treated as though they are in all the correct teams. The cause relates to the permissions of the user the Octopus Server is running as. Specifically, it is missing the "Read member of" permission in the domain(s) of the groups it isn't able to retrieve. This can include the domain the service account itself is in (e.g. Domain Users don't get "Read member of" by default). To resolve this issue, open Active Directory Administrative Center for the domains in question and add the permission for the service account. Exactly how that permission should be assigned is a design question specific to your environment: you might assign it directly to the service account, add it to a group containing the service account user because other users also need the permission, or follow a different standard pattern in your organization.
## Integrated authentication across domains not working {#Integrated} Octopus Server `2020.1.x` has a known issue with users signing in across domains. The underlying cause relates to the server moving from .NET Framework (HttpListener) to .NET Core (HttpSys). For more information about the issue, see this [GitHub issue](https://github.com/OctopusDeploy/Issues/issues/6265). For configuration guidelines and troubleshooting integrated authentication, see our [Active Directory authentication](/docs/security/authentication/active-directory) guide. For users on a different domain to the one the Octopus Server is a member of, the workaround is to use forms authentication instead of the `Sign in with a domain account` button. As of `2020.1.7`, the server will detect this issue when users attempt to sign in across domains, and it will provide guidance to those users who are impacted. ## Sign in with a domain account fails with no clear error When using HTTP.sys (Kernel Mode), certain server-side errors may not be surfaced in the response. Instead, you may see a generic 500 error or the message: > An error occurred with Windows authentication, possibly due to a known issue, please try using forms authentication. This can make the root cause difficult to diagnose. A useful diagnostic step is to temporarily switch to Kestrel (User Mode), which surfaces the full error response:

```powershell
Octopus.Server.exe service --stop
Octopus.Server.exe configure --webServer=Kestrel
Octopus.Server.exe service --start
```

Reproduce the sign-in failure and note the error message returned. Once you have identified and resolved the underlying issue, you can switch back to HTTP.sys if desired. See [GitHub issue #9835](https://github.com/OctopusDeploy/Issues/issues/9835) for more detail.
## Maximum Session Duration breaks Active Directory SSO Setting the **Maximum Session Duration** to a low value (for example, `3600` seconds) can prevent users from signing in via the **Sign in with a domain account** button. The underlying error is: > Expiration cannot exceed maximum session duration When using HTTP.sys, this error is not surfaced directly (see the section above), making it particularly difficult to diagnose. To resolve this, increase the Maximum Session Duration in **Configuration ➜ Settings ➜ Server** to a value greater than or equal to the session expiry configured in your environment. See [GitHub issue #9836](https://github.com/OctopusDeploy/Issues/issues/9836) for more detail. # Octopus ID authentication Source: https://octopus.com/docs/security/authentication/octopusid-authentication.md :::div{.hint} Octopus ID authentication is only available in [Octopus Cloud](/docs/octopus-cloud). ::: Octopus ID authentication allows you to log in to your Octopus Cloud instance using the same account that you use to sign in at [Octopus.com](https://Octopus.com). This allows you to manage who is able to access your Octopus instance from [your organization](https://Octopus.com/organization/) and saves you time when moving between our website and your instance. ## Inviting users and configuring teams with Octopus ID After you've used Octopus.com to [invite some other users](/docs/octopus-cloud/#invite-users-via-control-center) to your instance, you can configure the users with [Teams](/docs/security/users-and-teams/) and [User Roles](/docs/security/users-and-teams/user-roles) as you normally would using the product. 
## Supported authentication providers Octopus ID allows you to sign in using the following external authentication providers: - Google - Microsoft Azure Active Directory (AAD) - GitHub :::div{.hint} Octopus ID does not currently support configuring [external groups and roles](/docs/security/users-and-teams/external-groups-and-roles) using any of the authentication providers listed. ::: ### Learn more - [Invite users via Octopus.com](/docs/octopus-cloud/#invite-users-via-control-center) - [Octopus Cloud specific permissions](/docs/octopus-cloud/permissions) - [Octopus Cloud documentation](/docs/octopus-cloud) # HTTP Security Headers Source: https://octopus.com/docs/security/http-security-headers.md ## Octopus Web Portal The Octopus Web Portal supports a number of security-related browser headers, designed to limit the attack surface area by locking down what browsers are able to do. This page describes what headers are available, whether they are configurable, how to configure them, and when they were first available. ### Server The `Server` browser header is set to `Octopus Deploy/ Microsoft-HTTPAPI/2.0`. This setting is not configurable. ### Access-Control-Allow-* (CORS) The Cross-Origin Resource Sharing (CORS) headers are used to instruct browsers to allow/disallow requests from other websites to access the Octopus portal. By default, CORS is disabled, preventing any access. To modify this setting, you can use the Octopus Server [configure](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) command. ### Cache-Control The `Cache-Control` header configures how responses are cached, both by intermediate proxies and by the user's browser. The Octopus portal does not send the `Cache-Control` header on the application (`/app`) endpoint, as no sensitive data is contained in, or transferred through, this endpoint.
However, all API requests (the `/api` endpoint) do send the header with a value of `no-cache, no-store`, requesting that the response is never stored or written to disk. The dashboard has in-memory-only caching (to increase performance), which can be disabled in **Configuration ➜ Features ➜ Browser Caching**. The header itself is not configurable on either the `/app` or `/api` endpoints. ### X-XSS-Protection This header instructs browsers to enable their inbuilt Cross Site Scripting (XSS) filters. These are not foolproof filters, but can help prevent some forms of XSS attacks. The Octopus Server sets this header to `1; mode=block`, enabling the filters and instructing the browser to block (rather than sanitize) any detected attack. This setting is not user configurable. ### X-Frame-Options Instructs browsers whether to allow the Octopus portal to be hosted in a frame. This is set to `DENY` by default, but can be configured via the Octopus Server [configure](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) command. ### X-Content-Type-Options This header is used to disable the MIME type "sniffing" capability which can allow non-executable MIME types to be interpreted as executable MIME types. This is set to `nosniff`, and is not user configurable. ### Strict-Transport-Security (HSTS) The `Strict-Transport-Security` header is used to instruct browsers that all future requests (for a specified amount of time) must be sent over HTTPS, even if the user types `http://` into the browser address bar. This is not enabled by default, but because it can cause issues if implemented incorrectly, please read [our HSTS documentation](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https/#hsts) before enabling it. ### Referrer-Policy This header instructs browsers on how much information to share, and with whom, when navigating between pages. This is enabled by default, and set to `no-referrer`.
The value of this header can be modified using the Octopus Server [configure](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) command. ### Content-Security-Policy (CSP) The `Content-Security-Policy` header defines the list of browser features required by the Octopus portal and the allow list of domains which Octopus uses. This is used to limit the attack surface area for XSS and data injection attacks. This is enabled by default, and set to the tightest policy that allows full functionality. This can be disabled via the Octopus Server [configure](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) command. ### Public-Key-Pins (PKP) This header is used to define the allowed certificate fingerprints for the TLS certificates used by the site. The Octopus portal does not support the `Public-Key-Pins` header. Note that an [intention to deprecate and remove this header from Chromium](https://groups.google.com/a/chromium.org/d/msg/blink-dev/he9tr7p3rZ8/eNMwKPmUBAAJ) has been announced. ### Expect-CT The `Expect-CT` header is used to instruct browsers to only accept connections with valid Signed Certificate Timestamps. The Octopus portal does not support this header. ## Octopus Server communications port The Octopus Server listens on a port (usually 10943) for connections from polling Tentacles. It uses a [custom communications protocol](/docs/security/octopus-tentacle-communication) with self-signed certificates, and shows a diagnostics page when accessed via a web browser. While there is limited scope for attack on this page, some security scanning tools report errors against it, so the following headers are supported on this port since **Octopus 3.17.13**. ### Content-Security-Policy (CSP) The `Content-Security-Policy` header defines the list of browser features required by the Octopus portal and the allow list of domains which Octopus uses. This is used to limit the attack surface area for XSS and data injection attacks.
This is enabled by default, and set to the tightest policy that allows full functionality. This can be disabled via the Octopus Server [configure](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) command. ### Referrer-Policy This header instructs browsers on how much information to share, and with whom, when navigating between pages. This is enabled by default, and set to `no-referrer`. The value of this header can be modified using the Octopus Server [configure](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) command. ### X-Content-Type-Options This header is used to disable the MIME type "sniffing" capability which can allow non-executable MIME types to be interpreted as executable MIME types. This is set to `nosniff`, and is not user configurable. ### X-Frame-Options Instructs browsers whether to allow the Octopus portal to be hosted in a frame. This is set to `DENY` by default, but can be configured via the Octopus Server [configure](/docs/octopus-rest-api/octopus.server.exe-command-line/configure) command. ### X-XSS-Protection This header instructs browsers to enable their inbuilt Cross Site Scripting (XSS) filters. These are not foolproof filters, but can help prevent some forms of XSS attacks. The Octopus Server sets this header to `1; mode=block`, enabling the filters and instructing the browser to block (rather than sanitize) any detected attack. This setting is not user configurable. ## Octopus Tentacle communications port The Octopus Tentacle listens on a port (usually 10933) for connections from the Octopus Server. It uses a [custom communications protocol](/docs/security/octopus-tentacle-communication) with self-signed certificates, and shows a diagnostics page when accessed via a web browser. While there is limited scope for attack on this page, some security scanning tools report errors against it, so the following headers are supported on this port since **Tentacle 3.16.1**.
### Content-Security-Policy (CSP) The `Content-Security-Policy` header defines the list of browser features required by the page. This is used to limit the attack surface area for XSS and data injection attacks. This is set to the tightest policy that allows full functionality. This is not user configurable. ### Referrer-Policy This header instructs browsers on how much information to share, and with whom, when navigating between pages. This is enabled by default, set to `no-referrer`, and is not user configurable. ### X-Content-Type-Options This header is used to disable the MIME type "sniffing" capability which can allow non-executable MIME types to be interpreted as executable MIME types. This is set to `nosniff`, and is not user configurable. ### X-Frame-Options Instructs browsers whether to allow the diagnostics page to be hosted in a frame. This is set to `DENY` by default, and is not user configurable. ### X-XSS-Protection This header instructs browsers to enable their inbuilt Cross Site Scripting (XSS) filters. These are not foolproof filters, but can help prevent some forms of XSS attacks. The Tentacle sets this header to `1; mode=block`, enabling the filters and instructing the browser to block (rather than sanitize) any detected attack. This setting is not user configurable. # Call the Jenkins REST API from PowerShell Source: https://octopus.com/docs/support/call-jenkins-rest-api-from-powershell.md Although the typical deployment workflow sees a CI system like Jenkins triggering a deployment in Octopus, it is sometimes useful to have the reverse, where Octopus triggers builds in Jenkins. This page looks at how you can trigger a Jenkins build using its REST API and PowerShell.
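The walkthrough below builds the request up in PowerShell. For readers who prefer a different client, here is a rough sketch of the same crumb-then-trigger flow in Python using only the standard library. The server URL, job name, and credentials are placeholder assumptions, and `crumbRequestField` is the field in the crumb issuer's JSON response that names the header the crumb must be sent in.

```python
# Sketch of the Jenkins crumb + build-trigger flow. JENKINS_URL, the job
# name, and the credentials are hypothetical placeholders - use your own.
import base64
import json
import urllib.request

JENKINS_URL = "http://jenkinsserver/jenkins"  # placeholder server

def basic_auth_header(user: str, password: str) -> dict:
    """Build the Basic Authorization header: Base64 of 'user:password'."""
    token = base64.b64encode(f"{user}:{password}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

def get_crumb(user: str, password: str):
    """Ask the crumb issuer for a CSRF crumb; returns (header_name, crumb)."""
    req = urllib.request.Request(
        f"{JENKINS_URL}/crumbIssuer/api/json",
        headers=basic_auth_header(user, password),
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["crumbRequestField"], data["crumb"]

def trigger_build(job: str, user: str, password: str) -> int:
    """POST to the job's /build endpoint with the crumb header; 201 means queued."""
    headers = basic_auth_header(user, password)
    field, crumb = get_crumb(user, password)
    headers[field] = crumb
    req = urllib.request.Request(
        f"{JENKINS_URL}/job/{job}/build", headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (requires a reachable Jenkins server):
# status = trigger_build("Run%20a%20script", "user", "password")
```

The structure mirrors the PowerShell in the sections that follow: build the Basic auth header, fetch a crumb, then POST to the job's build URL with both headers attached.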
## Jenkins CSRF security Jenkins has a security feature to prevent [Cross Site Request Forgery](https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/client-and-managed-controllers/csrf-protection-explained) attacks, which is found under **Jenkins ➜ Manage Jenkins ➜ Configure Global Security ➜ Prevent Cross Site Request Forgery exploits**. :::figure ![](/docs/img/support/images/csrf.png) ::: In practical terms, this means that each request to the Jenkins API needs a crumb defined in its headers. To generate this crumb, we need to make a request to http://jenkinsserver/jenkins/crumbIssuer/api/json. The PowerShell below shows you how to generate a crumb.

```powershell
$user = 'user'
$pass = 'password'

# The credentials are the username and password concatenated with a colon
$pair = "$($user):$($pass)"

# The combined credentials are converted to Base64
$encodedCreds = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair))

# The Base64 credentials are then prefixed with "Basic"
$basicAuthValue = "Basic $encodedCreds"

# This is passed in the "Authorization" header
$Headers = @{
    Authorization = $basicAuthValue
}

# Make a request to get a crumb. This will be returned as JSON
$json = Invoke-WebRequest -Uri 'http://jenkinsserver/jenkins/crumbIssuer/api/json' -Headers $Headers

# Parse the JSON so we can get the value we need
$parsedJson = $json | ConvertFrom-Json

# See the value of the crumb
Write-Host "The Jenkins crumb is $($parsedJson.crumb)"
```

## REST API links Now that we have a crumb, we can use it to call the Jenkins REST API. You can find the URL to call to interact with the Jenkins system via the `REST API` link in the bottom right hand corner of each screen. :::figure ![](/docs/img/support/images/restapi.png) ::: In this example we want to trigger the build of a Jenkins project, so we open the project and find that the `REST API` link points us to a URL like http://jenkinsserver/jenkins/job/Run%20a%20script/api/.
If we open this link we'll see a page of documentation describing the common operations that are available. In particular, we are interested in the link embedded in the sentence `to programmatically schedule a new build, post to this URL.` The link takes us to a URL like http://jenkinsserver/jenkins/job/Run%20a%20script/build. :::figure ![](/docs/img/support/images/restapidocs.png) ::: ## Triggering the build We now have the links that we need to trigger a build, and the crumb that is required by Jenkins with each API request. Let's finish off the PowerShell script that will make this final request to start a build in Jenkins.

```powershell
$user = 'user'
$pass = 'password'

# The credentials are the username and password concatenated with a colon
$pair = "$($user):$($pass)"

# The combined credentials are converted to Base64
$encodedCreds = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair))

# The Base64 credentials are then prefixed with "Basic"
$basicAuthValue = "Basic $encodedCreds"

# This is passed in the "Authorization" header
$Headers = @{
    Authorization = $basicAuthValue
}

# Make a request to get a crumb. This will be returned as JSON
$json = Invoke-WebRequest -Uri 'http://jenkinsserver/jenkins/crumbIssuer/api/json' -Headers $Headers

# Parse the JSON so we can get the value we need
$parsedJson = $json | ConvertFrom-Json

# See the value of the crumb
Write-Host "The Jenkins crumb is $($parsedJson.crumb)"

# Extract the crumb field from the returned JSON, and assign it to the "Jenkins-Crumb" header
$BuildHeaders = @{
    "Jenkins-Crumb" = $parsedJson.crumb
    Authorization = $basicAuthValue
}

Invoke-WebRequest -Uri "http://jenkinsserver/jenkins/job/Run%20a%20script/build" -Headers $BuildHeaders -Method Post
```

Running this script will display the crumb value, as well as the result of the API call to start a job. Notice that the result was an HTTP 201 code. This code indicates that a job was created on the Jenkins server.
```text
PS C:\Users\Matthew\Desktop> .\jenkins.ps1
The Jenkins crumb is 574608b1e95315787b2fa0b74fce2441


StatusCode        : 201
StatusDescription : Created
Content           : {}
RawContent        : HTTP/1.1 201 Created
                    Date: Tue, 19 Feb 2019 04:46:46 GMT
                    Server: Apache
                    X-Frame-Options: SAMEORIGIN
                    X-Content-Type-Options: nosniff
                    Location: http://jenkinsserver/jenkins/queue/item/11/
                    Content-L...
Headers           : {[Date, System.String[]], [Server, System.String[]], [X-Frame-Options, System.String[]],
                    [X-Content-Type-Options, System.String[]]...}
RawContentLength  : 0
RelationLink      : {}
```

## Learn more - [Jira blog posts](https://octopus.com/blog/tag/jira/1) - [PowerShell blog posts](https://octopus.com/blog/tag/powershell/1) # Assigning a team to a tenant Source: https://octopus.com/docs/tenants/guides/multi-tenant-teams/assign-team-userrole-to-tenant.md The Octo Pet Shop application has two development teams (Avengers and Radical) that are concurrently developing features for the application. Scoping each team to its specific tenant ensures they can only deploy to their dedicated infrastructure. ## Scoping a team to a tenant Once you've created your team, click on the **User Roles** tab. :::figure ![](/docs/img/tenants/guides/multi-tenant-teams/images/octopus-teams-avenger.png) ::: Click on **Include User Role**, then select the role to include for the team.
After the role has been selected, click on **Define scope**. :::figure ![](/docs/img/tenants/guides/multi-tenant-teams/images/octopus-teams-roles.png) ::: Select the tenant and click **Apply**. :::figure ![](/docs/img/tenants/guides/multi-tenant-teams/images/octopus-teams-role-tenant.png) ::: This configures the team with `Release Creator` and `Project Deployer` permissions for any project connected to the Tenant `OctoPetShop-Team-Avengers`. :::figure ![](/docs/img/tenants/guides/multi-tenant-teams/images/octopus-teams-userroles.png) ::: # Assign tags to tenants Source: https://octopus.com/docs/tenants/guides/tenants-sharing-machine-targets/assign-tags-to-tenants.md We've included the hosting group name in the tenant name to help illustrate the pattern. In the screenshot below, you can see that the tenants have been assigned a Hosting Group tag based on which group they belong to. This tag makes it easy to deploy to all tenants in a group and also to map those tenants to the correct infrastructure. :::figure ![](/docs/img/tenants/guides/tenants-sharing-machine-targets/tenant-list.png) ::: In the tenant overview, click on **Manage Tags** to manage which tags are associated with a tenant. # Tenant variables Source: https://octopus.com/docs/tenants/tenant-variables.md You often want to define variable values that are different for each tenant. For example: - A database server name or connection string - A tenant-specific URL - Contact details for a tenant If you were using an untenanted project, you would have defined these values in the project itself. With a tenanted project, you can set these values directly on the tenant for any connected projects. ## Variable templates Tenant variable values can be provided in one of two ways: - [Project variables](#project-variables) - [Common variables](#common-variables) Both of these methods use the [variable templates](/docs/projects/variables/variable-templates) feature.
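As a rough mental model for the two kinds of tenant variables, the sketch below stores common values once per tenant, and project values per project/environment combination. This is an illustrative Python sketch only, not Octopus's actual data model; all names and the resolution precedence shown are invented for the example.

```python
# Illustrative model (not Octopus internals): common variables are constant
# for the tenant; project variables vary per project/environment combination.
from dataclasses import dataclass, field

@dataclass
class TenantVariables:
    # common variables: one value per template name, constant for the tenant
    common: dict = field(default_factory=dict)
    # project variables: one value per (project, environment, template name)
    project: dict = field(default_factory=dict)

    def resolve(self, project, environment, name, default=None):
        """Look up a project-scoped value first, then a common value, then
        fall back to the template's default (a hypothetical precedence)."""
        return self.project.get((project, environment, name),
                                self.common.get(name, default))

tenant = TenantVariables()
tenant.common["Tenant.Abbreviation"] = "OPS"
tenant.project[("OctoPetShop", "Production", "DatabaseServer")] = "prod-sql01"

print(tenant.resolve("OctoPetShop", "Production", "DatabaseServer"))            # prod-sql01
print(tenant.resolve("OctoPetShop", "Staging", "DatabaseServer", "localhost"))  # localhost
print(tenant.resolve("OctoPetShop", "Staging", "Tenant.Abbreviation"))          # OPS
```

The key distinction the sketch illustrates: a common value like `Tenant.Abbreviation` is filled in once for the tenant, while `DatabaseServer` must be supplied (or defaulted) for each project/environment combination the tenant is connected to.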
### Which variable templates apply to each tenant? {#which-templates-apply} When you connect a tenant to a project, variable templates defined by the project itself, or by included variable sets, will be required by the tenant. 1. Variable Set templates will be collected once - they are considered to be constant for the tenant. Think of these like "custom fields" for your tenants. 1. Project variable templates will be collected once for each project/environment combination the tenant is connected to. Think of these like database connection settings for the specific tenant/project/environment combination. By carefully designing your variable templates you can implement complex multi-tenant deployment scenarios. ## Project variables {#project-variables} Project variables allow you to specify a variable that a tenant can change. A perfect example would be a connection string or a database server. You define project variables using [project templates](/docs/projects/variables/variable-templates/#project-templates). Project templates can be scoped to zero or more environments. When a project template is not scoped to any environments (unscoped), it applies to all environments connected to the tenant. This behavior is consistent with how common tenant variables can be scoped. :::figure ![](/docs/img/tenants/images/project-template-screen.png) ::: You can specify the variable type for the project template, just like regular variables. You can also provide a default value which the tenant can overwrite. :::figure ![](/docs/img/tenants/images/project-template-edit.png) ::: You can view and edit values for a project's tenants and environments on a single screen. :::figure ![](/docs/img/tenants/images/project-template-screen-edit.png) ::: You can also set values for these variables on the tenant variables screen.
:::figure ![](/docs/img/tenants/images/project-template-tenant-value.png) ::: The great thing about project template variables is that they are treated like any other variable and can be used in deployment steps just like regular project variables. :::figure ![](/docs/img/tenants/images/project-template-variable-value-in-step.png) ::: ### Project variable scoping Project variables can be scoped by environment to control which tenant/environment combinations they apply to: - **Unscoped project variables**: When no environment scope is specified on the project template, the variable applies to all environments connected to the tenant - **Scoped project variables**: When one or more environments are specified in the project template scope, the variable only applies to those specific environments You can assign scopes to project variables using the **Tenant Variables** section of the project's page, or the **Tenant Variables** section of the tenant's page. The scoped values then apply only to deployments into the environments in the variable's scope. For more information on assigning scopes, see [assigning scopes](/docs/projects/variables/getting-started/#using-multiple-scopes). ### Project variable permissions When editing project variables for a tenant, a user requires variable editing permission (`VariableEdit`) for the tenant as well as the specific project. For scoped project variables, a user will require variable editing permission for the tenant, the project, and all environments in the variable's scope. To view a project variable, a user will require view permissions to at least one environment in the variable's scope. ## Common variables {#common-variables} Common variables are similar to project variables. The main difference between the two is that common variables can be used across multiple projects.
Common variables are defined using [variable set templates](/docs/projects/variables/variable-templates/#adding-a-variable-template). For example, if we wanted to define an abbreviation for the tenant to use in a deployment or runbook, we could configure a variable template for the variable set. :::figure ![](/docs/img/tenants/images/common-variable-template.png) ::: :::div{.success} To include common variables for a tenant, you must add the variable set to a project connected to the tenant. ::: Just like project variables, common variable values are supplied at the tenant level. :::figure ![](/docs/img/tenants/images/common-variable-tenant-value.png) ::: ### Common variable scoping Common variables can be scoped by environment to control which environments they apply to: - **Unscoped common variables**: When no environment scope is specified, the variable applies to all environments connected to the tenant - **Scoped common variables**: When one or more environments are specified in the scope, the variable only applies to those specific environments To assign a scope to your common variable, use the **Tenant Variables** section of the project's page, or the **Tenant Variables** section of the tenant's page. The common variable can then be used across all projects that have a connection to that environment. For more information on assigning scopes, see [assigning scopes](/docs/projects/variables/getting-started/#using-multiple-scopes). ### Common variable permissions When editing common variables for a tenant, a user requires variable editing permission (`VariableEdit`) for the tenant as well as **all projects** linked to the tenant.
This can be achieved by scoping a user role to: - All projects individually - The correct project groups - A combination of individual projects and project groups :::div{.hint} When using the scoped variables feature from Octopus version 2025.2, a user will require variable editing permission for the tenant, all projects, and all environments to edit common variables for a tenant. In addition to the above, a user may also need the `VariableEdit` permission scoped to at least one of the following: - For scoped common variables, all environments in the variable's scope - For unscoped common variables, all environments connected to the tenant - Unrestricted `VariableEdit` access for environments To view a common variable, a user will require view permissions to at least one environment in the variable's scope. ::: If you don't have the necessary permissions, you will receive an error like this: :::figure ![](/docs/img/tenants/images/common-variable-permissions-error.png) ::: ## Snapshots {#tenant-variables-and-snapshots} When you [create a release](/docs/octopus-rest-api/octopus-cli/create-release/) in Octopus, we take a snapshot of the deployment process and the current state of the [project variables](/docs/projects/variables). However, we *don't* take a snapshot of tenant variables. This enables you to add new tenants at any time and deploy to them without creating a new release. It also means any changes you make to tenant variables take immediate effect. # octopus build-information bulk-delete Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-build-information-bulk-delete.md Bulk delete build information in Octopus Deploy

```text
Usage:
  octopus build-information bulk-delete [flags]

Flags:
  -y, --confirm                Don't ask for confirmation before deleting the build information.
      --package-id string      the package id of the build information.
      --version stringArray    the version of the build information, may be specified multiple times.
Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus build-information bulk-delete octopus build-info bulk-delete --package-id ThePackage octopus build-info bulk-delete --package-id ThePackage --version 1.0.0 --version 1.0.1 ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Configure Source: https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/configure.md Use the configure command to configure this Octopus instance. **Configure options** ``` Usage: octopus.server configure [<options>] Where [<options>] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --home=VALUE Home directory --skipDatabaseCompatibilityCheck Skips the database compatibility check --skipDatabaseSchemaUpgradeCheck Skips the database schema upgrade checks. Use with caution --serverNodeName=VALUE Deprecated: set the node name via the create-instance command instead. Unique Server Node name for a clustered environment. --cachePackages=VALUE Days to cache packages for. Default: 20 --cacheLowDiskSpaceThreshold=VALUE Threshold of free disk space (in gigabytes) where packages are cleaned up from cache regardless of age. Default: 1 --cacheDirectoryFullThreshold=VALUE Threshold of the size of the cache folder (in gigabytes) where packages are cleaned up from cache regardless of age.
Default: 0 (no limit) --maxConcurrentTasks=VALUE Deprecated: may be removed in a future release (currently has no effect; set Task Cap instead). Maximum number of concurrent tasks that the Octopus Server can execute. Default is 0 (no limit). --upgradeCheck=VALUE Whether checking for upgrades is allowed (true or false) --sendTelemetry=VALUE Whether telemetry data is sent to Octopus (true or false) --commsListenPort=VALUE TCP port that the communications service should listen on --grpcListenPort=VALUE TCP port that the gRPC services should listen on --commsListenWebSocket=VALUE WebSocket prefix that the communications service should listen on (e.g. 'https://+:443/OctopusComms'); set to blank to disable websockets. Refer to https://oc.to/WebSocketComms. --webListenPrefixes=VALUE Comma-separated list of HTTP.sys listen prefixes (e.g., 'http://localhost/octopus') --webForceSSL=VALUE Whether SSL should be required (HTTP requests get redirected to HTTPS) --requestLoggingEnabled=VALUE Whether to enable logging of web requests --customBundledPackageDirectory=VALUE A custom folder for getting packages (like Calamari) that are normally bundled with Octopus Server --upgradeNotification=VALUE Modifies the visibility of the notification when upgrades are available. Valid values are AlwaysShow, ShowOnlyMajorMinor and NeverShow. --AzureDevOpsIsEnabled=VALUE Set whether Azure DevOps issue tracker integration is enabled. --jiraIsEnabled=VALUE Set whether Jira Integration is enabled. --jiraBaseUrl=VALUE Enter the base url of your Jira instance. Once set, work item references will render as links. --GitHubIsEnabled=VALUE Set whether GitHub issue tracker integration is enabled. --GitHubBaseUrl=VALUE Set the base url for the Git repositories. --webCorsWhitelist=VALUE Comma-separated whitelist of domains that are allowed to retrieve data (empty turns CORS off, * allows all).
--xFrameOptions=VALUE A directive to provide in the X-Frame-Options header --xFrameOptionAllowFrom=VALUE (DEPRECATED) A uri to provide in the X-Frame-Options http header in conjunction with the ALLOW-FROM value. The directive allow-from uri for X-Frame-Options has been deprecated and no longer works in modern browsers. --hstsEnabled=VALUE Enables or disables sending the Strict-Transport-Security (HSTS) header. Defaults to false. --hstsMaxAge=VALUE Sets the max-age value (in seconds) of the Strict-Transport-Security (HSTS) header. Defaults to 1 year (31556926 seconds). --webContentSecurityPolicyEnabled=VALUE Enables or disables sending the Content-Security-Policy header. Defaults to true. --webReferrerPolicy=VALUE Sets the 'Referrer-Policy' response header. Defaults to 'no-referrer'. --webServer=VALUE Web server to use when running Octopus (HttpSys, Kestrel) --trustedProxies=VALUE Comma-separated list of IP addresses of trusted proxies --webTrustedRedirectUrls=VALUE Comma-separated list of URLs that are trusted for redirection --autoLoginEnabled=VALUE Enable/disable automatic user login. --selfServiceLoginEditingEnabled=VALUE Enable/disable whether users can edit their own logins. --cookieDomain=VALUE Set a specific domain for issued cookies. --dynamicExtensionsEnabled=VALUE Enable/disable dynamic extensions. --experiencesEnabled=VALUE Enable/disable in-app messaging via [Chameleon](https://www.chameleon.io/) --azureADIsEnabled=VALUE Set the azureAD IsEnabled, used for authentication. --azureADIssuer=VALUE Follow our documentation to find the Issuer for azureAD. --azureADClientId=VALUE Follow our documentation to find the Client ID for azureAD. --azureADClientSecret=VALUE Follow our documentation to find the Client Secret for azureAD. --azureADScope=VALUE Only change this if you need to change the OpenID Connect scope requested by Octopus for azureAD.
--azureADNameClaimType=VALUE Only change this if you want to use a different security token claim for the name from azureAD. --azureADAllowAutoUserCreation=VALUE Tell Octopus to automatically create a user account when a person signs in with azureAD. --azureADRoleClaimType=VALUE Tell Octopus how to find the roles in the security token from Azure Active Directory. --activeDirectoryIsEnabled=VALUE Set whether active directory is enabled. --activeDirectoryContainer=VALUE Set the active directory container used for authentication. --webAuthenticationScheme=VALUE When Domain authentication is used, specifies the scheme (Basic, Digest, IntegratedWindowsAuthentication, Negotiate, Ntlm). You will need to restart all Octopus Server nodes in your cluster for these changes to take effect. Please note that using Negotiate or IntegratedWindowsAuthentication [may require additional server configuration](https://g.octopushq.com/AuthAD) in order to work correctly. --allowFormsAuthenticationForDomainUsers=VALUE When Domain authentication is used, specifies whether the HTML-based username/password form can be used to sign in. --activeDirectorySecurityGroupsEnabled=VALUE When Domain authentication is used, specifies whether to support security groups from AD. --activeDirectoryAllowAutoUserCreation=VALUE Whether unknown users will be automatically created upon successful login. --googleAppsIsEnabled=VALUE Set the googleApps IsEnabled, used for authentication. --googleAppsIssuer=VALUE Follow our documentation to find the Issuer for googleApps. --googleAppsClientId=VALUE Follow our documentation to find the Client ID for googleApps. --googleAppsClientSecret=VALUE Follow our documentation to find the Client Secret for googleApps. --googleAppsScope=VALUE Only change this if you need to change the OpenID Connect scope requested by Octopus for googleApps. --googleAppsNameClaimType=VALUE Only change this if you want to use a different security token claim for the name from googleApps.
--googleAppsAllowAutoUserCreation=VALUE Tell Octopus to automatically create a user account when a person signs in with googleApps. --googleAppsHostedDomain=VALUE Tell Octopus which Google Apps domain to trust. --guestloginenabled=VALUE Whether guest login should be enabled --ldapIsEnabled=VALUE Set whether ldap is enabled. --ldapServer=VALUE Set the server URL. --ldapPort=VALUE Set the port used to connect. --ldapSecurityProtocol=VALUE Sets the security protocol to use in securing the connection (None, StartTLS, or SSL). --ldapIgnoreSslErrors=VALUE Sets whether to ignore certificate validation errors. --ldapUsername=VALUE Set the user DN to query LDAP. --ldapPassword=VALUE Set the password to query LDAP (leave empty for anonymous bind). --ldapUserBaseDn=VALUE Set the root distinguished name (DN) to query LDAP for Users. --ldapGroupBaseDn=VALUE Set the root distinguished name (DN) to query LDAP for Groups. --ldapDefaultDomain=VALUE Set the default domain when none is given in the logon form. Optional. --ldapUniqueAccountNameAttribute=VALUE Set the name of the LDAP attribute containing the unique account name, which is used to authenticate via the logon form. This will be 'sAMAccountName' for Active Directory. --ldapUserFilter=VALUE The filter to use when searching valid users. '*' is replaced with a normalized version of the username. --ldapGroupFilter=VALUE The filter to use when searching valid user groups. '*' is replaced with the group name. --ldapNestedGroupFilter=VALUE The filter to use when searching for nested groups. '*' is replaced by the distinguished name of the initial group. --ldapNestedGroupSearchDepth=VALUE Specifies how many levels of nesting will be searched. Set to '0' to disable searching for nested groups. --ldapAllowAutoUserCreation=VALUE Whether unknown users will be automatically created upon successful login. --ldapReferralFollowingEnabled=VALUE Sets whether to allow referral following (this can slow down queries).
--ldapReferralHopLimit=VALUE Sets the maximum number of referrals to follow during automatic referral following. --ldapConstraintTimeLimit=VALUE Sets the time limit in seconds for LDAP operations on the directory. '0' specifies no limit. --ldapUserDisplayNameAttribute=VALUE Set the name of the LDAP attribute containing the user's full name. --ldapUserPrincipalNameAttribute=VALUE Set the name of the LDAP attribute containing the user's principal name. --ldapUserMembershipAttribute=VALUE Set the name of the LDAP attribute to use when loading the user's groups. --ldapUserEmailAttribute=VALUE Set the name of the LDAP attribute containing the user's email address. --ldapGroupNameAttribute=VALUE Set the name of the LDAP attribute containing the group's name. --oktaIsEnabled=VALUE Set the okta IsEnabled, used for authentication. --oktaIssuer=VALUE Follow our documentation to find the Issuer for okta. --oktaClientId=VALUE Follow our documentation to find the Client ID for okta. --oktaClientSecret=VALUE Follow our documentation to find the Client Secret for okta. --oktaScope=VALUE Only change this if you need to change the OpenID Connect scope requested by Octopus for okta. --oktaNameClaimType=VALUE Only change this if you want to use a different security token claim for the name from okta. --oktaAllowAutoUserCreation=VALUE Tell Octopus to automatically create a user account when a person signs in with okta. --oktaRoleClaimType=VALUE Tell Octopus how to find the roles in the security token from Okta. --oktaUsernameClaimType=VALUE Tell Octopus how to find the value for the Octopus Username in the Okta token. Defaults to "preferred_username" if left blank. --usernamePasswordIsEnabled=VALUE Set whether Octopus username/password authentication is enabled. 
--customextension=VALUE File path of a custom extension to load Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example changes the instance home directory to a new folder and enables auto login for instance `OctopusServer`: ``` octopus.server configure --instance="OctopusServer" --home="c:\NewOctopusFolder" --autoLoginEnabled="true" ``` This example changes the TCP port that the communications service listens on to 10953 for instance `OctopusServer`: ``` octopus.server configure --instance="OctopusServer" --commsListenPort="10953" ``` # octopus build-information delete Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-build-information-delete.md Delete build information in Octopus Deploy ```text Usage: octopus build-information delete [flags] Aliases: delete, del, rm, remove Flags: -y, --confirm Don't ask for confirmation before deleting the build information. -p, --package-id string The Package ID of the build information to delete -v, --version string The version of the build information to delete Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus build-information delete BuildInformation-1 octopus build-info rm BuildInformation-1 octopus build-info del --package-id ThePackage --version 1.2.3 ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Create instances Source: https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/create-instance.md Use the create-instance command to register a new instance of the Octopus service. **Create instance options** ``` Usage: octopus.server create-instance [<options>] Where [<options>] is any of: --instance=VALUE Name of the instance to create. If not supplied, creates an instance called OctopusServer. --config=VALUE Path to configuration file to create --home=VALUE [Optional] Path to the home directory - defaults to the same directory as the config file --serverNodeName=VALUE [Optional] Unique Server Node name for a clustered environment - defaults to the machine name Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example creates a new Octopus Server instance named `MyNewInstance` and sets the home directory: ``` octopus.server create-instance --instance="MyNewInstance" --config="c:\MyNewInstance\MyNewInstance.config" --home="c:\MyNewInstance\Home" ``` # Guest login Source: https://octopus.com/docs/security/authentication/guest-login.md Sometimes you may wish to allow users to access your Octopus Server without requiring them to create an account. Octopus provides the ability to configure a special guest login for your Octopus Server.
When guest login is enabled, the sign in page for the Octopus Web Portal will present users with a choice to either sign in as a guest, or to sign in with their standard account: :::figure ![](/docs/img/security/authentication/images/guest.png) ::: ## Enable guest user via UI {#enable-guest-user} You can enable your guest accounts from the Octopus Web Portal by navigating to **Configuration ➜ Settings ➜ Guest Login**. From there you can click the **Is Enabled** checkbox to enable or disable the Guest account. ![](/docs/img/security/authentication/images/enableguests1.png) ![](/docs/img/security/authentication/images/enableguests2.png) The guest account will now be activated and added to your Octopus Users. ## Guest user permissions The guest user is created as a standard user managed by Octopus. If you are using Active Directory authentication, you don't need a matching AD user account. The user is automatically added to the **Everyone** team. The guest user can be found in the **Users** tab in the **Configuration** area: :::figure ![](/docs/img/security/authentication/images/guestuser.png) ::: As with any standard user, you can [assign the guest account to different teams](/docs/security/users-and-teams) to give them permissions to view projects or environments. :::div{.success} **Guest is read-only** The guest user is designed to be used by multiple people, so it has one additional limitation that other users do not have: the account is completely read-only, despite any roles it might be granted. For example, you could assign the guest user to your **Octopus Administrators** team, which normally gives the user full access to everything. However, for the guest account, this will be read-only - they will be able to view all settings, but they won't be able to change anything. They can't even change their profile settings! 
Any attempt to make any changes will result in the following message: ![](/docs/img/security/authentication/images/guestuserpermissions.png) ::: :::div{.warning} Please note, if you do add the guest user to your **Octopus Administrators** team, they will be able to view **all** settings and configuration. This includes viewing the license key, viewing the private keys for any uploaded certificates, and potentially other information you don't want readable. Depending on your use case, you may want to create a custom role instead. ::: ## Configuring guest login Octopus Server can be configured to enable or disable guest access via the command line, as follows: ```powershell Octopus.Server.exe configure --instance=[your_instance_name] --guestLoginEnabled=true ``` ## Automatic guest login Sometimes, you need to demonstrate an Octopus Server to others, but don't want people to have a choice between the guest login and one of the other login providers. In these cases, by appending `autologin=guest` to the sign in URL, visitors will be automatically logged in as a guest. This requires that the [Guest User is enabled](#enable-guest-user). For example: ```text https://octopus.mydomain.com/app#/users/sign-in?autologin=guest ``` This will allow visitors to `https://octopus.mydomain.com` to be automatically logged in as the guest account.
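If you generate demo links programmatically, the autologin URL shown above is simple to assemble. Here is a minimal sketch in Python; the base URL is a placeholder for your own instance, and the helper name is ours, not part of any Octopus SDK:

```python
from urllib.parse import urlencode

def guest_signin_url(base_url: str) -> str:
    """Build a sign-in URL that automatically logs the visitor in as a guest."""
    # Octopus uses hash routing, so the query string follows the fragment.
    query = urlencode({"autologin": "guest"})
    return f"{base_url.rstrip('/')}/app#/users/sign-in?{query}"

print(guest_signin_url("https://octopus.mydomain.com"))
# https://octopus.mydomain.com/app#/users/sign-in?autologin=guest
```

Remember that the link only works while the guest user is enabled on the instance.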
# octopus build-information list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-build-information-list.md List build information in Octopus Deploy ```text Usage: octopus build-information list [flags] Aliases: list, ls Flags: -q, --filter string filter build information by version that contains the filter --latest only return the latest build information according to SemVer -p, --package-id string filter build information by package id Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus build-information list octopus build-information ls octopus build-info list octopus build-info ls --package-id ThePackage octopus build-info ls --package-id ThePackage --filter 1.2.3 ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus build-information upload Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-build-information-upload.md Upload build information for one or more packages to Octopus Deploy. ```text Usage: octopus build-information upload [flags] Aliases: upload, push Flags: -p, --package-id stringArray The ID of the package, may be specified multiple times. Any arguments without flags will be treated as package IDs --version string The version of the package --file string Path to Octopus Build Information Json file --overwrite-mode string Action when a build information already exists. Valid values are 'fail', 'overwrite', 'ignore'.
Default is 'fail' Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus build-information upload --package-id SomePackage --version 1.0.0 --file buildinfo.octopus octopus build-information upload SomePackage --version 1.0.0 --file buildinfo.octopus --overwrite-mode overwrite octopus build-information push SomePackage --version 1.0.0 --file buildinfo.octopus octopus build-information upload PkgA PkgB PkgC --version 1.0.0 --file buildinfo.octopus ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Lifecycles and Environments Source: https://octopus.com/docs/best-practices/deployments/lifecycles-and-environments.md [Lifecycles](/docs/releases/lifecycles/) control the order of release promotion through different stages or environments in your pipeline. You can configure a lifecycle to require deployments to **development**, **test**, and **staging** prior to deployments to **production**. They are also used to set [retention policies](/docs/administration/retention-policies) (how long releases are saved) at a per-environment level. Octopus Deploy shares Lifecycles across an entire space. A project references lifecycles via [channels](/docs/releases/channels) and can reference 1 to N lifecycles. Lifecycles contain 1 to N phases, each representing a stage in your deployment lifecycle.
A phase can have 0 to N environments; for example, you could have a test phase that contains both **development** and **test** environments. Or, you could have a development phase for your **development** environment and a test phase for your **test** environment. ## Manually set your phases A lifecycle with no phases will result in Octopus automatically calculating the phases for you, with all environments included. The order of the phases is dependent on the order of the environments on the environment page. :::div{.hint} Every space has a default lifecycle without any phases. We do this to make it easy to get started with a proof of concept. ::: We recommend manually configuring the phases in your lifecycles, including the default lifecycle. The benefits are: - No surprises on the order of environments for your release. - Better performance, as Octopus doesn't have to calculate the phases for you. - Control over which phases are optional and retention policies. ## Number of lifecycles Your lifecycles should match your branching strategy. For example, you create a feature branch for new work; then once it is accepted, it is merged into main. That specific feature branch will never make it to **production**. Your lifecycles should reflect that. Suppose you have the typical set of environments, **development**, **test** (or QA), **staging** (or Pre-prod/UAT), and **production** with a feature-branch branching style. In that case, our recommendation is to have at least two lifecycles. - Development or Default lifecycle for feature branches: **development ➜ test** - Release lifecycle for the main branch: **staging ➜ production** Two lifecycles allow you to have your standard workflow, where all the feature branch work goes to **development** and **test**. Once the work is finished and merged into main, the code goes directly to **staging** and then **production**. We **_never_** recommend having a lifecycle with only **production**.
Any deployment to **production** must deploy to at least one other environment to verify the fix. Skipping straight to **production**, especially during an emergency, will make a bad situation worse. :::div{.hint} A lifecycle with a single phase is an anti-pattern. We typically see this when users strictly adhere to the [git flow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) branching strategy. If you create a new build, that build should be deployed to at least one environment to ensure it will work in **production**. ::: ## Production approval Do not use the [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) step for business owner approvals, CAB (change approval board) approvals, or other **production** approvals unless there is no other option. There are multiple reasons for this. - The manual intervention step was designed to pause a deployment to allow a person to review something before proceeding. For example, a DBA needs to review a database delta script before the database deployment runs, or a QA engineer needs to manually verify the new code version before all traffic is redirected to that version in the load balancer. - The manual intervention step was not designed to handle complex approval rules. For example, the person who triggered the deployment isn't permitted to approve the change, or anyone involved in the code changes cannot approve the deployment to **production**. - The manual intervention step runs during a deployment. That requires you to first start the deployment to **production** before it can be approved. This prevents scenarios where you can schedule a deployment at 4 PM to run at 2 AM the next day. We recommend leveraging the ITSM functionality that integrates with [ServiceNow](/docs/approvals/servicenow) or [JIRA Service Management](/docs/approvals/jira-service-management). The ITSM integration is designed with **production** approvals in mind.
You can get approval and schedule a deployment to production at 4 PM to run at 2 AM the next day. The approval workflow must be completed prior to the deployment starting. And the approval can follow the rules you built into the ITSM provider. :::div{.hint} The ITSM integration is limited to customers on the new Enterprise license tier. ::: If you cannot leverage ITSM integration, we recommend two approaches to **production** approvals. These are listed in order of precedence. 1. Restrict who can deploy to **production** to your operations or systems admins. Ensure they cannot make changes to the deployment process. When they click the deploy button, that is their "approval" to deploy to **production**. See [common RBAC scenarios](/docs/getting-started/best-practices/users-roles-and-teams) on how to set that up. 2. Add a **prod approval** environment to your lifecycle. An example lifecycle with a **prod approval** environment is **development ➜ test ➜ staging ➜ prod approval ➜ production**. The **prod approval** environment has all the manual intervention steps required for approval. After the release is "deployed" to the **prod approval** environment, it can then be scheduled for a **production** deployment. No manual intervention steps will be required in **production** as all approvals happened earlier. ## Automatic and optional phases Each phase has two different deployment options: - **Manual:** a release must be manually deployed to this phase. - **Automatic:** a release is automatically deployed to the phase as soon as it is ready. An automatic phase is similar to a database trigger; the logic is hidden unless the user knows the visual cue. If you want to use automatic phases, our recommendation is to make it a standard across all lifecycles. Or, clearly name the lifecycle to indicate it has automatic phases. Each phase can also be required or optional. - **Required:** at least one environment must have a successful deployment before the release can proceed. 
- **Optional:** the release can skip this phase. We recommend having at least one required phase before a **production** environment. :::div{.hint} While possible to configure, you cannot have an optional phase with automatic deployments. Octopus will ignore the automatic setting, and you will be forced to deploy manually. ::: ## Further reading For further reading on lifecycles and environments in Octopus Deploy, please see: - [Lifecycles](/docs/releases/lifecycles) - [Environments](/docs/infrastructure/environments) - [Channels](/docs/releases/channels) # Git credentials Source: https://octopus.com/docs/infrastructure/git-credentials.md Git credentials allow you to define your Git authentication credentials once, and reuse them across projects. You can manage your Git credentials by navigating to **Manage ➜ Git Credentials** in the Octopus Web Portal: :::figure ![The Git credentials area of Octopus Deploy](/docs/img/infrastructure/git-credentials/images/git-credentials.png) ::: ## Edit your Git credentials To edit an individual Git credential, click its name. From here, it is possible to edit the name, description, change the username and password, set repository restrictions, or delete the Git credential. ## Git credential permissions You can control who has access to view and edit Git credentials by assigning users to Teams and assigning roles to those teams. For more information, see the section on [managing users and teams](/docs/security/users-and-teams). ## Repository Restrictions You can optionally restrict the Git credential to specified repository URLs. These can be complete repository URLs, or you can add a wildcard at the end to include everything under that path. :::figure ![The Git credential details screen of Octopus Deploy](/docs/img/infrastructure/git-credentials/images/git-credential-details.png) ::: ## Older versions :::div{.hint} Repository Restrictions is only available in version 2025.4 and later. ::: ## Links [GitHub issue](https://github.com/OctopusDeploy/Issues/issues/9509) # octopus build-information view Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-build-information-view.md View build information in Octopus Deploy ```text Usage: octopus build-information view {<id>} [flags] Flags: -w, --web Open in web browser Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus build-information view BuildInformation-1 octopus build-info view BuildInformation-1 ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Automatic user creation Source: https://octopus.com/docs/security/authentication/auto-user-creation.md The Active Directory and OpenID Connect providers will, by default, automatically create a new user record for any user who can successfully authenticate but is not currently recognized (based on the checks and fallbacks described [here](#matching-external-identities)). This has its benefits in some scenarios; for example, if groups from Active Directory have been assigned access to teams in Octopus, then no administration is required in Octopus for new users who are added to those groups in Active Directory. All the users need to do is log in to Octopus and a user will be created and associated with the correct team(s), based on group assignment. However, this automatic user creation doesn't suit all scenarios, so it can be disabled.
To disable automatic user creation for the Active Directory provider, use the following command: ```powershell Octopus.Server.exe configure --activeDirectoryAllowAutoUserCreation=false ``` The OpenID Connect providers also support disabling automatic user creation, through their own options to the configure command. # Tag sets Source: https://octopus.com/docs/tenants/tag-sets.md Tag sets provide the structure for grouping similar tags together, resulting in more orderly metadata. Currently, tags can be applied to tenants, environments, projects, runbooks, and deployment targets, with support for additional resource types planned for the future. :::figure ![An example set of tenant tags](/docs/img/tenants/images/tag-sets.png) ::: :::div{.warning} From Octopus Cloud version **2025.4.3897** we have extended the functionality of tag sets to include the type and scope of a tag set. ::: ## Tag set types {#tag-set-types} There are three types of tag sets that can be created: - **MultiSelect:** Allows selecting multiple predefined tags from the tag set. This is the standard behavior and works for most scenarios. - **SingleSelect:** Allows selecting only one predefined tag from the tag set. Useful when you need to ensure only one option is chosen, such as a cloud provider or deployment tier. - **FreeText:** Allows entering custom text values without requiring predefined tags. The tag set name must match exactly, but the tag value can be any arbitrary text. Useful for dynamic values like region identifiers, customer IDs, or version numbers. When using FreeText, only one value per tag set is allowed. ## Tag set scopes {#tag-set-scopes} Tag sets can be scoped to specific resource types: - **Tenant** - **Environment** - **Project** - **Runbook** - **Target** A tag set can be scoped to multiple resource types, allowing you to use the same tag set across different resources.
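The selection rules for the three tag set types above can be summarized in a small validation sketch. This is an illustrative model only (the class and function names are ours, not part of any Octopus SDK):

```python
# Illustrative model of tag set selection rules (not Octopus code).
from dataclasses import dataclass, field

@dataclass
class TagSet:
    name: str
    type: str                                 # "MultiSelect", "SingleSelect", or "FreeText"
    tags: list = field(default_factory=list)  # predefined tags (unused for FreeText)

def validate_selection(tag_set: TagSet, values: list) -> bool:
    """Check a list of selected values against a tag set's type rules."""
    if tag_set.type == "MultiSelect":
        # Any number of values, but each must be a predefined tag.
        return all(v in tag_set.tags for v in values)
    if tag_set.type == "SingleSelect":
        # Exactly one value, and it must be a predefined tag.
        return len(values) == 1 and values[0] in tag_set.tags
    if tag_set.type == "FreeText":
        # Exactly one value, but it may be any arbitrary text.
        return len(values) == 1
    raise ValueError(f"Unknown tag set type: {tag_set.type}")

ring = TagSet("Release Ring", "SingleSelect", ["Alpha", "Beta", "Stable"])
region = TagSet("Region", "FreeText")

print(validate_selection(ring, ["Alpha"]))          # True
print(validate_selection(ring, ["Alpha", "Beta"]))  # False: only one tag allowed
print(validate_selection(region, ["us-west-2"]))    # True: any single value
```

The sketch mirrors the rules stated above: MultiSelect allows several predefined tags, SingleSelect exactly one predefined tag, and FreeText a single arbitrary value.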
## Managing tag sets {#managing-tag-sets}

Go to **Deploy ➜ Tag Sets** to create, modify and reorder tag sets and tags.

:::figure
![The tenant tag set edit screen](/docs/img/tenants/images/tenant-importance.png)
:::

### Design your tag sets carefully {#design-tag-sets-carefully}

We suggest taking some time to design your tag sets based on how you will apply them to your resources. Our recommendation is to make sure each of your tag sets is orthogonal, like different axes on a chart. This kind of design is important because of [how tags are combined when filtering](/docs/tenants/tenant-tags#tag-based-filters). Let's look at example tag sets:

- **Importance (VIP, Standard, Trial):** concerned with classifying resources so they can be found easily.
- **Hosting Region (West US, East US 2):** concerned with where resources are hosted or deployed.
- **Release Ring (Alpha, Beta, Stable):** concerned with when updates are applied.

Grouping tag sets makes it easier for each different class of Octopus user to understand which tags apply to their area, and the impact those tags will have on their deployments.

### Ordering tag sets and tags {#ordering-tag-sets}

Order is important for tag sets, and for tags within those tag sets. Octopus will sort tag sets and tags based on the order you define in the library. This allows you to tailor the Octopus user interface to your own situation.

:::figure
![Ordering of tenant tags shown in the deployment target restrictions section](/docs/img/tenants/images/tag-set-order.png)
:::

### Removing tags

If tags are in use by resources, included in project/runbook release [variable snapshots](/docs/releases#variable-snapshot) (via project/variable sets), or captured in published runbooks, you will not be able to delete the relevant tag(s) until these associations are removed. For projects using Config as Code, there are fewer guardrails in place. It's up to you to take care to avoid deleting any tags required by your deployments.
Similarly, for runbooks stored in version control, tag usage tracking is not supported, so you must manually ensure tags used by your Config as Code runbooks are not deleted. See our [core design decisions](/docs/projects/version-control/unsupported-config-as-code-scenarios#core-design-decision) for more information.

## Referencing tags {#referencing-tags}

Tags are referenced using their **canonical name** which looks like this: `Tag Set Name/Tag Name`

For example:

- `Release Ring/Alpha` - References the predefined "Alpha" tag in the "Release Ring" tag set
- `Importance/VIP` - References the predefined "VIP" tag in the "Importance" tag set
- `Region/us-west-2` - For FreeText tag sets, the tag set name "Region" must match exactly, but "us-west-2" can be any arbitrary value

You can use canonical names when:

- Deploying releases using [build server integrations](/docs/octopus-rest-api/) or the [Octopus CLI](/docs/octopus-rest-api/octopus-cli/).
- Scoping variables to tags.
- Automating Octopus via the [Octopus REST API](/docs/octopus-rest-api).

## Using tags with different resources

- **[Tenant tags](/docs/tenants/tenant-tags):** Learn how to use tags to classify tenants, deploy to multiple tenants, and design multi-tenant deployment processes.
- **[Environment tags](/docs/infrastructure/environments#environment-tags):** Learn how to use tags to classify environments by attributes like cloud provider, region, or tier.
- **[Project tags](/docs/projects/setting-up-projects#project-tags):** Learn how to use tags to classify and organize projects.
- **[Runbook tags](/docs/runbooks#runbook-tags):** Learn how to use tags to organize and filter runbooks with custom metadata.
- **[Target tag sets](/docs/infrastructure/deployment-targets/target-tags#target-tag-sets):** Learn how to use tag sets to group and assign target tags to deployment targets.
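As a small illustration of the canonical name format described under Referencing tags above, a canonical name is simply the tag set name and the tag name joined with a slash. The helper below is hypothetical (not part of the Octopus CLI) and only demonstrates the format:

```shell
# Hypothetical helper: build a canonical tag name ("Tag Set Name/Tag Name")
# from a tag set name and a tag name.
canonical_tag() {
  printf '%s/%s\n' "$1" "$2"
}

canonical_tag "Release Ring" "Alpha"   # prints: Release Ring/Alpha
canonical_tag "Region" "us-west-2"     # prints: Region/us-west-2
```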
## Learn more

- [Create a tag set via the REST API](/docs/octopus-rest-api/examples/tagsets/create-tagset)
- [Deployment patterns blog posts](https://octopus.com/blog/tag/Deployment%20Patterns)

# octopus channel

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-channel.md

Manage channels in Octopus Deploy

```text
Usage:
  octopus channel [command]

Available Commands:
  create      Create a channel
  help        Help about any command

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations

Use "octopus channel [command] --help" for more information about a command.
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus channel create
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# octopus channel create

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-channel-create.md

Create a channel in Octopus Deploy

```text
Usage:
  octopus channel create [flags]

Flags:
      --default              Set this channel as default
  -d, --description string   Description of the channel
  -l, --lifecycle string     The lifecycle to use for the channel
  -n, --name string          Name of the channel
  -p, --project string       Project to create channel in

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus channel create
octopus channel create --name "The Channel" --project "The Project" --lifecycle "Default Lifecycle" --default
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Create sanitized database backup

Source: https://octopus.com/docs/administration/managing-infrastructure/performance/create-sanitized-database-backup.md

When you contact Octopus Deploy support, sometimes we cannot reproduce the performance issue you're experiencing because it depends on circumstances specific to your instance. We may ask you to send us a database backup, which will allow us to reproduce the issue and help us resolve it accurately.
We do realize not everyone is comfortable uploading a database backup without getting a chance to sanitize it first. This guide will walk you through how to create a sanitized database backup.

1. Create the database backup.

   The easiest way to do this is to use SQL Server Management Studio: right-click on the database, select **Tasks ➜ Backup** and follow the wizard to create a full backup. You can also run this script to create the backup.

   ```sql
   BACKUP DATABASE [OctopusDeploy]
   TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
   WITH FORMAT;
   ```

2. Restore the backup to a different server.

   That new database is the database we are going to sanitize. The easiest way to do this is to use SQL Server Management Studio: right-click on **Databases**, select **Restore Database** and follow the wizard. We recommend naming it `OctopusDeploy_Sanitized` to make the intention clear. You can also run this script to restore the database.

   ```sql
   USE [master]
   RESTORE DATABASE [OctopusDeploy_Sanitized]
   FROM DISK = N'\\SomeServer\SomeDrive\OctopusDeploy.bak'
   WITH FILE = 2,
   MOVE N'OctopusDeploy' TO N'YOUR_DATA_DIRECTORY\OctopusDeploy_Sanitized.mdf',
   MOVE N'OctopusDeploy_log' TO N'YOUR_DATA_DIRECTORY\OctopusDeploy_Sanitized_log.ldf',
   NOUNLOAD, STATS = 5
   GO
   ```

3. Disable deployment targets, project triggers, workers, and more.

   Run the following T-SQL script to disable as much as possible on that database. In the next step, you'll create an Octopus Deploy instance. This script will prevent anything from running in the background that could potentially deploy or change anything in production.
```sql
Use [OctopusDeploy_Sanitized]
go

DELETE FROM OctopusServerNode

IF EXISTS (SELECT null FROM sys.tables WHERE name = 'OctopusServerNodeStatus')
    DELETE FROM OctopusServerNodeStatus

UPDATE Subscription SET IsDisabled = 1
UPDATE ProjectTrigger SET IsDisabled = 1
UPDATE Machine SET IsDisabled = 1

IF EXISTS (SELECT null FROM sys.tables WHERE name = 'Worker')
    UPDATE Worker SET IsDisabled = 1

DELETE FROM ExtensionConfiguration WHERE Id in ('authentication-octopusid', 'jira-integration')
```

4. On a new server, install Octopus Deploy.

   The installed version of Octopus Deploy should be the same version of Octopus Deploy you are running in production. After installing Octopus Deploy, the Octopus Manager will appear. You can close that and instead run these scripts to create the Octopus Deploy instance.

   :::div{.hint}
   Remember to run these scripts as **Administrator**.
   :::

   ```powershell
   Set-Location "C:\Program Files\Octopus Deploy\Octopus"
   .\Octopus.Server.exe create-instance --instance "Octopus" --config "C:\Octopus\OctopusServer.config" --serverNodeName "Sanitized"
   .\Octopus.Server.exe database --instance "Octopus" --connectionString "Data Source=YOURSERVER;Initial Catalog=OctopusDeploy_Sanitized;Integrated Security=False;User ID=YOURUSER;Password=YOURPASSWORD"
   ```

   :::div{.hint}
   When you run the above commands, you will get a warning about being unable to decrypt the database. You can ignore that.
   :::

5. Sanitize the database.

   This command will clean out all sensitive variables and PII data and generate a new master key on the database.

   :::div{.warning}
   **DO NOT** run this on the database of your production instance. Restoring any data lost after this command has finished executing is only possible using a full database backup along with the associated Master Key.
   :::

   ```powershell
   Set-Location "C:\Program Files\Octopus Deploy\Octopus"
   .\Octopus.Server.exe lost-master-key --instance "Octopus" --iReallyWantToResetAllMySensitiveData --upgradeDatabase --scrubPii --iHaveBackedUpMyDatabase
   ```

6. Create a new database backup.

   Create a new backup for the `OctopusDeploy_Sanitized` database. This is the backup you will upload to Octopus.

   ```sql
   BACKUP DATABASE [OctopusDeploy_Sanitized]
   TO DISK = '\\SomeServer\SomeDrive\OctopusDeploy.bak'
   WITH FORMAT;
   ```

7. Upload your database backup.

   In your email or forum thread with Octopus support, we will provide you with a secure and private link to upload your database backup. Only we have access to view and download files uploaded to this location, and we will only allow upload access to you. We will also ensure your forum thread is marked as private, if it hasn't been already, so only you and our team can see the link.

# octopus config

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-config.md

Manage the CLI configuration

```text
Usage:
  octopus config [command]

Available Commands:
  get         Gets the value of config key for Octopus CLI
  help        Help about any command
  list        List values from config file
  set         Set will write the value for given key to Octopus CLI config file

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations

Use "octopus config [command] --help" for more information about a command.
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# octopus config get

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-config-get.md

Gets the value of a config key for the Octopus CLI.
```text
Usage:
  octopus config get [key] [flags]

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Recovering after losing your Octopus Server and Master Key

Source: https://octopus.com/docs/administration/managing-infrastructure/lost-master-key.md

Sometimes the worst possible thing happens. The machine hosting Octopus Server dies irrecoverably, and you've discovered you don't have your Master Key! Whilst you cannot recover the data encrypted with your missing Master Key, this guide will help you get back up and running again.

If you are reading this page: [**please back up your Master Key**](/docs/octopus-rest-api/octopus.server.exe-command-line/show-master-key)

## Recover the Master Key

The fastest and easiest way to get up and running is to recover the Master Key. The only way to recover the Master Key is to get the dead machine up and running again. The Master Key is stored in the Octopus Server configuration file and encrypted using the machine's encryption key. Simply having a copy of the config file is not enough. The Master Key can only be decrypted by the machine where the config file came from.

## What is lost

Octopus [encrypts important and sensitive data](/docs/security/data-encryption) using a Master Key. This includes:

- The Octopus Server X.509 certificate which is used for [Octopus to Tentacle communication](/docs/security/octopus-tentacle-communication) - this means your Tentacles won't trust your Octopus Server any more.
- Sensitive variable values, wherever you have defined them.
- Sensitive values in your deployment processes, like the password for a custom IIS App Pool user account.
- Sensitive values in your deployment targets, like the password for creating [Offline Drops](/docs/infrastructure/deployment-targets/offline-package-drop).

## Recovering with a new Master Key

If you are confident with Octopus, you can follow these steps to get back up and running. Otherwise, please get in contact with our [support team](https://octopus.com/support) so we can help get you up and running again.

### Step 1. Back up before you start

Make sure to [back up everything](/docs/administration/data/backup-and-restore) before you start this process. At least this will help you start the process again from a known position.

### Step 2. Install Octopus Server on a new machine

Provision a new machine and install Octopus Server on it just like you would normally, **except** you won't be able to point it at your existing database because you don't have the Master Key. We are going to get your new Octopus Server up and running on a new database, and then trick it into pointing at your old database.

1. Install Octopus Server.
1. Either point it at a blank database you've created for this purpose, or let Octopus create a database for itself. **We will delete this afterwards.**
1. Load the Octopus Server user interface, click around a little bit, and make sure it looks like a healthy but empty instance of Octopus Server.
1. Run `Octopus.Server.exe service --stop` to stop the Octopus Server (we are going to reconfigure it).
1. Run `Octopus.Server.exe database --connectionString="YOUR-CONNECTION-STRING"` to point this Octopus Server at the database you are trying to recover.
1. Run `Octopus.Server.exe lost-master-key` and carefully follow the prompts. This will take you through each step and generate a detailed report of what has happened.
1. Run `Octopus.Server.exe service --start` to start the Octopus Server running against the recovered database.

**Please read the report carefully and get in touch with us if anything seems out of the ordinary.
Back up your new Octopus Server certificate and Master Key!**

### Step 3. Restore trust with your Tentacles

You will need to sign in to all the machines running Tentacle and run a command to make it trust the new Octopus Server certificate.

Run this script on each machine running Tentacle:

```
Tentacle.exe service --stop
Tentacle.exe configure --reset-trust --trust="YOUR-NEW-OCTOPUS-SERVER-THUMBPRINT"
Tentacle.exe service --start
```

These commands will:

- Stop the Tentacle agent.
- Clear all trusts so the Tentacle won't trust the old server certificate.
- Configure the Tentacle to trust the new Octopus Server certificate.
- Start the Tentacle agent again.

After this you should perform a health check on your Infrastructure and fix any problems that come up.

### Step 4. Re-enter all the sensitive values

There is no way to recover this data. You will need to go through and re-enter any sensitive values.

### Step 5. Back up your Octopus Server certificate and Master Key

You may have done this earlier in the process. If not, now is a great time to securely back up your Master Key and Octopus Server certificate!

### Test your backup

Now is a great time to test that your backup process worked and ensure you can restore quickly the next time a serious issue occurs. A backup isn't real unless you verify you can restore from it. Take your fresh Octopus backup and recently secured Master Key and attempt to restore your Octopus Server somewhere else to validate it will work when you need it to.

# Upgrading Octopus

Source: https://octopus.com/docs/administration/upgrading.md

This guide is for customers managing their self-hosted installation of Octopus. If you are using Octopus Cloud, we take care of everything for you, which means you don't need to worry about upgrades, and you have early access to all the latest features.

## About this guide

This guide provides various upgrade scenarios with the goal of mitigating risk.
## Core concepts

Octopus Deploy connects to a SQL Server database, and can be hosted:

- As a Windows Service, installed via an MSI.
- In a [Linux](/docs/installation/octopus-server-linux-container) container.

### Upgrade process

When running on Windows, the typical (manual) upgrade process is:

- Run the MSI to install the latest binaries.
- After the MSI finishes, it will close, and the **Octopus Manager** is launched to update each instance.

Once the **Octopus Manager** starts the upgrade process, downtime *will* occur. The upgrade should take anywhere from a minute to 30 minutes to complete depending on the number of database changes, the database's size, and compute resources. A good rule of thumb is: the greater the delta between versions, the longer the downtime. An upgrade from 2019.2.1 to 2020.5.1 will generally take longer than an upgrade from 2020.4.1 to 2020.5.1.

:::div{.hint}
[Automating your upgrade process](/docs/administration/upgrading/guide/automate-upgrades) will help reduce the total upgrade time. Automation also mitigates risk, as all steps, including backups, will be followed. We've found companies who automate their upgrade process are much more likely to stay up to date. The smaller the delta between versions, the faster the upgrade.
:::

#### Upgrades and the Service Watchdog

If you are using the [Service Watchdog](/docs/administration/managing-infrastructure/service-watchdog), you will need to cancel it before you start your upgrade and recreate it after the upgrade is finished. Please see [documentation on canceling the watchdog](/docs/administration/managing-infrastructure/service-watchdog/#ServiceWatchdog-CancelingTheWatchdog).

### Upgrading a highly available Octopus Deploy instance

You are required to install the same MSI on all servers or nodes in your highly available Octopus Deploy instance. The MSI installs the updated binaries, which include the latest database upgrade scripts.
Unlike the binaries, the database upgrade only needs to happen once.

:::div{.warning}
As of **2023.2.9755**, a database upgrade will abort if Octopus detects there are nodes still running. Ensure all nodes are properly shut down and try again.
:::

:::div{.warning}
A small outage window will occur when upgrading a highly available Octopus Deploy instance. The outage window will happen between when you shut down all the nodes and upgrade the first node. The window duration depends on the number of database changes, the size of the database, and compute resources. It is highly recommended to [automate your upgrade process](/docs/administration/upgrading/guide/automate-upgrades) to reduce that outage window.
:::

### Components

The Windows Service is split across multiple folders to make upgrading easy and low risk.

- **Install Location**: By default, the install location for Octopus on Windows is `C:\Program Files\Octopus Deploy\Octopus`. The install location contains the binaries for Octopus Deploy and is updated by the MSI.
- **SQL Server Database**: Since `Octopus Deploy 3.x`, the back-end database has been SQL Server. Each update can contain 0 to N database scripts embedded in a .dll in the install location. The **Octopus Manager** invokes those database scripts automatically.
- **Home Folder**: The home folder stores configuration, logs, and other items unique to your instance. The home folder is separate from the install location to make it easier to upgrade, downgrade, or uninstall/reinstall without affecting your instance. The default location of the home folder is `C:\Octopus`. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Instance Information**: The Octopus Deploy Manager allows you to configure 1 to N instances per Windows Server. The **Octopus Manager** stores a list of all the instances in the `C:\ProgramData\Octopus\OctopusServer\Instances` folder. Except in rare cases, this folder is left unchanged by the upgrade process.
- **Server Folders**: Logs, artifacts, packages, and event exports are too big for Octopus Deploy to store in a SQL Server database. The server folders are sub-folders in `C:\Octopus\`. Except in rare cases, these folders are left unchanged by an upgrade.
- **Tentacles**: Octopus Deploy connects to deployment targets via the Tentacle service. Each version of Octopus Deploy includes a specific Tentacle version. Tentacle upgrades do not occur until *after* the Octopus Deploy server is upgraded. Tentacle upgrades are optional; any Tentacle greater than 4.x will work [with any modern version of Octopus Deploy](/docs/support/compatibility). We recommend you upgrade them to get the latest bug fixes and security patches when convenient.
- **Calamari**: The Tentacles facilitate communication between Octopus Deploy and the deployment targets. Calamari is the software that does the actual deployments. Calamari and Octopus Deploy are coupled together. Calamari is upgraded automatically during the first deployment to a target.

## Supported Octopus Deploy Server Versions

Each self-hosted major.minor release of Octopus Deploy will receive *critical patches and support* for a period of **six months.** For example, 2025.4 was released in December 2025 and will be supported through May 2026.

All new releases of Octopus Deploy will run in Octopus Cloud first for at least one quarter. As a result, Octopus Cloud is always at least one version ahead of the self-hosted version. Because of that, we always recommend using the latest available release for your self-hosted installation of Octopus. Please see the [Octopus.com/downloads](https://octopus.com/downloads) page to download the latest version of Octopus Deploy.

For more details, please refer to our [blog post announcement from 2020](https://octopus.com/blog/releases-and-lts), when we introduced this release cadence.
## How we version Octopus Deploy

We use our version numbering scheme to help you understand the type of changes we have introduced between two versions of Octopus Deploy:

- **Major version change**: Beware of major breaking changes and new features.
  - Example: **Octopus 2019.x** to **Octopus 2020.x**.
  - Breaking changes mean downgrading will be difficult. Check our release notes for more details.
  - In **Octopus 2019.1** Spaces was introduced, changing the folder structure and user management.
  - In **Octopus 2020.1** we started requiring SQL Server 2016 or higher.
- **Minor version change**: New features, the potential for minor breaking changes, and database changes.
  - Example: **Octopus 2020.1.x** to **Octopus 2020.2.x**.
  - Upgrading should be easy, but rolling back will require restoring your database. Check our release notes for more details.
  - We will usually make changes to the database schema.
  - We will usually make changes to the API, being backward compatible wherever possible.
- **Build version change**: Small bug fixes and computational logic changes.
  - Example: **Octopus 2021.1.7500** to **Octopus 2021.1.7595** (upgrade or downgrade).
  - Patches should be **safe to update, safe to roll back**.
  - We will rarely make database changes, only if we absolutely must to patch a critical bug. If we do, the change will be safe for any other patches of the same release.
  - We may decide to make API changes, but any changes will be backward compatible.

If you're interested in more details about versioning Octopus, check out the blog post [Octopus Deploy version changes for 2018](https://octopus.com/blog/version-change-2018).
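To make the scheme above concrete, here is a small sketch (illustrative only, not an Octopus tool) that splits a version number into the three components discussed above, so you can see which part changed between two releases:

```shell
# Illustrative only: split an Octopus version number into its major, minor,
# and build components (major.minor.build).
version_parts() {
  echo "$1" | awk -F. '{ printf "major=%s minor=%s build=%s\n", $1, $2, $3 }'
}

version_parts "2021.1.7595"   # prints: major=2021 minor=1 build=7595
```

Comparing the output for two versions tells you which row of the scheme applies: a differing first component is a major version change, a differing second is a minor version change, and a differing third is a build version change.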
## Scenarios

Please pick from one of these upgrade scenarios:

- [Upgrading minor and patch releases](/docs/administration/upgrading/guide/upgrading-minor-and-patch-releases)
- [Upgrading major releases](/docs/administration/upgrading/guide/upgrading-major-releases)
- [Upgrading host OS or .NET version](/docs/administration/upgrading/guide/upgrade-host-os-or-net)
- Legacy Upgrades
  - [Upgrading from Octopus 4.x or 2018.x to latest version](/docs/administration/upgrading/guide/upgrading-from-octopus-4.x-2018.x-to-modern)
  - [Upgrading from Octopus 3.x to latest version](/docs/administration/upgrading/guide/upgrading-from-octopus-3.x-to-modern)
  - [Upgrade from 2.6.5 to 2018.10.x](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.6.5-2018.10lts)
  - [Upgrade from 2.x to 2.6.5](/docs/administration/upgrading/legacy/upgrading-from-octopus-2.x-2.6.5)
  - [Upgrade from 1.6 to 2.6.5](/docs/administration/upgrading/legacy/upgrading-from-octopus-1.6-2.6.5)

:::div{.hint}
Since Octopus Deploy 3.x, the backing database is SQL Server. Prior to Octopus Deploy 3.x, the backing database was RavenDB. That is why we consider any version released before 3.x a legacy upgrade.
:::

## Mitigating risk

The best way to mitigate risk is to automate the upgrade and/or create a test instance. Automation ensures all steps, including backups, are followed for every upgrade. A test instance allows you to test out upgrades and new features without affecting your main instance.

We also recommend performing a System Integrity Check on your live instance before attempting to upgrade. If the integrity check fails, please contact [support](https://octopus.com/support) with the [raw output of the task](/docs/support/get-the-raw-output-from-a-task), and we can get that fixed for you.
- [Perform a System Integrity Check](/docs/administration/managing-infrastructure/diagnostics)
- [Automating upgrades](/docs/administration/upgrading/guide/automate-upgrades)
- [Create a test instance](/docs/administration/upgrading/guide/creating-test-instance)

# Offload Work to Workers

Source: https://octopus.com/docs/best-practices/octopus-administration/worker-configuration.md

[Workers](/docs/infrastructure/workers) were introduced in **Octopus Server 2018.7** as a way to offload work done by the Octopus Server. Worker pools are groups of workers. You configure your deployment or runbook to run on worker pools.

Workers serve as "jump boxes" between the server and targets. They are used when the Tentacle agent cannot be installed directly on the target, such as databases, Azure Web Apps, or K8s clusters. Workers are needed because the scripts to update the database schema or the kubectl scripts to change the K8s cluster have to run somewhere.

:::figure
![Workers diagram](/docs/img/shared-content/concepts/images/workers-diagram-img.png)
:::

When you do a deployment or a runbook run with workers, a worker is leased from the pool; the work is done, then the worker is added back into the pool. The vast majority of the time, the same worker is used for a single runbook run or deployment. But the worker can change in the middle of the deployment; you should design your process around that assumption.

:::div{.hint}
The leasing algorithm is not round-robin. It looks for the worker with the least amount of active leases. Multiple concurrent deployments or runbook runs can run on a worker.
:::

Some important items to note about workers:

- Unlike deployment targets, workers are designed to run multiple tasks concurrently.
- **Octopus Server 2020.1** added the [Worker Pool Variable Type](/docs/projects/variables/worker-pool-variables) making it possible to scope worker pools to environments.
- **Octopus Server 2020.2** added the [execution container for workers](/docs/projects/steps/execution-containers-for-workers) feature, making it easier to manage software dependencies.
- We provide a [Tentacle Docker image](https://hub.docker.com/repository/docker/octopusdeploy/tentacle) that can be configured to run as a worker.

## Provided Workers

The Octopus Server includes a [built-in worker](/docs/infrastructure/workers/built-in-worker). When you configure a deployment or runbook to run tasks on the server, it is handing off that work to the built-in worker.

:::div{.hint}
Octopus Cloud runs the Octopus Linux container. To ensure maximum cross-compatibility with both Windows and Linux, the built-in worker is disabled on Octopus Cloud. Instead, we provide you with the ability to choose from 2 [dynamic workers](/docs/infrastructure/workers/dynamic-worker-pools), Windows Server 2019 and Ubuntu 22.04. Each worker type is a different worker pool.
:::

The built-in worker and [dynamic workers](/docs/infrastructure/workers/dynamic-worker-pools) were created to help get you started. Using them at scale will quickly expose their flaws.

- The built-in worker will run under the same account as the Octopus Deploy service. By default, that is `Local System`. You can change it to run under a different account, but it can only run under one account. You cannot change that account during a deployment or runbook run.
- The built-in worker may or may not be in the same data center as your deployment targets. You could experience some significant latency.
- Dynamic workers and built-in workers are limited to the specific versions of software installed on the host servers. Upgrading that software to a newer version results in a "big bang" change in your CI/CD pipeline, which increases risk.
- The IP address assigned to dynamic workers will change at most once an hour and at least once every 72 hours.
- Dynamic workers are assigned to an entire instance, not just a space.
We have seen cases where a deployment blocks on one space, blocking a deployment on another space, because they both used the same dynamic worker.
- There is only one dynamic worker per pool. Workers have some blocking tasks (installing Calamari and downloading a package). If a process needs to acquire a mutex for that blocking task, it has to wait until other tasks are done.

## Workers for Octopus at scale

If you plan on using Octopus Deploy at scale, [disable the built-in worker](/docs/infrastructure/workers/built-in-worker/#switching-off-the-built-in-worker) for self-hosted or stop using the dynamic workers, and host your own workers and worker pools.

- Establish an easy-to-understand naming convention for workers. For example, `p-db-omaha-worker-01` for a worker located in Omaha to do database deployments on Production.
- Configure workers to run in the same data centers as your deployment targets. For example, if you are hosting Octopus Deploy in an on-premises data center, but you are deploying to the US-central region in Azure, then create workers to run in that region in Azure.
- Name the worker pool to match the purpose, location, and environment. For example, `Azure Central US Production Worker Pool`.
- When possible, configure the underlying Tentacle Windows service to run as a specific Active Directory account to better control the permissions. Consider not only what it should have access to (this worker can run SQL scripts on a Dev SQL Server) but also what it shouldn't have access to (this worker cannot run SQL scripts on any Test or Production SQL Server).
- For redundancy, have at least two workers per pool.
- Whenever possible, leverage [execution containers for workers](/docs/projects/steps/execution-containers-for-workers) to limit the amount of software to install and maintain on the workers.

## Compute resources required

Workers don't need a lot of compute resources.
Our recommendations are:

- 1 CPU / 2 GB of RAM for Linux workers (both server and container)
- 2 CPU / 2 GB of RAM for Windows workers

Naturally, the more compute resources you add, the faster the worker will run. Monitor the resources of each worker and increase them when needed, or add more workers to spread out the load.

## The difference between workers and high availability nodes

When workers were introduced, there was some confusion about the difference between a worker and a [high availability node](/docs/administration/high-availability). They are not the same thing. Here are the key differences.

- A high availability node runs the Octopus Server service, while a worker runs the Octopus Tentacle service.
- A high availability node is responsible for hosting the Octopus Deploy UI, while a worker is not.
- A high availability node orchestrates and coordinates deployments and runbook runs, while a worker may be used to run 1 to N steps in a deployment or runbook run.

Think of the high availability node as the manager and the worker as the one doing the work.

## The difference between workers and deployment targets

Behind the scenes, there isn't much difference between a deployment target and a worker, as both are Tentacle agents. The difference is how they are registered, and how the server hosting the Tentacle will be used. Deployment targets are registered to environments, while workers are registered to worker pools. Deployment targets are machines you deploy to (web servers, application servers, etc.), while workers are machines used to perform the deployment to a deployment target.

A listening Tentacle can be registered as both a worker and a deployment target. We don't recommend it, but it is possible.

:::div{.hint}
All Octopus Cloud and self-hosted Server, Data Center, and Standard licenses offer unlimited workers.
:::

## Further reading

For further reading on workers in Octopus Deploy please see:

- [Workers](/docs/infrastructure/workers)
- [Built-in Worker](/docs/infrastructure/workers/built-in-worker)
- [Worker Pool Variable Type](/docs/projects/variables/worker-pool-variables)
- [Execution Container for Workers](/docs/projects/steps/execution-containers-for-workers)
- [Dynamic Workers](/docs/infrastructure/workers/dynamic-worker-pools)

# Azure

Source: https://octopus.com/docs/deployments/azure.md

Octopus Deploy can help you perform repeatable and controlled deployments of your applications into Azure. Out-of-the-box, Octopus provides built-in steps to deploy to the following Azure products:

- [Azure Web applications](/docs/deployments/azure/deploying-a-package-to-an-azure-web-app/) and [web jobs](/docs/deployments/azure/deploying-a-package-to-an-azure-web-app/deploying-web-jobs) (also works for [Azure Functions](https://octopus.com/blog/azure-functions)).
- [Resource Group Templates](/docs/runbooks/runbook-examples/azure/resource-groups).
- [Azure Cloud Services](/docs/deployments/azure/cloud-services).
- [Service Fabric](/docs/deployments/azure/service-fabric).
- [Executing PowerShell scripts using the Azure cmdlets](/docs/deployments/custom-scripts/azure-powershell-scripts/). Follow our guide on [running Azure PowerShell scripts](/docs/deployments/azure/running-azure-powershell).
- Can't find the one you are looking for? [Share your product feedback](https://roadmap.octopus.com/submit-idea) to let us know how we can help you have happy deployments.

With [runbooks](/docs/runbooks), Octopus provides built-in steps to help manage your infrastructure in Azure:

- [Resource Group Templates](/docs/runbooks/runbook-examples/azure/resource-groups).

:::div{.hint}
**Where do Azure steps execute?**

All steps that target an Azure deployment target (including script steps) execute on a worker. By default, that will be the built-in worker in the Octopus Server.
Learn about [workers](/docs/infrastructure/workers) and the different configuration options. ::: ## Learn more - Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites). - [Azure blog posts](https://octopus.com/blog/tag/azure/1). - [Azure runbook examples](/docs/runbooks/runbook-examples/azure). # Connecting securely with client certificates Source: https://octopus.com/docs/deployments/azure/service-fabric/connecting-securely-with-client-certificates.md As part of Service Fabric step templates, Octopus allows you to securely connect to a secure cluster by using client certificates. This page assumes you have configured your Service Fabric cluster in secure mode and have already configured your primary/server certificate when setting up the cluster (and have used an Azure Key Vault to store the server certificate thumbprint). :::div{.warning} This example will use a self-signed certificate for testing purposes, and assumes you are using Azure to host your Service Fabric cluster. 
::: During a Service Fabric deployment that uses Client Certificates for authentication, Calamari will set the following connection parameters before attempting to connect with the Service Fabric cluster: ```powershell $ClusterConnectionParameters["ServerCertThumbprint"] = $OctopusFabricServerCertThumbprint $ClusterConnectionParameters["X509Credential"] = $true $ClusterConnectionParameters["StoreLocation"] = $OctopusFabricCertificateStoreLocation $ClusterConnectionParameters["StoreName"] = $OctopusFabricCertificateStoreName $ClusterConnectionParameters["FindType"] = $OctopusFabricCertificateFindType $ClusterConnectionParameters["FindValue"] = $OctopusFabricClientCertThumbprint ``` These PowerShell variables correspond to the following Octopus variables: | PowerShell Variable | Octopus Variable | | -------------------------------------- | ------------------------------------------------------ | | $OctopusFabricCertificateFindType | Octopus.Action.ServiceFabric.CertificateFindType | | $OctopusFabricCertificateStoreLocation | Octopus.Action.ServiceFabric.CertificateStoreLocation | | $OctopusFabricCertificateStoreName | Octopus.Action.ServiceFabric.CertificateStoreName | | $OctopusFabricClientCertThumbprint | Octopus.Action.ServiceFabric.ClientCertThumbprint | | $OctopusFabricServerCertThumbprint | Octopus.Action.ServiceFabric.ServerCertThumbprint | It is these values and variables that we will be discussing below. ## Step 1: Get the DNS name of your Service Fabric cluster The following steps will need the DNS name of your Service Fabric cluster. The DNS name for Azure Service Fabric clusters can be found as the "Client connection endpoint" field on the "Overview" tab of your Azure Service Fabric cluster in the Azure portal. 
An example of a Service Fabric cluster's DNS name is: `demo-octopus-sf1-secure.australiasoutheast.cloudapp.azure.com`

## Step 2: Generate the client certificate

:::div{.warning}
Azure has recently updated the **Key vaults > Certificates** UI to allow generating self-signed certificates. If you're deploying to Azure and wish to generate a self-signed certificate for testing, please use the portal functions or cmdlets.
:::

Using PowerShell, you can easily generate a self-signed certificate for testing purposes. In this case, Octopus Server (the client) will be connecting to Service Fabric (the server) during a deployment. Therefore, this client certificate will need to reside on your Octopus Server machine. If you do not install this certificate manually, Octopus will attempt to install it automatically as part of your [Service Fabric target's](/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets) health check.

:::div{.hint}
In this PowerShell, we print the value of the certificate's thumbprint. Be sure to remember this thumbprint value, as you will need to store it in the Azure Key Vault used by Service Fabric:
:::

```powershell
$dnsName = "demo-octopus-sf1-secure.australiasoutheast.cloudapp.azure.com"
$cert = New-SelfSignedCertificate -DnsName $dnsName -CertStoreLocation "cert:\LocalMachine\My"
Write-Host $cert.Thumbprint
$password = ConvertTo-SecureString -String "MySuperSecurePasswordGoesHere" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath "C:\_export\demo-octopus-sf1-secure-server-cert.pfx" -Password $password
```

We can then take the exported certificate and thumbprint, and complete the following steps.

:::div{.hint}
**Location matters!**

By default, the Service Fabric steps assume the certificate store location is `LocalMachine` and that the certificate store name is `MY`. If you have this client certificate installed somewhere else, you will need to override these defaults using the variables mentioned below.
::: To override certificate settings used when connecting to Service Fabric, the following variables are available: | Variable | Default | Description | | --------------------------------------------------------- | ---------------- | ---------------------------------------- | | Octopus.Action.ServiceFabric.CertificateStoreLocation | LocalMachine | The store location that Octopus will pass as the 'StoreLocation' argument of the Service Fabric connection properties during a deployment (see the `StoreLocation` section of the [Connect-ServiceFabricCluster documentation](https://docs.microsoft.com/en-us/powershell/module/servicefabric/connect-servicefabriccluster))| | Octopus.Action.ServiceFabric.CertificateStoreName | MY | The store name that Octopus will pass as the 'StoreName' argument of the Service Fabric connection properties during a deployment (see the `StoreName` section of the [Connect-ServiceFabricCluster documentation](https://docs.microsoft.com/en-us/powershell/module/servicefabric/connect-servicefabriccluster)) | | Octopus.Action.ServiceFabric.CertificateFindType | FindByThumbprint | The type of FindValue for searching certificates in the Azure certificate store (see the `FindType` section of the [Connect-ServiceFabricCluster documentation](https://docs.microsoft.com/en-us/powershell/module/servicefabric/connect-servicefabriccluster)) | | Octopus.Action.ServiceFabric.CertificateFindValueOverride | | The FindValue for searching certificates in the Azure certificate store (see the `FindValue` section of the [Connect-ServiceFabricCluster documentation](https://docs.microsoft.com/en-us/powershell/module/servicefabric/connect-servicefabriccluster)) | You do not need to override these variables by default. However, they *are* available if you require more flexibility over the default client certificate connection parameters. ## Step 3: Install the client certificate Now that you have a client certificate and thumbprint, the following steps can be completed: 1. 
Install the certificate on your Octopus Server (the server that will be deploying to your Service Fabric cluster).
2. Upload the certificate to your Azure Key Vault (the vault that Service Fabric is configured to communicate with).
3. Add the thumbprint as a "Client certificate" to your Service Fabric security settings (Authentication type = **Admin client**, Authorization method = **Certificate thumbprint**).

The client certificate should now be set up for your Octopus Server machine to communicate with your Service Fabric cluster.

## Step 4: Configure and run a deployment step

In Octopus, Service Fabric deployment steps that use "Client Certificate" as the security mode will require you to enter the Server Certificate thumbprint and select the Client Certificate variable.

:::figure
![](/docs/img/deployments/azure/service-fabric/connecting-securely-with-client-certificates/secure-client-certs-template-b.png)
:::

## Connection troubleshooting

Calamari uses the [Connect-ServiceFabricCluster cmdlet](https://docs.microsoft.com/en-us/powershell/module/servicefabric/connect-servicefabriccluster?view=azureservicefabricps) to connect to your Service Fabric cluster. The connection parameters are logged at the Verbose level during a deployment to help you debug connection problems with your Service Fabric cluster.

If you wish to learn more about how Octopus connects securely to Service Fabric clusters, the PowerShell scripts used by Calamari can be [viewed here](https://github.com/OctopusDeploy/Sashimi.AzureServiceFabric/blob/main/source/Calamari/Scripts/AzureServiceFabricContext.ps1).

## Learn more

- Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites).
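A note on the thumbprint values used throughout this page: a certificate's thumbprint is simply the SHA-1 hash of its DER-encoded bytes, which is why the same value appears identically in the Windows certificate store, Azure, and Octopus. A minimal Python sketch of that relationship (illustrative only; not part of Octopus or Calamari, and the `.cer` filename is a placeholder):

```python
import hashlib

def certificate_thumbprint(der_bytes: bytes) -> str:
    """Return the SHA-1 thumbprint (as Windows displays it) of a DER-encoded certificate."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Example: read a DER-encoded (.cer) certificate exported from the store
# with open("client-cert.cer", "rb") as f:
#     print(certificate_thumbprint(f.read()))
```

This can be handy for confirming that the thumbprint you pasted into the Key Vault matches the certificate file you exported, without opening PowerShell.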
# Import certificates into Tomcat Source: https://octopus.com/docs/deployments/certificates/tomcat-certificate-import.md With the `Deploy a certificate to Tomcat` step, certificates managed by Octopus can be configured as part of a Tomcat instance to allow HTTPS traffic to be served. ## Prerequisites Before a certificate can be deployed to a Tomcat instance, the certificate itself must be uploaded to Octopus. [Add a certificate to Octopus](/docs/deployments/certificates/add-certificate) provides instructions on how to add a new certificate to the Octopus library. Once uploaded, the certificate has to be referenced by a variable. [Certificate variables](/docs/projects/variables/certificate-variables) provides instructions on how to define a certificate variable. ## Deploying a certificate to Tomcat The `Deploy a certificate to Tomcat` step is used to deploy a certificate managed by Octopus to a Tomcat instance. At a minimum, the `Tomcat Location` and `Tomcat Certificate` sections must be populated to deploy a certificate. ### Tomcat location fields The `Tomcat Location` section defines two fields. The first field is the `Tomcat CATALINA_HOME path`. This is the location of the root directory of the "binary" distribution of Tomcat. Specifically Octopus looks for the file `$CATALINA_HOME/lib/catalina.jar`, which is used to determine the Tomcat version. The second field is the `Tomcat CATALINA_BASE path`. This is the location of the root directory of the "active configuration" of Tomcat. Specifically Octopus looks for the `$CATALINA_BASE/conf/server.xml` file, which will be edited to reference the certificate being deployed. When a single binary distribution of Tomcat is shared among multiple users on the same server, `CATALINA_HOME` will be a different value to `CATALINA_BASE`. 
When the binary distribution of Tomcat is hosting only one instance, `CATALINA_HOME` and `CATALINA_BASE` will reference the same directory, and in this case the `Tomcat CATALINA_BASE path` field is optional.

### Tomcat certificate fields

The `Tomcat Certificate` section defines the details of the certificate being deployed.

The `Select certificate variable` field provides a list of all the certificate variables defined in the project. [Certificate Variables](/docs/projects/variables/certificate-variables) provides instructions on how to define a certificate variable.

The `Tomcat service name` field references the name of the service in the `conf/server.xml` file that the certificate will be deployed to. By default, the service is called `Catalina`, as defined by the `name` attribute in the `<Service>` element.

The `SSL implementation` field lists the standard Tomcat SSL implementations. Different versions of Tomcat support different SSL implementations. You can find more information on the implementations supported by each version of Tomcat in the following Tomcat documentation links:

* [Tomcat 9](https://tomcat.apache.org/tomcat-9.0-doc/ssl-howto.html#Edit_the_Tomcat_Configuration_File)
* [Tomcat 8.5](https://tomcat.apache.org/tomcat-8.5-doc/ssl-howto.html#Edit_the_Tomcat_Configuration_File)
* [Tomcat 8](https://tomcat.apache.org/tomcat-8.0-doc/ssl-howto.html#Edit_the_Tomcat_Configuration_File)
* [Tomcat 7](https://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html#Edit_the_Tomcat_Configuration_File)

:::div{.hint}
If you select an SSL implementation that is not supported by the version of Tomcat that the certificate is being deployed to, an error will be reported at deploy time.
:::

The `HTTPS port` field defines the HTTPS port of the Tomcat connector that will be created or edited by the step. The port is considered to be a connector identifier. This means that if a `<Connector>` element exists with the specified port, it will be updated with the new certificate.
If a `<Connector>` element does not exist with that port, a new connector will be created.

:::div{.hint}
Existing `<Connector>` elements can only be updated with the same SSL implementation. Octopus does not support changing the SSL implementation of an existing connector.
:::

## Advanced options

A number of advanced options can optionally be defined when deploying a certificate to Tomcat. These include configuring SNI hostnames, certificate passwords, aliases, and overriding the default filenames used when saving the certificates.

### Tomcat SNI options

Server Name Indication (SNI) is supported by Tomcat 8.5 and above to map a certificate to the hostname of the request. In this way a single Tomcat instance can be configured with multiple certificates on a single port.

The `Certificate SNI hostname` field defines the hostname that the deployed certificate will map to. If left blank, this value is assumed to be `_default_`, which is the default value for the `defaultSSLHostConfigName` attribute on the `<Connector>` element. For example, when set to the hostname `example.org`, the certificate being deployed will be used to secure requests to URLs like `https://example.org`.

:::div{.hint}
Defining the `Certificate SNI hostname` field will result in an error when deploying to Tomcat 8 and below.
:::

The `Default certificate` field can be used to indicate if the certificate being deployed will be the default for the connector. By selecting `Make this the default certificate`, this certificate will be used for any request to a hostname that does not have a certificate specifically mapped to it. Selecting `Leave this certificate's default status unchanged` will leave the existing default hostname unchanged.

:::div{.hint}
There must always be a default certificate. If the certificate being deployed is the only certificate available to the connector, it will be made the default even if `Make this the default certificate` is not selected.
::: ### Tomcat certificate options A number of optional settings around how the certificate is created are defined in the `Tomcat Certificate Options` section. These options differ depending on the SSL implementation that was selected. The JSSE SSL implementations of BIO, NIO and NIO2 rely on a Java KeyStore file. The APR implementation uses a certificate file and a PEM private key file. #### Java KeyStore options When no `Private key password` is defined, the Java KeyStore will have the default password of `changeit`. This is the default password specified by Tomcat. If a password is defined then that password will be used to secure the Java KeyStore and included in the Tomcat configuration. The `KeyStore filename` field can be used to define the location of the KeyStore created as part of the step. If left blank, the KeyStore file will be created with a unique filename in the `CATALINA_BASE/conf` directory, and the filename will be based on the certificate subject. If specified, a KeyStore will be created at the specified location, overwriting any existing file. Any value entered for the filename must be an absolute path. The `KeyStore alias` field defines the alias under which the certificate will be saved. If not defined, it will default to the alias of `octopus`. #### Certificate and PEM file options When no password is defined, PEM files created for the `APR` SSL implementation remain unencrypted, with a key file starting with `-----BEGIN RSA PRIVATE KEY-----`. When a password is defined, the key file is encrypted, starts with `-----BEGIN ENCRYPTED PRIVATE KEY-----`, and the password is included in the Tomcat configuration file. The `Private key filename` field is used to define the location of the private key PEM file. If left blank, the private key file will be created with a unique filename in the `CATALINA_BASE/conf` directory, and the filename will be based on the certificate subject. 
If specified, a private key file will be created at the specified location, overwriting any existing file. Any value entered for the filename must be an absolute path.

The `Public key filename` field is used to define the location of the public certificate. If left blank, the public certificate file will be created with a unique filename in the `CATALINA_BASE/conf` directory, and the filename will be based on the certificate subject. If specified, a certificate file will be created at the specified location, overwriting any existing file. Any value entered for the filename must be an absolute path.

## Multiple certificate types

In Tomcat 8.5 and above, [multiple certificates](https://octopus.com/blog/mixing-keys-in-tomcat) can be assigned to a single port. This is most useful for assigning an RSA and an ECDSA certificate, and allowing the client to select the most secure option.

When exporting a certificate from Octopus to Tomcat 8.5+, the type of certificate is automatically determined, and multiple certificates of different types can be assigned to the same port. For example, if you had both an RSA and an ECDSA certificate managed by Octopus, and you had two `Deploy a certificate to Tomcat` steps that deployed each certificate to the same Tomcat 8.5+ instance and the same port, you would end up with a configuration that looks like this (the certificate filenames are illustrative):

```xml
<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol" SSLEnabled="true" scheme="https" secure="true">
  <SSLHostConfig>
    <Certificate certificateFile="${catalina.base}/conf/rsa.crt" certificateKeyFile="${catalina.base}/conf/rsa.key" type="RSA"/>
    <Certificate certificateFile="${catalina.base}/conf/ecdsa.crt" certificateKeyFile="${catalina.base}/conf/ecdsa.key" type="EC"/>
  </SSLHostConfig>
</Connector>
```

:::div{.hint}
Although the example above uses the `APR` protocol, any protocol can be used to deploy multiple certificate types.
:::

With this configuration, newer browsers would select the ECDSA certificate, while older browsers may fall back to the RSA certificate.

## Configuration file backups

Before any change is made to the `server.xml` file, it is saved to the `octopus_backup.zip` archive. This archive can be used to restore previous versions of the `server.xml` file.
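The backup archive described above can be inspected and restored with any zip tool. As a minimal sketch (the conf path and the assumption that `server.xml` sits at the root of the archive are illustrative; list the archive contents with `ZipFile.namelist()` first to see what your archive actually holds):

```python
import zipfile
from pathlib import Path

def restore_from_backup(conf_dir: str, member: str = "server.xml") -> Path:
    """Extract a file from octopus_backup.zip back into CATALINA_BASE/conf.

    The archive name comes from the docs above; the member name inside the
    archive is an assumption -- inspect the archive before restoring, and
    note that extract() overwrites the current file of the same name.
    """
    conf = Path(conf_dir)
    with zipfile.ZipFile(conf / "octopus_backup.zip") as backup:
        backup.extract(member, path=conf)
    return conf / member
```

Restart (or reload) Tomcat after restoring so the reverted `server.xml` takes effect.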
# Passing parameters to scripts

Source: https://octopus.com/docs/deployments/custom-scripts/passing-parameters-to-scripts.md

Octopus can pass parameters to your custom script files for any of the supported scripting languages. This means you can use existing scripts, or write and test your own parameterized scripts that have no knowledge of Octopus, passing Octopus Variables directly to your scripts as parameters. The Octopus scripting API is still available within the context of your script, meaning you can use a mixture of parameters and other Octopus variables and functions.

Consider this example PowerShell script:

**PowerShell script using Octopus Variables**

```powershell
$environment = $OctopusParameters["Octopus.Environment.Name"]
Write-Host "Environment: $environment"
```

You can parameterize this script, making it easier to test outside of Octopus:

**PowerShell script using parameters**

```powershell
param (
  [Parameter(Mandatory=$True)]
  [string]$Environment
)
Write-Host "Environment: $Environment"
```

When you call external scripts (sourced from a file inside a package) you can pass parameters to your script. This means you can write "vanilla" scripts that are unaware of Octopus, and test them in your local development environment. You can define your parameters in the **Script Parameters** field using the format expected by your scripting execution environment (see below for examples).

:::figure
![Script Parameters](/docs/img/deployments/custom-scripts/images/script-parameters.png)
:::

:::div{.hint}
**Delimiting string values**

Don't forget to correctly delimit your parameters for the scripting engine.
In the example above we have surrounded the parameter value in double-quotes to handle cases where the Environment Name has spaces: `"#{Octopus.Environment.Name}"`
:::

## Passing parameters to PowerShell scripts \{#passing-parameters-powershell}

You can pass parameters to PowerShell scripts as if you were calling the script yourself from PowerShell, using positional or named parameters.

**Script Parameters in Octopus**

```bash
-Environment "#{Octopus.Environment.Name}" -StoragePath "#{MyApplication.Storage.Path}"
```

**Usage in PowerShell script**

```powershell
Param (
  [Parameter(Mandatory=$True)]
  [string]$Environment,
  [Parameter(Mandatory=$True)]
  [string]$StoragePath
)

Write-Host "$Environment storage path: $StoragePath"
```

## Passing parameters to C# scripts \{#passing-parameters-csharp}

You can pass parameters to C# scripts [as described here for the dotnet-script engine](https://github.com/dotnet-script/dotnet-script#passing-arguments-to-scripts).

**Script Parameters in Octopus**

```bash
-- "#{Octopus.Environment.Name}" "#{MyApplication.Storage.Path}"
```

**Usage in C# script**

```csharp
var environment = Args[0];
var storagePath = Args[1];

Console.WriteLine("{0} storage path: {1}", environment, storagePath);
```

## Passing parameters to Bash scripts \{#passing-parameters-bash}

You can pass parameters to Bash scripts [as described in the Bash manual.](https://www.gnu.org/software/bash/manual/bash.html#Positional-Parameters)

**Script Parameters in Octopus**

```bash
"#{Octopus.Environment.Name}" "#{MyApplication.Storage.Path}"
```

**Usage in Bash script**

```bash
environment="$1"
storagePath="$2"

echo "$environment storage path: $storagePath"
```

## Passing parameters to F# scripts \{#passing-parameters-fsharp}

You can pass parameters to F# scripts [as described by the F# documentation.](https://docs.microsoft.com/en-us/dotnet/fsharp/tools/fsharp-interactive/#using-the-fsi-object-in-f-code)

**Script Parameters in Octopus**

```bash
"#{Octopus.Environment.Name}" "#{MyApplication.Storage.Path}"
```

**Usage in F# script**

```fsharp
let environment = fsi.CommandLineArgs.[1]
let storagePath = fsi.CommandLineArgs.[2]

printfn "%s storage path: %s" environment storagePath
```

## Passing parameters to Python3 scripts \{#passing-parameters-python}

You can pass parameters to Python scripts [as described by the Python documentation.](https://docs.python.org/3/tutorial/interpreter.html#argument-passing)

**Script Parameters in Octopus**

```python
'#{Octopus.Environment.Name}' '#{MyApplication.Storage.Path}'
```

**Usage in Python3 script**

```python
import sys

environment = sys.argv[1]
storagePath = sys.argv[2]

print("Parameters {} {}".format(environment, storagePath))
```

:::div{.hint}
**Note:** If your Python scripts make use of [argparse](https://docs.python.org/3/library/argparse.html), it's possible you might encounter an error at execution time, as Calamari bootstraps the execution of the Python script as part of the deployment or runbook run.
:::

# Elastic and transient environments

Source: https://octopus.com/docs/deployments/patterns/elastic-and-transient-environments.md

Elastic and transient environments is a group of features to facilitate deploying to machines that are intermittently available for deployment.

## Scenarios

### Auto-scaling infrastructure

OctoFX has become so popular that additional servers are required to manage peak load. At peak times servers are provisioned and the latest version of OctoFX is deployed to those servers. When demand wanes the additional servers are terminated.

### Intermittent connectivity

OctoFX is being deployed to trading desks in offices around the world. Occasionally, unknown to the deployment team, the machines that host OctoFX are taken down for maintenance. OctoFX must be kept up to date when a machine comes back online.

## Elastic and transient environment features

- Automatically keep deployment targets up to date with the latest releases.
- Automatically reflect infrastructure changes in your environments. - Deploy to environments where deployment target status may change during the deployment. ## Learn more - [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1). # Applying changes from Terraform templates Source: https://octopus.com/docs/deployments/terraform/apply-terraform-changes.md The Terraform [apply command](https://www.terraform.io/cli/commands/apply) is used to execute changes based on a [Terraform execution plan](/docs/deployments/terraform/plan-terraform). Octopus has two steps that execute plan information: - `Apply a Terraform template` and - `Destroy Terraform resources`. As their names suggest, the `Apply a Terraform template` step will execute the additions indicated by the execution plan, while the `Destroy Terraform resources` step will destroy the resources marked for removal by the execution plan. :::figure ![Octopus Steps](/docs/img/deployments/terraform/apply-terraform-changes/images/octopus-terraform-apply-step.png) ::: :::div{.warning} The plan steps do not support saving the plan to a file and applying that file at a later date. This means the plan information only makes sense when the same values are used in the plan and apply/destroy steps. Configuring shared variables for the step fields ensures that the same values will be used. ::: ## Step options The planning steps offer the [same base configuration as the other built-in Terraform steps](/docs/deployments/terraform/working-with-built-in-steps). You can refer to the documentation for those steps for more details on the options for the plan steps. ## Advanced options section You can optionally control how Terraform downloads plugins and where the plugins will be located in the `Advanced Options` section. - The `Terraform workspace` field can optionally be set to the desired workspace. 
If the workspace does not exist, it will be created and selected; if it does exist, it will be selected.
- The `Terraform plugin cache directory` can optionally be set to a directory where Terraform will look for existing plugins, and optionally download new plugins into. By default, this directory is not shared between targets, so additional plugins have to be downloaded by all targets. By setting this value to a shared location, the plugins can be downloaded once and shared amongst all targets.
- The `Allow additional plugin downloads` option can be checked to allow Terraform to download missing plugins, and unchecked to prevent these downloads.
- The `Custom terraform init parameters` option can be set to any additional parameters to pass to the `terraform init` action.
- The `Custom terraform apply parameters` option can be set to any additional parameters to pass to the `terraform apply` action.

![Terraform Advanced Options](/docs/img/deployments/terraform/images/terraform-advanced.png)

# Create and Deploy a Release

Source: https://octopus.com/docs/getting-started/first-deployment/legacy-guide/2022/create-and-deploy-a-release.md

[Getting Started - Create Release And Deployment](https://www.youtube.com/watch?v=syfl59pR4ZU)

A release is a snapshot of the deployment process and the associated assets (packages, scripts, variables) as they existed when the release was created. Our hello world deployment process only has one step that executes the script we entered in the previous section. When you deploy the release, you execute the deployment process with all the associated details, as they existed when the release was created.

1. Click **CREATE RELEASE**.
1. The release is created and given a version number. There is a space to add release notes—click **SAVE**.
1. To deploy this version of the release, click **DEPLOY TO TEST...**.
The next screen gives you the details of the release you are about to deploy: :::figure ![Deploy release screen in the Octopus Web Portal](/docs/img/getting-started/first-deployment/legacy-guide/images/deploy-release.png) ::: 4. To deploy the release, click **Deploy**. 5. The next screen displays a task summary. If you click the **TASK LOG**, you'll see the steps Octopus took to execute your hello world script. Because we didn't define any deployment targets for the **Test** environment, Octopus leased a [dynamic worker](/docs/infrastructure/workers/dynamic-worker-pools/#on-demand) (a machine that executes tasks on behalf of the Octopus Server) that was then used to execute the hello world script. If you are on a self-hosted instance of Octopus Deploy, you won't see that message. :::figure ![The results of the Hello world deployment](/docs/img/getting-started/first-deployment/legacy-guide/images/deployed-release.png) ::: You have finished your first deployment! But there is still a bit of work to do. The next step will [define and use variables](/docs/getting-started/first-deployment/define-and-use-variables) in the deployment process. **Further Reading** For further reading on creating releases in Octopus Deploy please see: - [Releases Documentation](/docs/releases) - [Deployment Documentation](/docs/deployments) - [Patterns and Practices](/docs/deployments/patterns) # Defining the runbook process for workers Source: https://octopus.com/docs/getting-started/first-runbook-run/define-the-runbook-process.md The runbook process is the steps the Octopus Server orchestrates to perform various tasks on your infrastructure. To first understand how runbook processes work, we will add a single step to run on the Octopus Server (if self-hosted) or on a worker (if Octopus Cloud). Future steps in this tutorial will configure additional steps to run on your servers. 
:::figure ![The Hello world deployment process](/docs/img/getting-started/first-runbook-run/images/runbook-process.png) ::: 1. From the *Hello Runbook* runbook you created on the previous page, click **DEFINE YOUR RUNBOOK PROCESS**. 1. Click **ADD STEP**. 1. Select the **Script** tile to filter the types of steps. 1. Scroll down and click **ADD** on the **Run a Script** tile. 1. Accept the default name for the script and leave the **Enabled** check-box ticked. 1. In the **Execution Location** section, select **Run once on a worker** (if you are on self-hosted Octopus, select **Run once on the Octopus Server**). If you are using Octopus Cloud and want to use Bash scripts change the worker pool from **Default Worker Pool** to **Hosted Ubuntu**. 1. Scroll down to the **Script**, select your script language of choice, and enter the following script in the **Inline Source Code** section:
PowerShell ```powershell Write-Host "Hello, World!" ```
Bash ```bash echo "Hello, World!" ```
:::div{.hint} If you are using Octopus Cloud, Bash scripts require you to select the **Hosted Ubuntu** worker pool. The **Default Worker Pool** is running Windows and doesn't have Bash installed. ::: 1. Click **SAVE**. The next step will [run the runbook](/docs/getting-started/first-runbook-run/running-a-runbook). **Further Reading** For further reading on runbook processes and what is possible please see: - [Runbook Examples](/docs/runbooks/runbook-examples) - [Runbook Documentation](/docs/runbooks) # Managing Octopus subscriptions Source: https://octopus.com/docs/getting-started/managing-octopus-subscriptions.md Control Center is where you manage your Octopus subscriptions and their associated user access. There are two types of Octopus subscriptions: 1. **Cloud instances**: Deployments-as-a-service 2. **Server licenses**: Octopus on your infrastructure ## Billing ### Upgrade a trial to a paid subscription Cloud instance: 1. Navigate to your Cloud instance in [Control Center](https://billing.octopus.com/). 2. Click **Upgrade Plan**. 3. Choose your plan and complete the purchase through our checkout. Server license: 1. Navigate to your Server License in [Control Center](https://billing.octopus.com/). 2. Click **Upgrade Plan**. 3. Choose your plan and complete the purchase through our checkout. ### Pay an invoice We will send you a unique payment link to pay your invoice through Zuora's secure payment page. Please be cautious of phishing attempts. You can verify the payment page's authenticity by checking it displays the correct invoice number. ### Update billing information Please [contact sales](https://octopus.com/company/contact) to update your billing information. ### View orders 1. Navigate to your subscription in [Control Center](https://billing.octopus.com/). 2. Click **Billing** in the left sidebar. 3. Click **Contact Sales**. 4. Complete the form and we'll get back to you with the order details. ### Change your plan To modify your plan: 1. 
Navigate to your subscription in [Control Center](https://billing.octopus.com/). 2. Click **Billing** in the left sidebar. 3. Click **Get in Touch** under the change plan section. 4. A contact sales dialog will appear for you to request changes to your plan. ### Cancel your plan To cancel your plan: 1. Navigate to your subscription in [Control Center](https://billing.octopus.com/). 2. Click **Billing** in the left sidebar. 3. Click **Get in Touch** under the change plan section. 4. A contact sales dialog will appear for you to cancel your plan. ## Configuration ### Change outage window (Cloud only) To keep Octopus Cloud running smoothly, we use outage windows to perform updates. To minimize disruptions to your deployments, please pick a two-hour [maintenance window](/docs/octopus-cloud/maintenance-window) outside of your regular business hours. 1. Navigate to your subscription in [Control Center](https://billing.octopus.com/). 2. Click **Configuration** in the left sidebar. 3. Click **Change Window**. 4. Specify the start and end times. 5. Click **Submit**. ### Change instance URL (Cloud only) 1. Navigate to your subscription in [Control Center](https://billing.octopus.com/). 2. Click **Configuration** in the left sidebar. 3. Click **Change URL**. 4. Specify the new URL. 5. Click **Submit**. ## Manage user access ### Access levels in Control Center There are two access levels in Control Center: - **Subscription Group access**: manage a subscription group and access all current and future subscriptions in the group. - **Direct Subscription access**: access a specific subscription. ### Subscription Group access #### Invite a user to Subscription Group access Invite a user to manage a subscription group and access all current and future subscriptions in the group. 1. In the [Control Center](https://billing.octopus.com/) dashboard, locate your subscription group. 2. Click **User Access**. 3. Click **Invite User**. 4. Enter the user’s details. 5. 
Select which role to give the user ([see role permissions below](#role-permissions-for-subscription-group-access)). 6. Click **Invite**. :::figure ![Invite users to a subscription group in Control Center](/docs/img/getting-started/managing-octopus-subscriptions/images/subscription-group-access.png) ::: #### Email invitation The invited user will receive an email invitation. If they already have an [Octopus ID](/docs/security/authentication/octopusid-authentication) (Octopus Deploy account), they just need to click **Accept invite** to gain access to the subscription group and then click **Sign in** to view it. Otherwise, they will first need to **Register** a new account using the email address the invitation was sent to. #### Role permissions for Subscription Group access ##### Group-level

| | Administrator | Technical Manager | Billing Manager |
| ------------ | ------------------------ | ------------------------------ | ----------------------- |
| **Control Center** (billing.octopus.com) | Rename/Delete Group<br>Manage User Access | Rename/Delete Group<br>Manage User Access | - |

##### Subscription-level
Cloud

| | Administrator | Technical Manager | Billing Manager |
| ------------ | ------------------------ | ------------------------------ | ----------------------- |
| **Control Center** (billing.octopus.com) | View Overview<br>Manage Billing<br>Manage Configuration<br>Manage User Access | View Overview<br>Manage Configuration<br>Manage User Access | View Overview<br>Manage Billing |
| **Octopus Instance** (example.octopus.com) | “Octopus Managers” team | “Space Managers” team | - |

:::div{.hint} Octopus uses teams and user roles to manage permissions. The “Octopus Managers” and “Space Managers” teams provide different levels of access in your instance. Learn about best practices for [users, roles, and teams](/docs/best-practices/octopus-administration/users-roles-and-teams). :::
Server

| | Administrator | Technical Manager | Billing Manager |
| ------------ | ------------------------ | ------------------------------ | ----------------------- |
| **Control Center** (billing.octopus.com) | View License Key<br>Manage Billing<br>Manage User Access | View License Key<br>Manage User Access | View License Key<br>Manage Billing |
### Direct Subscription access #### Invite a user to Direct Subscription access Invite a user to access a specific subscription. ##### Cloud 1. Navigate to your Cloud instance in [Control Center](https://billing.octopus.com/). 2. Click **User Access** in the left sidebar. 3. Click **Invite User**. 4. Enter the user’s details. 5. Select which role to give the user ([see role permissions below](#role-permissions-for-direct-access)). 6. Click **Invite**. ##### Server 1. Navigate to your Server license in [Control Center](https://billing.octopus.com/). 2. Click **Admin Access** in the left sidebar. 3. Click **Invite Admin**. 4. Enter the user’s details. 5. Select which role to give the user ([see role permissions below](#role-permissions-for-direct-access)). 6. Click **Invite**. :::figure ![Invite users to a specific subscription in Control Center](/docs/img/getting-started/managing-octopus-subscriptions/images/direct-access.png) ::: #### Email invitation \{#email-invitation} The invited user will receive an email invitation. If they already have an [Octopus ID](/docs/security/authentication/octopusid-authentication) (Octopus Deploy account), they just need to click **Accept invite** in the email to gain access to the subscription and then click **Sign in** to view the Octopus instance. Otherwise, they will first need to **Register** a new account using the email address the invitation was sent to. :::div{.hint} **Cloud instances note:** Invited users are only added to an Octopus Cloud instance after their first sign-in. To manage a newly invited user’s permissions, you will need to ask them to sign in to your Octopus Cloud instance first. ::: #### Role permissions for Direct access
Cloud

| | Cloud Subscription Owner | Cloud Subscription User (Contributor) | Cloud Subscription User (Base) |
| --------------------------- | ---------------------------------------- | ------------------------------------------------ | ------------------------------------------------ |
| **Control Center** (billing.octopus.com) | View Overview<br>Manage Billing<br>Manage Configuration<br>Manage User Access | View Overview | View Overview |
| **Octopus Instance** (example.octopus.com) | “Octopus Managers” team<br>By default, the user has full permissions across all spaces. | “Space Managers” team<br>By default, the user has full permissions in the “Default” space only.<br>If you delete the “Default” space, the user will be added to the “Everyone” team. | “Everyone” team<br>By default, the user can sign in but can't view or do anything. |

:::div{.hint} Octopus uses teams and user roles to manage permissions. The “Octopus Managers”, “Space Managers”, and “Everyone” teams provide different levels of access in your instance. Learn about best practices for [users, roles, and teams](/docs/best-practices/octopus-administration/users-roles-and-teams). :::
Server

| | Server License Owner | Server License Viewer |
| ------------ | ------------------------------ | ------------------------------|
| **Control Center** (billing.octopus.com) | View License Key<br>Manage Billing<br>Manage User Access | View License Key |
### Change a user's role To change a user's role, you must remove that user's access and then re-invite them to the role you want them to have. ### Delete a user Deleting Subscription Group access users: 1. Navigate to the dashboard and locate your subscription group. 2. Click **User Access**. 3. Locate the user in the table and click the trash icon. 4. Click **Delete** in the confirmation dialog. Deleting Direct access users: 1. Navigate to your subscription. 2. Click **User Access** in the left sidebar. 3. Locate the user in the table and click the trash icon. 4. Click **Delete** in the confirmation dialog. ## Help and support The question mark icon in the top right of the Control Center provides a menu of helpful links to the docs, to contact support, and to upload support files. ## FAQ ### Locating subscriptions Most subscriptions (Cloud instances and Server licenses) are accessible from the dashboard of Control Center. Some legacy subscriptions are only accessible from the legacy Control Center V1. If you need help please contact our [support team](https://octopus.com/support). ### What is Control Center V1? [Control Center V1](https://octopus.com/control-center) is our legacy system where legacy subscriptions are managed. You should only access Control Center V1 if you need to manage a legacy subscription. # SSH key pair account Source: https://octopus.com/docs/infrastructure/accounts/ssh-key-pair.md An SSH key pair account is one of the more secure authentication methods available for connections to [SSH Targets](/docs/infrastructure/deployment-targets/linux/ssh-target). ## Creating an SSH key pair Before you can configure the SSH key pair account in Octopus, you need to generate public and private keys. This can be done on either the [Linux target](#generate-key-pair-linux) or the [Octopus Server](#generate-key-pair-windows). 
### Generating a key pair on Linux {#generate-key-pair-linux} :::div{.hint} From **Octopus 2021.1.7466**, Octopus supports newer ED25519 SSH keys. For older versions, and legacy compatibility, please follow the RSA instructions. ::: 1. Run the following command on your Linux server:
ED25519 ```bash ssh-keygen -t ed25519 ```
RSA ```bash ssh-keygen -t rsa -m PEM ```
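If you are scripting key creation rather than running it interactively, the prompts described below can be skipped by supplying the answers as flags. A minimal sketch, assuming an illustrative output path (`-f` sets the key file, `-N ""` sets an empty passphrase, `-C` adds a comment):

```shell
# Non-interactive ED25519 key generation (path and comment are illustrative).
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/id_ed25519_octopus" -N "" -C "octopus-deploy"

# Two files are produced: the private key and its public half (.pub).
ls "$HOME/.ssh/id_ed25519_octopus" "$HOME/.ssh/id_ed25519_octopus.pub"
```

An empty passphrase is convenient for automation, but as noted later on this page, encrypting the private key is recommended if it will be stored on disk.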
This will bring up an interactive dialog, prompting for: 2. The folder that the generated keys will be placed in, defaulting to `~/.ssh/id_ed25519` or `~/.ssh/id_rsa`, depending on your selection above. 3. Enter a passphrase (or press enter for no passphrase). 4. If you entered a passphrase, re-enter the passphrase. You now have two files: - `id_ed25519` or `id_rsa` (the private key) - `id_ed25519.pub` or `id_rsa.pub` (the public key) The public key will be stored on this (the Linux) server and the private key will be copied to the Octopus Server. 5. Copy the public key to the `authorized_keys` file that is used during authentication:
ED25519 ```bash cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys ```
RSA ```bash cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys ```
6. Modify the permissions of the `authorized_keys` file: ```bash chmod 600 ~/.ssh/authorized_keys ``` 7. Copy the private key to the machine your Octopus Server is installed on. Proceed to [creating the SSH key pair account](#create-ssh-account). If you need more information about generating an SSH key pair, see the [useful links section](#useful-links). ### Generating a key pair on Windows {#generate-key-pair-windows} The easiest way to generate valid keys on Windows is to use a tool like [PuTTYgen](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html). Start by clicking "Generate" and wait for the tool to finish creating the random key pair. :::figure ![](/docs/img/infrastructure/accounts/ssh-key-create-putty.png) ::: Provide your passphrase if desired and export the private key to the accepted format by going to **Conversions ➜ Export OpenSSH Key**. Clicking "Save private key" will actually produce a file that, while it can be used by this tool again, is not compatible with the standard SSH process. To get the public key over to the server you can either click "Save public key", copy the file across to the server and add the key to `~/.ssh/authorized_keys` as outlined above, or just cut+paste the content from the textbox directly into the remote file. If you need more information about generating an SSH key pair, see the [useful links section](#useful-links). ## Creating the SSH key pair account {#create-ssh-account} 1. Navigate to **Deploy ➜ Manage ➜ Accounts** and click **ADD ACCOUNT**. 1. Select **SSH key pair** from the drop-down menu. 1. Give the account a name so you can easily identify it when you need to use the account. 1. Add a description. 1. Enter the username you will use to access the remote host. 1. Upload the private key to the Octopus Server. 1. Enter the passphrase for the private key if you created one. 1. If you want to restrict which environments can use the account, select only the environments that are allowed to use the account. 
If you don't select any environments, all environments will be allowed to use the account. 1. Click **SAVE**. The account is now ready to be used when you configure your [SSH deployment target](/docs/infrastructure/deployment-targets/linux/ssh-target). The server will confirm that this private key matches its public key at the start of each SSH connection. If you are storing the private key on disk it is recommended, but not mandatory, that you encrypt the key. ## Useful links {#useful-links} Due to the number and configurable nature of the various Linux distributions available, there are other dedicated sites that can provide more precise information & tutorials for your specific use case. - [PuTTY download page](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) has several useful Windows tools. - [ssh-keygen man page](https://linux.die.net/man/1/ssh-keygen). - [sshd\_config man page (Ubuntu)](http://manpages.ubuntu.com/manpages/hirsute/en/man5/sshd_config.5.html). - Great intro SSH keygen articles from [DigitalOcean](https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2), [GitHub](https://help.github.com/articles/connecting-to-github-with-ssh/) or [Atlassian](https://confluence.atlassian.com/display/STASH/Creating+SSH+keys). ## Learn more - [Linux blog posts](https://octopus.com/blog/tag/linux/1) # Amazon ECS cluster Source: https://octopus.com/docs/infrastructure/deployment-targets/amazon-ecs-cluster-target.md ECS Cluster targets are used by the [ECS steps](/docs/deployments/aws) to define the context in which deployments and scripts are run. :::div{.hint} Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html) for detailed instructions on how to provision a new ECS cluster. ::: :::div{.hint} From **Octopus 2022.2**, ECS Cluster targets can be discovered using tags on your cluster resource. 
::: ## Discovering ECS cluster targets Octopus can discover ECS cluster targets as part of your deployment using tags on your resource. :::div{.hint} From **Octopus 2022.3**, you can configure the well-known variables used to discover ECS Cluster targets when editing your deployment process in the Web Portal. See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information. ::: To discover targets use the following steps: - Add an AWS account variable named **Octopus.Aws.Account** to your project, or configure your worker with credentials that will allow your target to be discovered. See [AWS discovery configuration](/docs/infrastructure/deployment-targets/cloud-target-discovery/#aws) for more information on how to configure target discovery for AWS. - [Add tags](/docs/infrastructure/deployment-targets/cloud-target-discovery/#tag-cloud-resources) to your ECS cluster so that Octopus can match it to your deployment step and environment. - Add a `Deploy Amazon ECS Service` or `Update Amazon ECS Service` step to your deployment process. During deployment, the target tag on the step will be used along with the environment being deployed to, to discover cluster targets to deploy to. See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information. ## Creating an ECS cluster target 1. Navigate to **Infrastructure ➜ Deployment Targets**, and click **ADD DEPLOYMENT TARGET**. 2. Select **AWS** and click **ADD** on the Amazon ECS Cluster target type. 3. Enter a display name for the Amazon ECS Cluster. 4. Select at least one [environment](/docs/infrastructure/environments) for the target. 5. Select at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the target. 6. 
In the **Authentication** section (see [Authentication](#authentication) below for more information): - Select whether to use an AWS account configured in Octopus or to use credentials from the worker on which your deployment runs. - Select an AWS account if necessary. If you don't have an `AWS Account` defined yet, check our [documentation on how to set one up](/docs/infrastructure/accounts/aws). - Select whether to assume an IAM role during authentication. 7. In the **ECS Cluster** section: - Enter the AWS region where the ECS cluster is running in AWS. - Enter a cluster name that matches the cluster name running in your AWS region. ![ECS Cluster Deployment Target Settings](/docs/img/infrastructure/deployment-targets/images/aws-ecs-target-cluster.png) ### Authentication There are multiple authentication options supported for ECS clusters. #### Worker credentials Authentication can be configured to use credentials from the worker on which a deployment or cluster health check runs. AWS supports sourcing these credentials in several different ways, including environment variables and EC2 instance roles. See [Setting credentials in node.js](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html) for more information on the different ways credentials can be provided. To configure the ECS cluster to use worker credentials select the "Use credentials provided on the worker" option in the Credentials field. :::figure ![ECS Cluster Worker Credentials](/docs/img/infrastructure/deployment-targets/images/aws-ecs-target-worker-credentials.png) ::: #### AWS Account Authentication can be configured to use an [AWS Account](/docs/infrastructure/accounts/aws). To configure your ECS cluster to use an account select the "Use account" option in the Credentials field. 
:::figure ![ECS Cluster Account Credentials](/docs/img/infrastructure/deployment-targets/images/aws-ecs-target-account-credentials.png) ::: ### Assuming an IAM role AWS supports assuming a specific role when interacting with services, allowing you to configure granular permissions for a given operation. See [Using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) for more information on using and assuming roles. To configure the ECS cluster to use an assumed role, select the "Assume role" option in the Assume IAM role field. When assuming a role there are a number of options which can be configured. | Field | Description | Required | | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | | Role ARN | The ARN of the role to be assumed | Y | | Session Name | The name of the session to use when assuming the role. If this is not provided a default session will be automatically generated. | N | | Session Duration | The duration that the session will be available for. If this is not provided the default session duration for the role will be used. | N | | External ID | An external ID which can be provided to authorize third-party access. See the [AWS documentation on External Id](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) for more information. | N | ![ECS Cluster Assume Role](/docs/img/infrastructure/deployment-targets/images/aws-ecs-target-assume-role.png) # Azure targets Source: https://octopus.com/docs/infrastructure/deployment-targets/azure.md Octopus models your platform-as-a-service endpoints as deployment targets. Read more about PaaS targets in our blog post [PaaS deployment targets](https://octopus.com/blog/paas-targets). 
Octopus's Azure targets provide a reference to actual targets in your Azure infrastructure, allowing you to target several PaaS products by [target tag](/docs/infrastructure/deployment-targets/target-tags) during a deployment. Azure targets are added the same way as regular deployment targets and go through health checks, so you know the status of your Azure infrastructure targets and can spot any problems. The currently supported Azure targets are: - [Azure Service Fabric Clusters](/docs/infrastructure/deployment-targets/azure/service-fabric-cluster-targets). - [Azure Web Apps](/docs/infrastructure/deployment-targets/azure/web-app-targets) (also works for Azure Functions). - Azure Kubernetes Service via the [Kubernetes Agent](/docs/kubernetes/targets/kubernetes-agent) and [Kubernetes API](/docs/kubernetes/targets/kubernetes-api) deployment targets. - Azure VM via [Tentacle using Desired State Configuration (DSC)](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc). :::div{.warning} Azure Cloud Services are no longer supported in Octopus Deploy as of `2025.1`. Microsoft has deprecated these Azure services, and as of October 1st 2024 shut down existing Cloud Service deployments. ([Source](https://learn.microsoft.com/en-us/azure/cloud-services/cloud-services-choose-me)) ::: # Create Kubernetes Target Command Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/kubernetes-target.md ## Kubernetes Command: **_New-OctopusKubernetesTarget_** | Parameter | Cloud Provider | Value | |------------------------------------------|-----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| | `-name` | | Name for the Octopus deployment target. | | `-clusterUrl` | | The Kubernetes cluster URL. This must be a complete URL such as `https://mycluster.org`. 
| | `-octopusServerCertificateIdOrName` | | The name of the Octopus certificate to use as the cluster CA. | | `-octopusRoles` | | Comma separated list of [target tags](/docs/infrastructure/deployment-targets/target-tags) to assign. | | `-octopusAccountIdOrName` | Azure, AWS, GCE | The name of the Octopus account used for authentication with the cluster. This or the `-octopusClientCertificateIdOrName` option must be defined. | | `-octopusClientCertificateIdOrName` | | The name of the Octopus certificate used for authentication with the cluster. This or the `-octopusAccountIdOrName` option must be defined. | | `-clusterResourceGroup` | | When using an Azure account, this defines the name of the resource group that holds the AKS cluster. | | `-clusterAdminLogin` | Azure | Set to `$True` when building an AKS target to use the admin login. | | `-clusterName` | Azure, AWS | When using an AWS or Azure account, this defines the name of the EKS or AKS cluster. | | `-namespace` | | The default kubectl namespace. | | `-updateIfExisting` | | Will update an existing Kubernetes target with the same name, or create it if it doesn't exist. | | `-skipTlsVerification` | | The server's certificate will not be checked for validity. This will make your HTTPS connections insecure. | | `-octopusDefaultWorkerPoolIdOrName` | | Name or Id of the Worker Pool for the deployment target to use. (Optional). Added in 2020.6. | | `-healthCheckContainerImageFeedIdOrName` | | Name or Id of the feed that contains the health check container image. Added in 2021.2. | | `-healthCheckContainerImage` | | The name of the health check container image. Added in 2021.2. | | `-clusterProject` | GCE | The ID of the GCE project containing the GKE cluster to connect to. | | `-clusterRegion` | GCE | The name of the GKE cluster region (for regional clusters). | | `-clusterZone` | GCE | The name of the GKE cluster zone (for zonal clusters). 
| | `-clusterImpersonateServiceAccount` | GCE | Set to `$True` to impersonate service accounts when defining a GKE cluster. | | `-clusterServiceAccountEmails` | GCE | Defines the service account emails to assume when defining a GKE cluster. | | `-clusterUseVmServiceAccount` | GCE | Set to `$True` to use the service account assigned to the virtual machine hosting the GKE target worker. | | `-awsUseWorkerCredentials` | AWS | Will create a Kubernetes Target configured to authenticate to AWS using Worker Credentials. `-octopusAccountIdOrName` option must _not_ be defined. | | `-awsAssumeRoleArn` | AWS | Adds an IAM Role to AWS Credentials. Can only be used with an AWS Account in `-octopusAccountIdOrName` or with `-awsUseWorkerCredentials`. | | `-awsAssumeRoleSession` | AWS | Adds a Session Name to the IAM Role configuration. Can only be used when `-awsAssumeRoleArn` is used. | | `-awsAssumeRoleSessionDurationSeconds` | AWS | Adds a Session Duration in Seconds to the IAM Role Configuration. Can only be used when `-awsAssumeRoleArn` is used. | | `-awsAssumeRoleExternalId` | AWS | Adds an External Id to the IAM Role Configuration. Can only be used when `-awsAssumeRoleArn` is used. | ### Examples Create a target with a username/password or token account. ``` New-OctopusKubernetesTarget ` -name "The name of the target" ` -clusterUrl "https://k8scluster" ` -octopusRoles "The target tag" ` -octopusAccountIdOrName "The name of an account" ` -namespace "kubernetes-namespace" ` -updateIfExisting ` -skipTlsVerification True ``` When creating a target with a client certificate, the name of the certificate is required. 
``` New-OctopusKubernetesTarget ` -name "The name of the target" ` -clusterUrl "https://k8scluster" ` -octopusRoles "The target tag" ` -octopusClientCertificateIdOrName "The name of a certificate" ` -namespace "kubernetes-namespace" ` -updateIfExisting ` -skipTlsVerification True ``` When creating a target using an Azure account, the cluster URL and certificates are not required. The Azure resource group and AKS name are required. ``` New-OctopusKubernetesTarget ` -name "The name of the target" ` -octopusRoles "The target tag" ` -octopusAccountIdOrName "The name of an azure account" ` -clusterResourceGroup "AzureResourceGroupName" ` -clusterName "AzureAKSClusterName" ` -namespace "kubernetes-namespace" ` -updateIfExisting ` -skipTlsVerification True ``` When creating a target using an AWS account with optional IAM Role, the EKS cluster name is required. _Note:_ When using an IAM Role, Session, Session Duration and External ID are not required if the default is preferred. ``` New-OctopusKubernetesTarget ` -name "The name of the target" ` -octopusRoles "The target tag" ` -clusterUrl "https://k8scluster" ` -octopusAccountIdOrName "The name of an aws account" ` -clusterName "AwsEKSClusterName" ` -namespace "kubernetes-namespace" ` -updateIfExisting ` -skipTlsVerification True ` -awsAssumeRoleArn "MyIamRoleArnHere" ` -awsAssumeRoleSession "MySessionNameHere" ` -awsAssumeRoleSessionDurationSeconds 1200 ` -awsAssumeRoleExternalId "MyExternalIdHere" ``` When creating a target using AWS Worker Credentials, use the `-awsUseWorkerCredentials` option. The IAM Role options in the example above can also be used. _Note:_ In this case, no `-octopusAccountIdOrName` is required. 
``` New-OctopusKubernetesTarget ` -name "The name of the target" ` -octopusRoles "The target tag" ` -clusterUrl "https://k8scluster" ` -clusterName "AwsEKSClusterName" ` -namespace "kubernetes-namespace" ` -updateIfExisting ` -skipTlsVerification True ` -awsUseWorkerCredentials ``` When creating a GKE target, the GCE project, region or zone, and cluster names are required: ``` New-OctopusKubernetesTarget ` -name dynamicGKE ` -octopusRoles gke ` -environment Development ` -octopusAccountIdOrName Google ` -clusterProject kubernetes-demo-198002 ` -clusterRegion australia-southeast1 ` -clusterName mattc-test ` -updateIfExisting ``` :::div{.hint} If your process creates dynamic deployment targets from a script, and then deploys to those targets in a subsequent step, make sure you add a full [health check](/docs/projects/built-in-step-templates/health-check) step for the role of the newly created targets after the step that creates and registers the targets. This allows Octopus to ensure the new targets are ready for deployment by staging packages required by subsequent steps that perform the deployment. ::: # Sudo commands Source: https://octopus.com/docs/infrastructure/deployment-targets/linux/sudo-commands.md By default, most distros will require the user to provide a password when executing a command with the security privileges of another user. This behavior takes place typically when a user wishes to execute some command as the superuser or root by using the `sudo` command. 
The scripts run by Octopus Deploy run in the background with no opportunity for a password prompt, so executing an innocuous sudo command such as: ```bash sudo echo "I HAVE THE POWER" ``` can result in the script failing with `exit code 1`, and the message to stderr: ```bash sudo: no tty present and no askpass program specified ``` in Ubuntu, and in Red Hat: ```bash sudo: sorry you must have a tty to run sudo ``` ## Enabling sudo command The recommended way to enable these commands to be run is to disable the password prompt for the user account used for deployments. ### Disable password prompt Running the following command (from a shell in interactive mode so you can enter any required passwords) adds a file that is read in conjunction with the sudoers file to configure valid sudo policies. ```bash sudo visudo -f /etc/sudoers.d/octopus ``` Add the following line to this file, substituting `<username>` with the appropriate user used by the Octopus Deploy deployment target or worker: ```bash <username> ALL=(ALL) NOPASSWD:ALL ``` Further information regarding how this file is used and how to make the configuration more precise can be found at the following links. - [visudo manual](http://www.sudo.ws/man/1.8.13/visudo.man.html) - [sudoers manual](http://www.sudo.ws/man/1.8.13/sudoers.man.html) - [simple configuration explanation](http://superuser.com/questions/357467/what-do-the-alls-in-the-line-admin-all-all-all-in-ubuntus-etc-sudoers#357472) If you are using a distro such as Ubuntu, you should now be able to use the sudo command throughout your scripts. ### Disable RequireTTY Although sudo may no longer require a password, some distros, such as CentOS and its derivatives, are configured by default to still require interactive input, or tty, when running sudo. 
To disable this, edit your `/etc/sudoers` file and change the line ```bash Defaults requiretty ``` to ```bash Defaults !requiretty ``` Alternatively you can make this configuration more precise by targeting specific users or groups as outlined at [How to disable requiretty for a single command in sudoers](http://unix.stackexchange.com/questions/79960/how-to-disable-requiretty-for-a-single-command-in-sudoers). (By default, Ubuntu does not include this configuration, so this modification should not be required.) :::div{.problem} **Be Selective with Permissions** Ideally your Octopus Deploy SSH endpoint should be configured with a special user solely for the purposes of running deployments. In this case you should consider limiting that user's sudo capabilities to just those commands needed to execute the deployment scripts. ::: ## Different Distributions use Different Conventions While the above instructions should work on common platforms like Ubuntu or Red Hat, you may need to double-check the specific instructions relating to SSH authentication on the target operating system. There are many Linux based distributions, some of which have their own unique way of doing things. For this reason, we cannot guarantee that these instructions will work in every case. ## Learn more - [Linux blog posts](https://octopus.com/blog/tag/linux/1) # Octopus Tentacle in a Container Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/octopus-tentacle-container.md Running an Octopus Tentacle inside a container may be preferable in some environments where installing one directly on the host is not an option. Octopus publishes both `windows/amd64` and `linux/amd64` Docker images for Tentacle, and they are available on [DockerHub](https://hub.docker.com/r/octopusdeploy/tentacle).
The Octopus Tentacle Docker image can be run in either [polling](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) or [listening](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended) mode. :::div{.info} Tentacles set up this way will run *inside a container* and script execution will not happen on the host itself. For this reason, Octopus Tentacles inside a container may not be appropriate for many deployment tasks. ::: When an Octopus Tentacle container starts up, it will attempt to invoke the [`register-with`](/docs/octopus-rest-api/tentacle.exe-command-line/register-with/) command to connect and add itself as a machine to that server with the provided [target tags](/docs/infrastructure/deployment-targets/target-tags) and environments. This registration will occur on every startup, so you may end up with multiple registered instances if you stop and start a container. Our goal is to update this image to de-register the Tentacle when the container receives a `SIGTERM` signal. In the meantime you may want to use [machine policies](/docs/infrastructure/deployment-targets/machine-policies) to remove the duplicated targets.
**Deployment Target** ```powershell docker run --interactive --detach ` --name OctopusTentacle ` --publish 10933:10933 ` --env ACCEPT_EULA="Y" ` --env ListeningPort="10933" ` --env ServerApiKey="API-XXXXXXXX" ` --env TargetEnvironment="Development" ` --env TargetRole="container-server" ` --env ServerUrl="http://10.0.0.1:8080" ` octopusdeploy/tentacle ```
**Worker** ```powershell docker run --interactive --detach ` --name OctopusWorker ` --publish 10933:10933 ` --env ACCEPT_EULA="Y" ` --env ListeningPort="10933" ` --env ServerApiKey="API-XXXXXXXX" ` --env TargetWorkerPool="LinuxWorkers" ` --env ServerUrl="http://10.0.0.1:8080" ` octopusdeploy/tentacle ```
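The examples above register listening Tentacles. As a sketch only, a polling variant might look like the following; the server URL, API key, and names are placeholders, and setting `ServerPort` (per the environment variable reference in the Configuration section) implies a polling Tentacle, so no `--publish` flag is needed for an inbound connection:

```powershell
# Sketch: a polling Tentacle container. All addresses, keys, and names
# below are illustrative placeholders - substitute your own values.
docker run --interactive --detach `
  --name OctopusPollingTentacle `
  --env ACCEPT_EULA="Y" `
  --env ServerApiKey="API-XXXXXXXX" `
  --env TargetEnvironment="Development" `
  --env TargetRole="container-server" `
  --env ServerUrl="http://10.0.0.1:8080" `
  --env ServerPort="10943" `
  octopusdeploy/tentacle
```

Because the connection is initiated from the Tentacle to the Octopus Server, the container does not need any inbound port exposed.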
## Configuration When running an Octopus Tentacle Image, the following values can be provided to configure the running Octopus Tentacle instance. ### Environment Variables Read Docker [docs](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file) about setting environment variables. | Name | | | ------------- | ------- | |**DISABLE_DIND**|Setting `DISABLE_DIND` to `Y` will disable Docker-in-Docker (used for [execution containers for workers](/docs/projects/steps/execution-containers-for-workers)) when the container is run. **Note:** This requires the image to be launched with privileged permissions. See [this section](#using-execution-containers-dind) for more information| |**ServerApiKey**|The API Key of the Octopus Server the Tentacle should register with| |**ServerUsername**|If not using an API key, the user to use when registering the Tentacle with the Octopus Server| |**ServerPassword**|If not using an API key, the password to use when registering the Tentacle| |**ServerUrl**|The URL of the Octopus Server the Tentacle should register with| |**Space**|The name of the space which the Tentacle will be added to. Defaults to the default space| |**TargetEnvironment**|Comma delimited list of environments to add this target to| |**TargetRole**|Comma delimited list of [target tags](/docs/infrastructure/deployment-targets/target-tags) to add to this target| |**TargetWorkerPool**|Comma delimited list of worker pools to add this target to (not to be used with the environment or target tag variables).| |**TargetName**|Optional Target name, defaults to container generated host name| |**TargetTenant**|Comma delimited list of tenants to add to this target| |**TargetTenantTag**|Comma delimited list of tenant tags to add to this target| |**TargetTenantedDeploymentParticipation**|The tenanted deployment mode of the target. Allowed values are `Untenanted`, `TenantedOrUntenanted`, and `Tenanted`. 
Defaults to `Untenanted`| |**MachinePolicy**|The name of the machine policy that will apply to this Tentacle. Defaults to the default machine policy| |**ServerCommsAddress**|The URL of the Octopus Server that the Tentacle will poll for work. Defaults to `ServerUrl`. Implies a polling Tentacle| |**ServerPort**|The port on the Octopus Server that the Tentacle will poll for work. Defaults to `10943`. Implies a polling Tentacle| |**ListeningPort**|The port that the Octopus Server will connect back to the Tentacle with. Defaults to `10933`. Implies a listening Tentacle| |**PublicHostNameConfiguration**|How the URL that the Octopus Server will use to communicate with the Tentacle is determined. Can be `PublicIp`, `FQDN`, `ComputerName` or `Custom`. Defaults to `PublicIp`| |**CustomPublicHostName**|If PublicHostNameConfiguration is set to `Custom`, the host name that the Octopus Server should use to communicate with the Tentacle| ### Exposed Container Ports Read the [Docker docs](https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose) about exposing ports. :::div{.warning} **Listening Port Breaking Change:** On Linux containers, prior to version `6.1.1271` the internal listening port was set by the `ListeningPort` environment variable. Any containers which previously exposed Tentacle on a port other than `10933` will need to have their port configuration updated if updating to a version `>=6.1.1271`. For example if the container was run with `-p 10934:10934` this should be updated to `-p 10934:10933`. ::: | Name | | | ------------- | ------- | |**10933**|Port Tentacle will be listening on (if in listening mode)| ### Volume Mounts Read the Docker [docs](https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only) about mounting volumes. 
| Name | | | ------------- | ------- | |**C:\Applications**|Default directory to deploy applications to| ### Using execution containers for Workers {#using-execution-containers-dind} By default, Docker containers are "unprivileged" and cannot run a Docker daemon inside a Docker container. Unless disabled, the Octopus Tentacle image attempts to run Docker-in-Docker to support [execution containers for workers](/docs/projects/steps/execution-containers-for-workers). This requires the image to be launched with [privileged permissions](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities): ```bash docker run --privileged ``` If you plan to host Octopus Tentacle in Kubernetes, you should set the `privileged` flag to `true` in the `containers` YAML section: ```yaml containers: - name: octopus-tentacle image: octopusdeploy/tentacle securityContext: privileged: true ``` :::div{.hint} Setting the environment variable `DISABLE_DIND` to `Y` prevents Docker-in-Docker from being run when the container is booted, and will prevent the execution containers feature from working successfully. ::: ## Learn more - [Docker blog posts](https://octopus.com/blog/tag/docker/1) # Permissions required for the Tentacle Windows Service Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/tentacle-permissions.md By default, the Tentacle Windows Service runs under the Local System context. You can configure Tentacle to run under a different user account by modifying the service properties via the Services MMC snap-in (**services.msc**). The account that you use requires, at a minimum: - **Log on as a service** right on the current machine - [learn more](https://technet.microsoft.com/en-us/library/dn221981(v=ws.11).aspx). - Rights to enumerate the `Local Machine` certificate store. - Permissions to load the private key of the Tentacle X.509 certificate from the `Local Machine` certificate store. 
- Read/Write permissions to the Tentacle "Home directory" that you selected when Tentacle was installed (typically, **C:\Octopus**). - Rights to manage Windows Services (start/stop) - [learn more](https://social.technet.microsoft.com/wiki/contents/articles/5752.how-to-grant-users-rights-to-manage-services-start-stop-etc.aspx). Please be aware that to perform automatic Tentacle updates you need an account with [extra permissions](/docs/infrastructure/deployment-targets/machine-policies/#tentacle-update-account). In addition, since you are probably using Tentacle to install software, you'll need to make sure that the service account has permissions to actually install your software. This totally depends on your applications, but it might mean: - Permissions to modify IIS (C:\Windows\system32\inetsrv). - Permissions to connect to a SQL Server database. :::div{.problem} If you **Reinstall** a Tentacle using the Tentacle Manager, the Windows Service account will revert to Local System. ::: ## Using a Managed Service Account (MSA) You can run Tentacle using a Managed Service Account (MSA): 1. Install the Tentacle and make sure it is running correctly using one of the built-in Windows Service accounts or a Custom Account. 2. Reconfigure the `Tentacle` Windows Service to use the MSA, either manually using the Service snap-in, or using `sc.exe config "OctopusDeploy Tentacle" obj= Domain\Username$`. 3. Restart the Tentacle Windows Service. Learn about [using Managed Service Accounts](https://technet.microsoft.com/en-us/library/dd548356(v=ws.10).aspx). # Signing Keys Source: https://octopus.com/docs/infrastructure/signing-keys.md Octopus uses a Signing Key to sign the generated authorization request tokens used in the authentication flow for OpenID Connect. The public signing key is used by the resource server to validate the token supplied by Octopus. The signing keys by default have a 90-day expiry and will be rotated when they expire. 
:::div{.warning} Since OpenID Connect authentication is still an EAP feature, there is no User Interface to manage or view the Signing Keys. The following API endpoints can be used to manage the Signing Keys: List all keys: `GET` `/api/signingkeys/v1` Rotate the active key: `POST` `/api/signingkeys/rotate/v1` Revoke a signing key: `POST` `/api/signingkeys/{id}/revoke/v1` ::: # Scaling Behavior Source: https://octopus.com/docs/infrastructure/workers/kubernetes-worker/scaling-behaviour.md We have developed Kubernetes worker as a scalable solution for running your deployment tasks efficiently. The Worker is designed to make the most of your infrastructure resources, minimizing usage when deployments are not active while allowing you to run many deployments simultaneously. To achieve this, the worker creates temporary pods for each new deployment task. When there are no deployment tasks running, the Kubernetes worker will shrink to just a couple of pods, consuming minimal resources on your cluster. Any released resources can then be utilized by other applications on the cluster. If you don't have other applications that can make use of the released resources, you can take an additional step towards scalability by configuring the Kubernetes cluster to scale together with the worker, adding and removing nodes as needed. This article will provide a detailed explanation of how to configure Kubernetes autoscaling with Kubernetes worker. ## Kubernetes Horizontal Scaling ### Default In a Kubernetes cluster, a pod creation request includes the resources (cpu/memory) the pod requires. If sufficient resources are not available, the pod is created in the "Pending" state. When required resources are available, the pod is started. Resources become available through 2 methods: 1. Pod termination, returning the allocated resources to the cluster 2. More nodes are added to the cluster Kubernetes decides when to start a pod based on resource _requests_, not usage. 
This means that if a pod uses more resources than it requested, other pods may be starved of resources. When using the inbuilt horizontal-scaler it is important that a pod's requested resources roughly match expected usage. ## How a Kubernetes worker requests resources A script-pod requests a default of 2.5% of a CPU core and 100 megabytes of memory (`.Values.scriptPods.resources.requests`). As such, a Kubernetes cluster will (attempt to) provision a new node when **40** script-pods are running simultaneously. For most cases this will result in a usable system, but it is dependent on the work being performed, and the capabilities of the nodes. If the cluster hosting your workers exhibits high CPU load, increasing the script-pod's requested resources may result in better performance. Resource requests and limits can be defined manually via the `.Values.scriptPods.resources` value; its content follows existing [Kubernetes structures](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/). ### Advanced Options Kubernetes supports [Kubernetes Event Driven Autoscaling (KEDA)](https://keda.sh/), which allows workloads to be scaled according to user-defined rules (e.g. CPU usage, rather than resource requests). This may be an appropriate solution for complex deployment systems. # Worker pools Source: https://octopus.com/docs/infrastructure/workers/worker-pools.md Worker pools are groups of [workers](/docs/infrastructure/workers). When a task is assigned to a worker pool, the task will be executed by one of the workers in that pool. You can manage your worker pools by navigating to **Infrastructure ➜ Worker Pools** in the Octopus Web Portal. ## Multiple Pools You can create multiple worker pools for different purposes. Worker pools are available to all projects in a [space](/docs/administration/spaces), and can be dynamically selected using [worker pool variables](/docs/projects/variables/worker-pool-variables). 
All users can see which pools are available and whether there are workers in the pools. Only a user with the `ConfigureServer` permission can see the worker machines or edit workers or pools. Using multiple worker pools allows you to configure the workers in your pools for the tasks they will be assigned. For instance, depending on your team's needs you might configure worker pools in the following ways: - Pools for scripts that expect different operating systems (Linux vs. Windows). - Pools for scripts that rely on different versions of cloud CLI tools: Azure CLI, AWS CLI, Terraform CLI, Kubernetes CLI (kubectl) etc. - Pools in special subnets or with access to protected servers, such as for database deployments or in DMZs For your default pool it might be enough that the workers are Tentacles running PowerShell 5, but you might have two teams working with different versions of an SDK and so provision worker pools with workers running the appropriate SDK for each team. ## Default Worker Pool You can specify a worker pool which will be selected by default when creating new worker steps, and will be used for any worker steps that don't specify a pool. There is always a default worker pool for each [space](/docs/administration/spaces). The default pool can't be deleted, but you can swap which pool is the default. On [self-hosted Octopus](/docs/getting-started#self-hosted-octopus), an empty `Default Worker Pool` is provided. - Initially this pool is empty, which means the [built-in worker](/docs/infrastructure/workers/built-in-worker) will be used - When you add workers to the default worker pool, the built-in worker will be disabled. This means any deployment processes that previously used the built-in worker on the Octopus Server will automatically start using workers in the worker pool. On [Octopus Cloud](/docs/octopus-cloud), the initial default pool is a [dynamic worker pool](/docs/infrastructure/workers/dynamic-worker-pools) using the `Windows (default)` image. 
A pool using the `Ubuntu (default)` image is also provided. ## How Workers are Selected When a step that requires a worker is executed, Octopus first determines what worker pool the step should use, and then selects a worker from that pool to execute the step. For a step that requires a worker, Octopus selects: - The default pool, if no pool is selected (or the step targets the Octopus Server). - The specified pool. When the pool has been selected, Octopus selects a worker from the pool: - A healthy worker from the selected pool. - The built-in worker, if the step resolves to the default pool, but there are no workers in the default pool. Note, if there are unhealthy workers in the pool, the built-in worker will **not** run. It will only run if there are no workers in the pool. Octopus makes no other guarantees about which worker is picked from a pool. The step will fail for lack of a worker if: - The step resolves to the built-in worker but it has been disabled. - There are no healthy workers in the pool. - Octopus selects a healthy worker from the pool, but during the deployment process can't contact the worker. ## Add new Worker Pools Only users with the `ConfigureServer` permission can add or edit worker pools. 1. Navigate to **Infrastructure ➜ Worker Pools** in the **Octopus Web Portal** and click **ADD WORKER POOL**. 1. Give the worker pool a meaningful name. 1. If this pool should be the default worker pool, expand the **Default Worker Pool** section and select the default check-box. 1. Give the worker pool a description. You can add as many worker pools as you need. ## Configuring a step to use a Worker Pool If there are worker pools configured, any step that requires a worker can be targeted at any pool. It's possible to use multiple pools in a single deployment process, for example, if you configured one pool of workers for script steps and another for Azure deployments. 
Once there are worker pools configured, the **Octopus Web Portal** will ensure a pool is set for any step that requires a worker. :::div{.hint} **What's shown in the UI?** The **Octopus Web Portal** is worker pool aware. If you haven't configured pools or workers, the only option for steps that require a worker is the built-in worker, so the UI will only display the option to run a step on the `Octopus Server`. In this case, Azure, AWS and Terraform steps will assume the default and display no choice. If you have configured extra workers or pools, script, Azure, AWS and Terraform steps will allow the selection of a worker pool. ::: ## Configuring a cloud target to have a Default Worker Pool Cloud targets such as [Cloud regions](/docs/infrastructure/deployment-targets/cloud-regions/) and [Kubernetes targets](/docs/kubernetes/targets/kubernetes-api) can set their own default worker pool, both for deployment steps and [health checks](/docs/infrastructure/deployment-targets/machine-policies/#health-check). If a step is targeted at a cloud target and the worker pool selected for the step is the default pool, the cloud target's default pool is used. This allows setting up workers that are co-located with cloud targets. Another option is locking down cloud targets so the only machines that can deploy are co-located polling workers. ## Variables When a step is run on a worker, the following variables are available: | Name and description | Example | | -------------------- | ------------------------| | **`Octopus.WorkerPool.Id`**
The Id of the pool | WorkerPools-1 | | **`Octopus.WorkerPool.Name`**
The name of the pool | Default Worker Pool | ## Removing worker pools For projects using Config as Code, it's up to you to take care to avoid deleting any worker pools required by your deployments or runbooks. See our [core design decisions](/docs/projects/version-control/unsupported-config-as-code-scenarios#core-design-decision) for more information. ## Workers Q&A *I've added only a single worker to the default pool, won't that machine get overworked?* Your server has a task cap governing how many deployment tasks can run in parallel. Variable `Octopus.Action.MaxParallelism` then governs the amount of parallelism Octopus allows within a deployment task. The amount of work the built-in worker could be asked to do at once is governed by these two numbers. With external workers, it's the same, so a single external worker is only being asked to do the same amount of work the built-in was doing. However, workers do give you the capability to spread that work over a number of machines, and to scale up how much work is being done. *If the workers in the default pool aren't healthy will the built-in worker run?* No, the built-in worker can only run if there are no workers in the default pool. *Can I leave the default pool empty, so some scripts do run on the server, but also provision other pools?* Yes, the existence of other pools doesn't affect the behavior of the default pool. *How can I cordon off my worker pools so each team only has access to certain pools?* With the [Spaces](/docs/administration/spaces) feature of Octopus Deploy you can partition your Octopus Server so that each of your teams can only access the projects, environments, and infrastructure, including workers, that are relevant to them. *I see "leases" being taken out on particular workers in the deployment logs, can I get an exclusive lease for my deployment and clean off the worker once I'm done?* Not yet. At the moment, the only time an exclusive lease is taken out is if a Tentacle upgrade runs on a worker. 
We are thinking about features that allow exclusive access for deployments. ## Learn more - [Worker blog posts](https://octopus.com/blog/tag/workers/1) - [Worker pool variables](/docs/projects/variables/worker-pool-variables) # GCP File Storage Source: https://octopus.com/docs/installation/file-storage/gcp-file-storage.md Google Cloud offers its own managed file storage option known as [Filestore](https://cloud.google.com/filestore); however, it's only accessible via the [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System) (v3). :::div{.hint} Typically, NFS shares are better suited to Linux or macOS clients, although it is possible to access NFS shares on Windows Servers. NFS shares on Windows are mounted per-user and are not persisted when the server reboots. It's for these reasons that Octopus recommends using SMB storage over NFS when running on Windows Servers. ::: You can see the different file server options Google Cloud has in their [File Storage on Compute Engine](https://cloud.google.com/architecture/filers-on-compute-engine) overview. ## Filestore using NFS Once you have [created a Filestore instance](https://cloud.google.com/filestore/docs/creating-instances), the best option is to mount the NFS share using the `LocalSystem` account, and then create a [symbolic link](https://en.wikipedia.org/wiki/Symbolic_link) pointing at a local folder, for example `C:\OctopusShared\`, for the Artifacts, Packages, TaskLogs, Imports, and EventExports folders which need to be available to all nodes. See more information about [Windows NFS and Octopus Deploy](/docs/installation/file-storage/windows-nfs). ## High Availability With Octopus Deploy's [High Availability](/docs/administration/high-availability) functionality, you connect multiple nodes to the same database and file storage. 
Octopus Server makes specific assumptions about the performance and consistency of the file system when accessing log files, performing log retention, storing deployment packages and other deployment artifacts, exported events, and temporary storage when communicating with Tentacles. What that means is: - Octopus Deploy is sensitive to network latency. It expects the file system to be hosted in the same data center as the virtual machines or container hosts running the Octopus Deploy Service. - It is extremely rare for two or more nodes to write to the same file at the same time. - It is common for two or more nodes to read the same file at the same time. In our experience, you will have the best experience when all the nodes and the file system are located in the same data center. Modern network storage devices and operating systems handle almost all the scenarios a highly available instance of Octopus Deploy will encounter. ## Disaster Recovery For disaster recovery scenarios, [we recommend leveraging a hot/cold configuration](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). To achieve this with GCP you have several options available. Further details on the redundancy options available for Filestore can be found [here](https://cloud.google.com/architecture/filers-on-compute-engine#filestore-basic). ### Zonal Zonal availability provided by GCP will replicate your data across a single zone. This will protect against simple hardware failure but provides no protection against a zone or region failure. ### Regional Regional availability provided by GCP will replicate your data across several zones within the same region. This will protect against the failure of one or more zones but provides no protection against a region failure. 
Whether you use Zonal or Regional availability, it would be necessary to create [backups](https://cloud.google.com/filestore/docs/backups#backing_up_data_for_disaster_recovery) of the Filestore in a different region to ensure data resilience. The backup can be configured within a [scheduled job](https://cloud.google.com/filestore/docs/scheduling-backups). :::div{.warning} In the event of a failure of the primary region, it would be necessary to restore the backup of your Filestore to a secondary region and reconfigure Octopus to point to the new region. There may be some data loss to consider in this scenario based on how often your Filestore backups are taken. ::: # GCP Load Balancers Source: https://octopus.com/docs/installation/load-balancers/gcp-load-balancers.md To distribute traffic to the Octopus web portal on multiple nodes, you need to use a load balancer. Google Cloud provides two options you should consider to distribute HTTP/HTTPS traffic to your Compute Engine instances. * [External HTTP(S) Load Balancer](https://cloud.google.com/load-balancing/docs/https) * [External TCP Network Load Balancer](https://cloud.google.com/load-balancing/docs/network) If you are *only* using [Listening Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended), we recommend using the HTTP(S) Load Balancer. However, [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) aren't compatible with the HTTP(S) Load Balancer, so instead, we recommend using the Network Load Balancer. It allows you to configure TCP Forwarding rules on a specific port to each compute engine instance, which is [one way to route traffic to each individual node](#using-a-unique-port) as required for Polling Tentacles when running Octopus High Availability. 
To use Network Load Balancers exclusively for Octopus High Availability with Polling Tentacles you'd potentially need to configure multiple load balancers / forwarding rules: - One to serve the Octopus Web Portal HTTP traffic to your backend pool of Compute engine instances: ![Network Load Balancer for Web portal](/docs/img/administration/high-availability/design/images/gcp-octopus-nlb-web-portal.png) - One *for each* Compute engine instance for Polling Tentacles to connect to: ![Network Load Balancer for Polling Tentacles](/docs/img/administration/high-availability/design/images/gcp-octopus-nlb-polling.png) With Network Load Balancers, you can configure a health check to ensure your Compute engine instances are healthy before traffic is served to them: :::figure ![Network Load Balancer health check](/docs/img/administration/high-availability/design/images/gcp-octopus-nlb-health-check.png) ::: # Octopus Server in Kubernetes Source: https://octopus.com/docs/installation/octopus-server-linux-container/octopus-in-kubernetes.md One of the driving forces behind creating the Octopus Server Linux Container was so Octopus could run in a container in Kubernetes for [Octopus Cloud](/docs/octopus-cloud). With the release of the Octopus Server Linux Container image in **2020.6**, this option is available for those who want to host Octopus in their own Kubernetes clusters. This page describes how to run Octopus Server in Kubernetes, along with platform specific considerations when using different Kubernetes providers such as Azure AKS and Google GKE. Since [Octopus High Availability](/docs/administration/high-availability) (HA) and Kubernetes go hand in hand, this guide will show how to support scaling Octopus Server instances with multiple HA nodes. It assumes a working knowledge of Kubernetes concepts, such as Pods, Services, Persistent Volume Claims, and StatefulSets. 
## Pre-requisites \{#pre-requisites} Whether you are running Octopus in a Container using Docker or Kubernetes, or running it on Windows Server, there are a number of items to consider when creating an Octopus High Availability cluster: - A Highly available [SQL Server database](/docs/installation/sql-server-database) - A shared file system for [Artifacts, Packages, Task Logs, and Event Exports](/docs/administration/managing-infrastructure/server-configuration-and-file-storage/#file-storage) - A [Load balancer](/docs/administration/high-availability/load-balancing) for traffic to the Octopus Web Portal - Access to each Octopus Server node for [Polling Tentacles](/docs/administration/high-availability/polling-tentacles-with-ha) The following sections describe these in more detail. ### SQL Server Database \{#sql-database} How the database is made highly available is really up to you; to Octopus, it's just a connection string. We are not experts on SQL Server high availability, so if you have an on-site DBA team, we recommend using them. There are many [options for high availability with SQL Server](https://msdn.microsoft.com/en-us/library/ms190202.aspx), and [Brent Ozar also has a fantastic set of resources on SQL Server Failover Clustering](http://www.brentozar.com/sql/sql-server-failover-cluster/) if you are looking for an introduction and practical guide to setting it up. Octopus High Availability works with: - [SQL Server Failover Clusters](https://docs.microsoft.com/en-us/sql/sql-server/failover-clusters/high-availability-solutions-sql-server) - [SQL Server AlwaysOn Availability Groups](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server) If you plan to host Octopus in Kubernetes using one of the managed Kubernetes platforms from cloud providers, for example AWS, Azure, or GCP, then a good option to consider for your SQL Server database is their database PaaS offering. 
For more details on the different hosted database options, refer to the documentation for each Cloud provider:

- [AWS RDS](https://aws.amazon.com/rds/sqlserver/)
- [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database)
- [Google Cloud SQL for SQL Server](https://cloud.google.com/sql/sqlserver)

#### Running SQL in a Container \{#running-sql-in-container}

It's possible to run SQL Server in a container. This can be useful when running a Proof of Concept (PoC) with Octopus in Kubernetes.

The following YAML creates a single instance of SQL Server Express that can be deployed to a Kubernetes cluster. It creates a [persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes) to store the database files, a [service](https://kubernetes.io/docs/concepts/services-networking/service/) to expose the database internally, and the database itself.

:::div{.warning}
Although Octopus [supports SQL Server Express](/docs/installation/sql-server-database/#sql-server-database), the edition has limitations. For more details, see the [Microsoft SQL Server editions](https://docs.microsoft.com/sql/sql-server/editions-and-components-of-sql-server-version-15?view=sql-server-ver15#-editions) documentation.
:::

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
spec:
  type: ClusterIP
  ports:
    - port: 1433
      targetPort: 1433
      protocol: TCP
  selector:
    app: mssql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
  labels:
    app: mssql
spec:
  selector:
    matchLabels:
      app: mssql
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        fsGroup: 10001
      volumes:
        # Kubernetes volume names must be DNS-1123 labels, so no underscores
        - name: mssql-db
          persistentVolumeClaim:
            claimName: mssql-data
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: Express
            - name: ACCEPT_EULA
              value: 'Y'
            - name: SA_PASSWORD
              value: Password01!
          volumeMounts:
            - name: mssql-db
              mountPath: /var/opt/mssql
```

:::div{.hint}
**Change the SA Password:**
If you use the YAML definition above, remember to change the `SA_PASSWORD` from the value used here.
:::

### Load balancer \{#load-balancer}

A Load balancer is required to direct traffic to the Octopus Web Portal, and optionally a way to access each of the Octopus Server nodes in an Octopus High Availability cluster may be required if you're using [Polling Tentacles](/docs/administration/high-availability/polling-tentacles-with-ha).

### Octopus Web Portal load balancer \{#octopus-web-portal-load-balancer}

The Octopus Web Portal is a React single page application (SPA) that can direct all backend requests to any Octopus Server node. This means we can expose all Octopus Server nodes through a single load balancer for the web interface.
The following YAML creates a load balancer service directing web traffic on port `80` to pods with the label `app:octopus`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: octopus-web
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: octopus
```

#### Octopus Server Node load balancer \{#octopus-node-load-balancers}

Unlike the Octopus Web Portal, Polling Tentacles must be able to connect to each Octopus node individually to pick up new tasks. Our Octopus HA cluster assumes two nodes, therefore a load balancer is required for each node to allow direct access.

The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80`, Polling Tentacle traffic on port `10943`, and gRPC traffic on port `8443`.

The `octopus-0` load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: octopus-0
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: tentacle
      port: 10943
      targetPort: 10943
      protocol: TCP
    # Port names must be lowercase in Kubernetes
    - name: grpc
      port: 8443
      targetPort: 8443
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: octopus-0
```

The `octopus-1` load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: octopus-1
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: tentacle
      port: 10943
      targetPort: 10943
      protocol: TCP
    - name: grpc
      port: 8443
      targetPort: 8443
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: octopus-1
```

Note the selectors of:

- `statefulset.kubernetes.io/pod-name: octopus-0` and
- `statefulset.kubernetes.io/pod-name: octopus-1`

These labels are added to pods created as part of a [Stateful Set](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset), and the values are the combination of the Stateful Set name and the pod index.
For more information on Polling Tentacles with High Availability refer to our [documentation](/docs/administration/high-availability/polling-tentacles-with-ha) on the topic.

### File Storage \{#file-storage}

To share common files between the Octopus Server nodes, we need access to a minimum of four shared volumes that multiple pods can read from and write to simultaneously:

- Artifacts
- Packages
- Task Logs
- Event Exports

These are created via [persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes) with an [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) of `ReadWriteMany` to indicate they are shared between multiple pods.

Most of the YAML in this guide can be used with any Kubernetes provider. However, the YAML describing file storage can differ between Kubernetes providers, as they typically expose different names for their shared file systems via the `storageClassName` property.

:::div{.hint}
To find out more about storage classes, refer to the [Kubernetes Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation.
:::

While it's possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.

The next sections describe how to create file storage for use with Octopus running in Kubernetes using different Kubernetes providers to dynamically provision file storage.
#### AKS storage \{#aks-storage}

The following YAML creates the shared persistent volume claims that will host the artifacts, built-in feed packages, the task logs, and event exports using the `azurefile` storage class, which is specific to Azure AKS:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifacts-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: repository-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-logs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: event-exports-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
```

#### GKE storage \{#gke-storage}

The following YAML creates the shared persistent volume claims that will host the artifacts, built-in feed packages, the task logs, and event exports using the `standard-rwx` storage class from the Google [Filestore CSI driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver).

:::div{.hint}
**GKE Cluster version pre-requisite:**
To use the Filestore CSI driver, your clusters must use **GKE version 1.21 or later**. The Filestore CSI driver is supported for clusters using Linux.
:::

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifacts-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: repository-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-logs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: event-exports-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Gi
```

If you are running a GKE cluster in a non-default VPC network in Google Cloud, you may need to define your own storage class specifying the network name. The following YAML shows creating a storage class that can be used with a non-default VPC network in GKE called `my-custom-network-name`:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-custom-network-csi-filestore
provisioner: filestore.csi.storage.gke.io
parameters:
  # "CIDR range to allocate Filestore IP Ranges from"
  # reserved-ipv4-cidr: 192.168.92.22/26
  #
  # standard (default) or premier or enterprise
  # tier: premier
  #
  # Name of the VPC. Note that non-default VPCs require special firewall rules to be setup.
  network: my-custom-network-name
allowVolumeExpansion: true
```

:::div{.hint}
Firewall rules may also need to be configured to allow the Kubernetes cluster access to the Filestore. Refer to the [Filestore Firewall rules](https://cloud.google.com/filestore/docs/configuring-firewall) for further details.
:::

Once the storage class has been defined, you can mount your persistent volume claims using the name of the storage class. In the example above that was named `my-custom-network-csi-filestore`.
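For example, the earlier `artifacts-claim` could reference the custom storage class instead of `standard-rwx` (a sketch; the class name comes from the StorageClass definition above):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifacts-claim
spec:
  accessModes:
    - ReadWriteMany
  # References the custom StorageClass defined above
  storageClassName: my-custom-network-csi-filestore
  resources:
    requests:
      storage: 1Gi
```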
### Sharing a single volume \{#sharing-single-volume}

Defining multiple persistent volume claims results in multiple storage buckets being created, one for each claim. This can result in an increased storage cost. Another option is to create a single persistent volume claim that can be shared for each directory needed for Octopus.

The following YAML shows an example of creating a single persistent volume claim, using the `azurefile` storage class:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: octopus-storage-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 4Gi
```

In the `volumes` definition of the Stateful Set (used for your [Octopus Server nodes](#octopus-server-nodes)), you can mount the single volume:

```yaml
volumes:
  - name: octopus-storage-vol
    persistentVolumeClaim:
      claimName: octopus-storage-claim
```

Then you can reference the volume multiple times in the `volumeMounts` definition. This is achieved by using the [volumeMounts.subPath](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath) property to specify a sub-path inside the referenced volume instead of its root:

```yaml
volumeMounts:
  - name: octopus-storage-vol
    mountPath: /repository
    subPath: repository
  - name: octopus-storage-vol
    mountPath: /artifacts
    subPath: artifacts
  - name: octopus-storage-vol
    mountPath: /taskLogs
    subPath: taskLogs
  - name: octopus-storage-vol
    mountPath: /eventExports
    subPath: eventExports
```

## Deploying the Octopus Server

Once the pre-requisites are in place, we need to install the Octopus Server. This can be done via our official Helm chart, or the Kubernetes resources can be directly deployed.

### Helm Chart \{#helm-chart}

An [official chart](https://github.com/OctopusDeploy/helm-charts/tree/main/charts/octopus-deploy) for deploying Octopus in a Kubernetes cluster via [Helm](https://helm.sh) is available.
See the [usage instructions](https://github.com/OctopusDeploy/helm-charts/tree/main/charts/octopus-deploy#usage) for details of how to install the Helm chart. The chart packages are available on [DockerHub](https://hub.docker.com/r/octopusdeploy/octopusdeploy-helm).

## Octopus Server nodes \{#octopus-server-nodes}

As an alternative to using Helm, you can directly install the Kubernetes resources into your cluster. Octopus requires the server component to be installed as a [Stateful Set](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset). A Stateful Set is used rather than a Kubernetes Deployment, as the Stateful Set provides:

- Fixed names
- Consistent ordering
- An initial deployment process that rolls out one pod at a time, ensuring each is healthy before the next is started.

This functionality works very nicely when deploying Octopus, as we need to ensure that Octopus instances start sequentially so only one instance attempts to apply updates to the database schema. However, redeployments (e.g. when upgrading) do need special consideration; see the [Upgrading Octopus in Kubernetes](#upgrading-octopus-in-kubernetes) section for further details.

The following YAML creates a Stateful Set with two pods. These pods will be called `octopus-0` and `octopus-1`, which will also be the value assigned to the `statefulset.kubernetes.io/pod-name` label. This is how we can link services exposing individual pods. Each pod then mounts a single shared volume for the artifacts, built-in feed packages, task logs, server logs, and event exports.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: octopus
spec:
  selector:
    matchLabels:
      app: octopus
  serviceName: "octopus"
  replicas: 2
  template:
    metadata:
      labels:
        app: octopus
    spec:
      affinity:
        # Try and keep Octopus nodes on separate Kubernetes nodes
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - octopus
                topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
        - name: octopus-storage-vol
          persistentVolumeClaim:
            claimName: octopus-storage-claim
      containers:
        - name: octopus
          image: octopusdeploy/octopusdeploy:2022.4.8471
          securityContext:
            privileged: true
          env:
            - name: ACCEPT_EULA
              # "Y" means accepting the EULA at https://octopus.com/company/legal
              value: "Y"
            - name: OCTOPUS_SERVER_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DB_CONNECTION_STRING
              value: Server=mssql,1433;Database=Octopus;User Id=SA;Password=Password01!
            - name: ADMIN_USERNAME
              value: admin
            - name: ADMIN_PASSWORD
              value: Password01!
            - name: ADMIN_EMAIL
              value: admin@example.org
            - name: OCTOPUS_SERVER_BASE64_LICENSE
              # Your base64 encoded license key goes here. When using more than one node, a HA license is required. Without a HA license, the stateful set can have a replica count of 1.
              value:
            - name: MASTER_KEY
              # The base64 Master Key to use to connect to an existing database. If not supplied, and the database does not exist, it will generate a new one.
              value:
          ports:
            - containerPort: 8080
              name: web
            - containerPort: 10943
              name: tentacle
          volumeMounts:
            - name: octopus-storage-vol
              mountPath: /artifacts
              subPath: artifacts
            - name: octopus-storage-vol
              mountPath: /repository
              subPath: repository
            - name: octopus-storage-vol
              mountPath: /taskLogs
              subPath: taskLogs
            - name: octopus-storage-vol
              mountPath: /eventExports
              subPath: eventExports
            - name: octopus-storage-vol
              mountPath: /home/octopus/.octopus/OctopusServer/Server/Logs
              subPathExpr: serverLogs/$(OCTOPUS_SERVER_NODE_NAME)
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/bash
                  - -c
                  - '[[ -f /Octopus/Octopus.Server ]] && EXE="/Octopus/Octopus.Server" || EXE="dotnet /Octopus/Octopus.Server.dll"; $EXE node --instance=OctopusServer --drain=true --wait=600 --cancel-tasks;'
            # postStart must finish in 5 minutes or the container will fail to create
            postStart:
              exec:
                command:
                  - /bin/bash
                  - -c
                  - 'URL=http://localhost:8080; x=0; while [ $x -lt 9 ]; do response=$(/usr/bin/curl -k $URL/api/octopusservernodes/ping --write-out %{http_code} --silent --output /dev/null); if [ "$response" -ge 200 ] && [ "$response" -le 299 ]; then break; fi; if [ "$response" -eq 418 ]; then [[ -f /Octopus/Octopus.Server ]] && EXE="/Octopus/Octopus.Server" || EXE="dotnet /Octopus/Octopus.Server.dll"; $EXE node --instance=OctopusServer --drain=false; now=$(date); echo "${now} Server cancelling drain mode."; break; fi; now=$(date); echo "${now} Server is not ready, can not disable drain mode."; x=$((x+1)); sleep 30; done;'
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - URL=http://localhost:8080; response=$(/usr/bin/curl -k $URL/api/serverstatus/hosted/internal --write-out %{http_code} --silent --output /dev/null); /usr/bin/test "$response" -ge 200 && /usr/bin/test "$response" -le 299
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 5
            failureThreshold: 60
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - URL=http://localhost:8080; response=$(/usr/bin/curl -k $URL/api/octopusservernodes/ping --write-out %{http_code} --silent --output /dev/null); /usr/bin/test "$response" -ge 200 && /usr/bin/test "$response" -le 299 || /usr/bin/test "$response" -eq 418
            periodSeconds: 30
            timeoutSeconds: 5
            failureThreshold: 10
          startupProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - URL=http://localhost:8080; response=$(/usr/bin/curl -k $URL/api/octopusservernodes/ping --write-out %{http_code} --silent --output /dev/null); /usr/bin/test "$response" -ge 200 && /usr/bin/test "$response" -le 299 || /usr/bin/test "$response" -eq 418
            failureThreshold: 30
            periodSeconds: 60
```

:::div{.hint}
**Change the Default values:**
If you use the YAML definition above, remember to change the default values entered including the Admin Username, Admin Password, and the version of the `octopusdeploy/octopusdeploy` image to use. You also need to provide values for the License Key and database Master Key.
:::

Once fully deployed, this Stateful Set configuration will have three load balancers, and three public IPs.

The `octopus-web` service is used to access the web interface. The Octopus Web Portal can make requests to any node, so load balancing across all the nodes means the web interface is accessible even if one node is down.

The `octopus-0` service is used to point Polling Tentacles to the first node, and the `octopus-1` service is used to point Polling Tentacles to the second node.
We have also exposed the web interface through these services, which gives the ability to directly interact with a given node, but the `octopus-web` service should be used for day to day work as it is load balanced.

The next sections describe the Stateful Set definition in more detail.

### Octopus Server Pod affinity \{#server-pod-affinity}

For a greater degree of reliability, [Pod anti-affinity rules](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) are used in the Stateful Set to ensure Octopus pods are not placed onto the same node. This ensures the loss of a node does not bring down the Octopus High Availability cluster.

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - octopus
          topologyKey: kubernetes.io/hostname
```

### Octopus Server Pod logs \{#server-pod-logs}

In addition to the shared folders that are mounted for Packages, Artifacts, Task Logs, and Event Exports, each Octopus Server node (Pod) also writes logs to a local folder in each running container.

To mount the same volume used for the shared folders for the server logs, we need a way to create a sub-folder on the external volume that's unique to each Octopus Server node running in a Pod. It's possible to achieve this using a special expression known as [subPathExpr](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment). The server logs folder is mounted to the unique sub-folder determined by the environment variable `OCTOPUS_SERVER_NODE_NAME`, which is simply the pod name.
```yaml
- name: octopus-storage-vol
  mountPath: /home/octopus/.octopus/OctopusServer/Server/Logs
  subPathExpr: serverLogs/$(OCTOPUS_SERVER_NODE_NAME)
```

An alternative option to using an external volume mount is to use a [volume claim template](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage). For each `volumeClaimTemplates` entry defined in a `StatefulSet`, each Pod receives one PersistentVolumeClaim. When a Pod is (re-)scheduled onto a node, its `volumeMounts` mount the PersistentVolumes associated with its PersistentVolumeClaims.

```yaml
volumeClaimTemplates:
  - metadata:
      name: server-logs-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200Mi
```

You can then mount the folder for the server logs, with each Octopus Server Node (Pod) getting its own persistent storage:

```yaml
- name: server-logs-vol
  mountPath: /home/octopus/.octopus/OctopusServer/Server/Logs
```

:::div{.hint}
**Note:** The PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods, or the StatefulSet, are deleted. This must be done manually.
:::

### Container lifecycle hooks \{#container-lifecycle-hooks}

The `preStop` hook is used to drain an Octopus Server node before it is stopped. This gives the node time to complete any running tasks and prevents it from starting new tasks. The `postStart` hook does the reverse and disables drain mode when the Octopus Server node is up and running.

### Readiness, start up, and liveness probes \{#container-probes}

The `readinessProbe` is used to ensure the Octopus Server node is responding to network traffic before the pod is marked as ready. The `startupProbe` is used to delay the `livenessProbe` until such time as the node is started, and the `livenessProbe` runs continuously to ensure the Octopus Server node is functioning correctly.
### UI-only and back-end nodes \{#ui-and-backend-nodes}

When managing an Octopus High Availability cluster, it can be beneficial to separate the Octopus Web Portal from the deployment orchestration of tasks that Octopus Server provides. It's possible to create *UI-only* nodes that have the sole responsibility to serve web traffic for the Octopus Web Portal and the [Octopus REST API](/docs/octopus-rest-api).

:::div{.hint}
By default, all Octopus Server nodes are task nodes because the default task cap is set to `5`. To create UI-only Octopus Server nodes, you need to set the task cap for each node to `0`.
:::

When running Octopus in Kubernetes, it'd be nice to simply increase the `replicaCount` property and direct web traffic to only certain pods in our Stateful Set. However, it takes additional configuration to set up UI-only nodes, as the Stateful Set workload we created previously has web traffic directed to pods with the label `app:octopus`. It's not currently possible to use [Match expressions](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements) to direct web traffic to only certain pods.

In order to create UI-only nodes in Kubernetes, you need to perform some additional configuration:

- Create an additional Stateful Set just for the UI-only nodes, for example called `octopus-ui`.
- Change the [container lifecycle hooks](#container-lifecycle-hooks) for the `octopus-ui` Stateful Set to ensure the nodes don't start drained, and include the `node` command to [set the task cap](/docs/octopus-rest-api/octopus.server.exe-command-line/node) to `0`.
- Update the `octopus-web` Load balancer Service to direct traffic to pods with the label `app:octopus-ui`.

:::div{.hint}
If you use Polling Tentacles, you don't need to expose port `10943` on the `LoadBalancer` Service definition for the UI-only nodes as they won't be responsible for handling deployments or other tasks.
For the same reason, you don't need to configure any Polling Tentacles to poll UI-only nodes.
:::

### Accessing Server node logs \{#access-pod-server-logs}

When running Octopus Server on Windows Server, to access the logs for an Octopus Server Node, you'd typically either log into the Server using Remote Desktop and access them locally, or you might [publish the logs to a centralized logging tool](https://help.octopus.com/t/how-can-i-configure-octopus-deploy-to-write-logs-to-a-centralized-logger-such-as-seq-splunk-or-papertrail/24551).

In Kubernetes there are a number of different options to access the Octopus Server Node logs.

Using `kubectl` you can access the logs for each pod by running the following commands:

```bash
# Get logs for Node 0
kubectl logs octopus-0
# Get logs for Node 1
kubectl logs octopus-1
```

If you want to watch the logs in real-time, you can tail the logs using the following commands:

```bash
# Tail logs for Node 0
kubectl logs octopus-0 -f
# Tail logs for Node 1
kubectl logs octopus-1 -f
```

Sometimes it can be useful to see the logs from all nodes in one stream. For this you can use the `octopus` label selector:

```bash
kubectl logs -l app=octopus
```

You can also view the logs interactively. Here is an example opening an interactive shell to the `octopus-0` pod and tailing the Server logs:

```bash
kubectl exec -it octopus-0 -- bash
# Change PWD to Logs folder
cd /home/octopus/.octopus/OctopusServer/Server/Logs
# Tail logs
sudo tail OctopusServer.txt
```

If you've configured your Octopus Server node logs to be mounted to a [unique folder per pod on an external volume](#server-pod-logs) then it's possible to mount the external volume on a virtual machine that can access the volume.
Here is an example of installing the necessary tooling and commands to mount a Google [NFS Filestore](https://cloud.google.com/filestore/docs/creating-instances) volume called `vol1`, accessible by the private IP address of `10.0.0.1` in a Linux VM:

```bash
# Install tools
sudo apt-get -y update && sudo apt-get install nfs-common
# Make directory to mount
sudo mkdir -p /mnt/octo-ha-nfs
# Mount NFS
sudo mount 10.0.0.1:/vol1 /mnt/octo-ha-nfs
# tail logs
sudo tail /mnt/octo-ha-nfs/serverLogs/OctopusServer.txt
```

## Upgrading Octopus in Kubernetes \{#upgrading-octopus-in-kubernetes}

An initial deployment of the [Stateful Set described above](#octopus-server-nodes) works exactly as Octopus requires; one pod at a time is successfully started before the next. This gives the first node a chance to update the SQL schema with any required changes, and all other nodes start up and share the already configured database.

One limitation with Stateful Sets is how they process updates. For example, if the Docker image version was updated, by default the [rolling update strategy](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) is used. A rolling update deletes and recreates each pod, which means that during the update there will be a mix of old and new versions of Octopus. This won't work, as the new version may apply schema updates that the old version can not use, leading to unpredictable results at best, and could result in corrupted data.

The typical solution to this problem is to use a [recreate deployment strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment). Unfortunately, Stateful Sets don't support the recreate strategy. What this means is that the Stateful Set can not be updated in place, but instead must be deleted and then a new version deployed.
### Delete the Stateful Set

To delete the Stateful Set, first you can verify if the set exists by running the following `kubectl` command:

```bash
kubectl get statefulset octopus -o jsonpath="{.status.replicas}"
```

This checks to see if the Stateful Set exists by retrieving its `replicas` count. If the Stateful Set doesn't exist, then the command will complete with an error:

```
Error from server (NotFound): statefulsets.apps "octopus" not found.
```

If the Stateful Set exists, you can delete it by running the following `kubectl` command:

```bash
kubectl delete statefulset octopus
```

### Deploy new Stateful Set

Once the old Stateful Set has been deleted, a fresh copy of the Stateful Set can then be deployed. It will start the new pods one by one, allowing the database update to complete as expected.

## Octopus in Kubernetes with SSL

It's recommended best practice to access your Octopus instance over a secure HTTPS connection. While this guide doesn't include specific instructions on how to configure access to Octopus Server in Kubernetes using an SSL/TLS certificate, there are many guides available.

In Kubernetes, this can be configured using an [Ingress Controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/), for example [NGINX](https://kubernetes.github.io/ingress-nginx/user-guide/tls/). For web traffic destined for the Octopus Web Portal and REST API, you would terminate SSL on the ingress controller. For Polling Tentacles, passthrough would need to be allowed, usually on port `10943`.
The following YAML creates an Ingress resource that routes to the service named `octopus` on port `8080`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: octopus-ingress-example
spec:
  tls:
    - hosts:
        - your.octopus.domain.com
      # This assumes tls-secret exists and the SSL
      # certificate contains a CN for your.octopus.domain.com
      secretName: tls-secret
  ingressClassName: nginx
  rules:
    - host: your.octopus.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              # This assumes octopus exists and routes to healthy endpoints
              service:
                name: octopus
                port:
                  number: 8080
```

## Trusting custom/internal Certificate Authority (CA)

Octopus Server can interface with several external sources (feeds, git repos, etc.), and those sources are often configured to use SSL/TLS for secure communication. It is common for organizations to have their own Certificate Authority (CA) servers for their internal networks. A CA server can issue SSL certificates for internal resources, such as build servers or internally hosted applications, without purchasing from a third-party vendor. Technologies such as Group Policy Objects (GPO) can configure machines (servers and clients) to trust the CA automatically, so users don't have to configure trust for them manually. However, this is not inherited in Kubernetes containers.

When attempting to configure a connection to an external resource with an untrusted CA, you'll most likely encounter an error similar to this:

```
Could not connect to the package feed. The SSL connection could not be established, see inner exception. The remote certificate is invalid because of errors in the certificate chain: UntrustedRoot
```

Kubernetes provides a method to incorporate a certificate without having to modify hosts or embed the certificate within the container by using a ConfigMap.

### Create ConfigMap

To apply the certificate to the cluster, you'll need to first get the certificate file in either `.pem` or `.crt` format.
Once you have the file, use `kubectl` to create a ConfigMap from it:

```bash
kubectl -n <namespace> create configmap ca-pemstore --from-file=my-cert.pem
```

### Add ConfigMap to YAML for Octopus Server

With the ConfigMap created, add a `volumeMounts` component to the container section for Octopus Server. The following is an abbreviated portion of the YAML:

```yaml
...
containers:
  - name: octopus
    image: octopusdeploy/octopusdeploy:2022.4.8471
    volumeMounts:
      - name: ca-pemstore
        mountPath: /etc/ssl/certs/my-cert.pem
        subPath: my-cert.pem
        readOnly: false
...
```

Add the following excerpt to the end of the Octopus Server YAML:

```yaml
volumes:
  - name: ca-pemstore
    configMap:
      name: ca-pemstore
```

## Octopus in Kubernetes example \{#octopus-in-kubernetes-example}

View a working example that deploys an Octopus High Availability configuration to a GKE Kubernetes cluster in our [samples instance](https://samples.octopus.app/app#/Spaces-105/projects/octopus-ha-in-gke/operations/runbooks/Runbooks-1862/process/RunbookProcess-Runbooks-1862). The runbook consists of a number of [Deploy Kubernetes YAML](/docs/deployments/kubernetes/deploy-raw-yaml) steps that deploy the resources discussed in this guide.

# GCP Cloud SQL

Source: https://octopus.com/docs/installation/sql-database/gcp-cloud-sql.md

Each Octopus Server node stores project, environment, and deployment-related data in a shared Microsoft SQL Server Database. Since this database is shared, it's important that the database server is also highly available.

To host the Octopus SQL database in GCP, there are two options to consider:

- [SQL Server Virtual Machine instances](https://cloud.google.com/compute/docs/instances/sql-server/creating-sql-server-instances) - To run SQL Server on a VM, please refer to our [self-managed SQL Server guide](/docs/installation/sql-database/self-managed-sql-server).
- [Cloud SQL](https://cloud.google.com/sql/sqlserver)

## High Availability

The database is a critical component of Octopus Deploy.
If the database is lost or destroyed, all your configuration will be lost with it. We highly recommend leveraging a combination of backups and SQL Server's high availability functionality. We recommend using [high availability](https://cloud.google.com/sql/docs/sqlserver/high-availability) with Cloud SQL to ensure the resilience and availability of your Octopus database. This will ensure that your data is replicated across multiple zones within your primary region. In the event of a failure of the primary zone, Cloud SQL automatically switches to the secondary zone. ## Disaster Recovery For disaster recovery scenarios, [we recommend leveraging a hot/cold configuration](https://octopus.com/whitepapers/best-practice-for-self-hosted-octopus-deploy-ha-dr). To achieve this with Cloud SQL, we recommend using [Replication](https://cloud.google.com/sql/docs/sqlserver/replication). All database transactions performed in the primary region will be asynchronously replicated to the secondary region. In the event of a primary region failure, we recommend following Google's [documentation](https://cloud.google.com/sql/docs/sqlserver/replication/cross-region-replicas) to promote the replica. Following this, you would need to reconfigure your Octopus connection string to use the newly promoted replica. :::div{.warning} When a disaster occurs, any data not synchronized will be lost. Depending on the replication speed, this could be up to a couple of minutes.
::: # Automated Installation Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/automated-installation.md ## Automated installation via Terraform The Kubernetes Agent can be installed and managed using a combination of the Kubernetes Agent [Helm chart >= v2.2.1](https://hub.docker.com/r/octopusdeploy/kubernetes-agent), [Octopus Deploy >= v0.30.0 Terraform provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest) and/or [Helm Terraform provider](https://registry.terraform.io/providers/hashicorp/helm). ### Octopus Deploy & Helm Using a combination of the Octopus Deploy and Helm providers you can completely manage the Kubernetes Agent via Terraform. :::div{.info} To ensure that the Kubernetes Agent is correctly installed as a deployment target in Octopus, certain criteria must hold for the following Terraform resource properties: | **Kubernetes Agent resource** | | **Helm resource (chart value)** | | ------------------------------------------------------------- | --------------------------------------------------------- | ------------------------------- | | `octopusdeploy_kubernetes_agent_deployment_target.name` | must be the same value as | `agent.name` | | `octopusdeploy_kubernetes_agent_deployment_target.uri` | must be the same value as | `agent.serverSubscriptionId` | | `octopusdeploy_kubernetes_agent_deployment_target.thumbprint` | is the thumbprint calculated from the certificate used in | `agent.certificate` | ::: :::div{.warning} Always specify the major version in the **version** property on the **helm_release** resource (e.g. `version = "2.*.*"`) to prevent Terraform from defaulting to the latest Helm chart version. This is important, as a newer major version of the Agent Helm chart could introduce breaking changes. When upgrading to a new major version of the Agent, create a separate resource to ensure the Helm values match the updated schema. 
[Automatic upgrade support](/docs/kubernetes/targets/kubernetes-agent/upgrading#automatic-updates-coming-in-20234) is expected in version 2023.4. ::: :::div{.warning} It is recommended to completely delete the Kubernetes namespace when removing a Kubernetes agent. If possible, prefer replacing `create_namespace = true` with an explicit Kubernetes namespace resource in your terraform configuration. ::: ```ruby terraform { required_providers { octopusdeploy = { source = "OctopusDeploy/octopusdeploy" version = "1.7.1" } helm = { source = "registry.terraform.io/hashicorp/helm" version = "3.1.1" } } } locals { octopus_api_key = "API-XXXXXXXXXXXXXXXX" octopus_address = "https://myinstance.octopus.app" octopus_grpc_address = "https://myinstance.octopus.app:8443" octopus_polling_address = "https://polling.myinstance.octopus.app" } provider "octopusdeploy" { address = local.octopus_address api_key = local.octopus_api_key } provider "helm" { kubernetes = { # Configure authentication for me } } data "octopusdeploy_teams" "everyone" { partial_name = "Everyone" skip = 0 take = 1 } resource "octopusdeploy_space" "monitoring" { name = "Kubernetes Examples" description = "Terraform created examples" space_managers_teams = [data.octopusdeploy_teams.everyone.teams[0].id] } resource "octopusdeploy_environment" "example" { name = "Example" space_id = octopusdeploy_space.monitoring.id } # Create the Kubernetes agent deployment target resource "octopusdeploy_polling_subscription_id" "agent_subscription_id" {} resource "octopusdeploy_tentacle_certificate" "agent_cert" {} resource "octopusdeploy_kubernetes_agent_deployment_target" "example" { name = "Example Kubernetes Agent" space_id = octopusdeploy_space.monitoring.id environments = [octopusdeploy_environment.example.id] roles = ["k8s-agent", "monitoring-enabled"] thumbprint = octopusdeploy_tentacle_certificate.agent_cert.thumbprint uri = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri } # Create the Kubernetes 
monitor resource "random_uuid" "monitor_installation" {} resource "octopusdeploy_kubernetes_monitor" "example" { space_id = octopusdeploy_space.monitoring.id installation_id = random_uuid.monitor_installation.result machine_id = octopusdeploy_kubernetes_agent_deployment_target.example.id } # Install the Kubernetes agent and monitor via Helm resource "helm_release" "kubernetes_agent" { name = "example-kubernetes-agent" repository = "oci://registry-1.docker.io" chart = "octopusdeploy/kubernetes-agent" version = "2.*.*" atomic = true create_namespace = true namespace = "octopus-agent-example" set = [ { name = "agent.acceptEula" value = "Y" }, { name = "agent.name" value = octopusdeploy_kubernetes_agent_deployment_target.example.name }, { name = "agent.serverUrl" value = local.octopus_address }, { name = "agent.serverCommsAddress" value = local.octopus_polling_address }, { name = "agent.serverSubscriptionId" value = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri }, { name = "agent.space" value = octopusdeploy_space.monitoring.name }, { name = "agent.deploymentTarget.enabled" value = "true" }, # Kubernetes monitor configuration (optional) { name = "kubernetesMonitor.enabled" value = "true" }, { name = "kubernetesMonitor.registration.register" value = "false" }, { name = "kubernetesMonitor.monitor.serverGrpcUrl" value = local.octopus_grpc_address }, { name = "kubernetesMonitor.monitor.installationId" value = octopusdeploy_kubernetes_monitor.example.installation_id }, { name = "kubernetesMonitor.monitor.serverThumbprint" value = octopusdeploy_kubernetes_monitor.example.certificate_thumbprint } ] set_sensitive = [ { name = "agent.serverApiKey" value = local.octopus_api_key }, { name = "agent.certificate" value = octopusdeploy_tentacle_certificate.agent_cert.base64 }, { name = "kubernetesMonitor.monitor.authenticationToken" value = octopusdeploy_kubernetes_monitor.example.authentication_token } ] set_list = [ { name = 
"agent.deploymentTarget.initial.environments" value = octopusdeploy_kubernetes_agent_deployment_target.example.environments }, { name = "agent.deploymentTarget.initial.tags" value = octopusdeploy_kubernetes_agent_deployment_target.example.roles } ] } ``` ### Helm The Kubernetes Agent can be installed using just the Helm provider alone. However, the associated deployment target that is created in Octopus cannot be managed solely using the Helm provider. This is because the Helm chart values relating to the agent are only used on initial installation. Any further modifications to them will not trigger an update to the deployment target unless you perform a complete reinstall of the agent. If you don't intend to manage the Kubernetes Agent configuration through Terraform (choosing to handle it via the Octopus Portal or API instead), this option will be beneficial to you as it is simpler to set up. ```ruby terraform { required_providers { helm = { source = "registry.terraform.io/hashicorp/helm" version = "3.1.1" } } } locals { octopus_api_key = "API-XXXXXXXXXXXXXXXX" octopus_address = "https://myinstance.octopus.app" octopus_grpc_address = "https://myinstance.octopus.app:8443" octopus_polling_address = "https://polling.myinstance.octopus.app" } provider "helm" { kubernetes = { # Configure authentication for me } } # Install the Kubernetes agent and monitor via Helm resource "helm_release" "kubernetes_agent" { name = "example-kubernetes-agent" repository = "oci://registry-1.docker.io" chart = "octopusdeploy/kubernetes-agent" version = "2.*.*" atomic = true create_namespace = true namespace = "octopus-agent-example" set = [ { name = "agent.acceptEula" value = "Y" }, { name = "agent.name" value = "octopus-agent" }, { name = "agent.serverUrl" value = local.octopus_address }, { name = "agent.serverCommsAddress" value = local.octopus_polling_address }, { name = "agent.space" value = "Default" }, { name = "agent.deploymentTarget.enabled" value = "true" } ] set_sensitive = [ { 
name = "agent.serverApiKey" value = local.octopus_api_key } ] set_list = [ { name = "agent.deploymentTarget.initial.environments" value = ["Development"] }, { name = "agent.deploymentTarget.initial.tags" value = ["k8s-agent"] } ] } ``` # OpenShift Kubernetes cluster Source: https://octopus.com/docs/kubernetes/targets/kubernetes-api/openshift.md [OpenShift](https://www.openshift.com/) is a popular Kubernetes (K8s) management platform by Red Hat. OpenShift provides an interface to manage and deploy containers to your K8s cluster as well as centrally manage security. The OpenShift Container Platform rides on top of standard Kubernetes, which means it can easily be integrated with Octopus Deploy as a deployment target. ## Authentication To connect your OpenShift K8s cluster to Octopus Deploy, you must first create a means of authenticating with it. We recommend that you create a [Service Account](https://docs.openshift.com/container-platform/4.4/authentication/understanding-and-creating-service-accounts.html) for Octopus Deploy to use. :::div{.hint} Service Accounts in OpenShift are project-specific. You will need to create a Service Account per project (namespace) for Octopus Deploy in OpenShift. ::: ### Create service account Each project within OpenShift has a section where you can define service accounts. After your project has been created: - Expand **User Management**. - Click **Service Accounts**. - Click **Create Service Account**. ### Create role binding The Service Account will need a role so it can create resources on the cluster. In this example, the Service Account `octopusdeploy` is granted the role `cluster-admin` for the currently logged-in project: ``` C:\Users\Shawn.Sesna\.kube>oc.exe policy add-role-to-user cluster-admin -z octopusdeploy ``` ### Service Account Token OpenShift will automatically create a Token for your Service Account. This Token is how the Service Account authenticates to OpenShift from Octopus Deploy.
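Under the hood, a Token account works because the Kubernetes/OpenShift API accepts the service account token as a standard HTTP Bearer credential. A sketch of that request shape (the cluster URL and token value here are placeholders):

```python
# Sketch: what token authentication to an OpenShift/Kubernetes API looks like
# at the HTTP level. The cluster URL and token below are placeholder values.
import urllib.request

def build_k8s_request(cluster_url: str, path: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request to the Kubernetes API."""
    return urllib.request.Request(
        cluster_url.rstrip("/") + path,
        headers={
            "Authorization": f"Bearer {token}",  # the service account token
            "Accept": "application/json",
        },
    )

req = build_k8s_request("https://api.crc.testing:6443", "/api/v1/namespaces", "example-token")
print(req.full_url)  # https://api.crc.testing:6443/api/v1/namespaces
# To execute against a real cluster: urllib.request.urlopen(req)
```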
To retrieve the value of the token: - Click **Service Accounts**. - Click **octopusdeploy** (or whatever you named yours). - Scroll down to the **Secrets** section. - Click on the entry that has the `type` of `kubernetes.io/service-account-token`. :::figure ![OpenShift Service Account](/docs/img/infrastructure/deployment-targets/kubernetes-target/openshift/openshift-service-account-secrets.png) ::: Copy the Token value by clicking on the copy to clipboard icon on the right-hand side. #### Getting the cluster URL To add OpenShift as a deployment target, you need the URL to the cluster. The `status` command of the `oc.exe` command-line tool will display the URL of the OpenShift K8s cluster: ``` C:\Users\Shawn.Sesna\.kube>oc.exe status In project testproject on server https://api.crc.testing:6443 ``` #### Project names are Namespaces When you create projects within OpenShift, you are creating Namespaces in the K8s cluster. The name of your project is the Namespace within the K8s cluster. ## Connecting an OpenShift Kubernetes Deployment Target Adding an OpenShift K8s target is done in exactly the same way you would add any other [Kubernetes target](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/#add-a-kubernetes-target). # Rancher Kubernetes cluster Source: https://octopus.com/docs/kubernetes/targets/kubernetes-api/rancher.md [Rancher](http://www.rancher.com) is a Kubernetes (K8s) cluster management tool that can be used to manage K8s clusters on local infrastructure, cloud infrastructure, and even cloud-managed K8s services. Not only can Rancher be used to centrally manage all of your K8s clusters, it can also be used to provide a central point for deployment, proxying commands through Rancher to the K8s clusters it manages. This provides the advantage of managing access to your K8s clusters without having to add users to the clusters individually.
## Authentication Before you can add your Rancher-managed cluster, you must first create a means of authenticating to it. This can be accomplished using the Rancher UI to create an access key. 1. Log into Rancher, then click on your profile in the upper right-hand corner. 1. Select **API & Keys**. 1. Click on **Add Key**. 1. Give the API Key an expiration and a scope. 1. We recommend adding a description so you know what this key will be used for, then click **Create**. After you click **Create**, you will be shown the API Key information: - Access Key (username): Used for Username/Password accounts in Octopus Deploy. - Secret Key (password): Used for Username/Password accounts in Octopus Deploy. - Bearer Token: Used for Token accounts in Octopus Deploy. **Save this information; you will not be able to retrieve it later.** ## Rancher cluster endpoints As previously mentioned, you can proxy communication to your clusters through Rancher. Instead of connecting to the individual K8s API endpoints directly, you can use API endpoints within Rancher to issue commands. The format of the URL is as follows: `https://<RANCHER-URL>/k8s/clusters/<CLUSTER-ID>`. A quick way to find the correct URL is to grab it from the provided Kubeconfig file information. For each cluster you define, Rancher provides a *Kubeconfig file* that can be downloaded directly from the UI. To find it, select the cluster you need from the Global dashboard, and click the **Kubeconfig File** button: :::figure ![Rancher Kubeconfig file](/docs/img/infrastructure/deployment-targets/kubernetes-target/rancher/rancher-kubeconfig-file.png) ::: The next screen has the Kubeconfig file, which contains the specific URL you need to use to connect your cluster to Octopus Deploy: :::figure ![Rancher cluster URL](/docs/img/infrastructure/deployment-targets/kubernetes-target/rancher/rancher-cluster-url.png) ::: ## Add the account to Octopus Deploy In order for Octopus Deploy to deploy to the cluster, it needs credentials to log in with.
In the Octopus Web Portal, navigate to the **Infrastructure** tab and click **Accounts**. Use one of the methods Rancher provided you with, *Username and Password* or *Token*. 1. Click **ADD ACCOUNT**. 1. Select which account type you want to create. 1. Enter the values for your selection, then click **SAVE**. ## Connecting a Rancher Kubernetes Deployment Target Adding a Rancher K8s target is done in exactly the same way you would add any other [Kubernetes target](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/#add-a-kubernetes-target). The only Rancher specific component is the URL. Other than that, the process is exactly the same. # Static IP address Source: https://octopus.com/docs/octopus-cloud/static-ip.md Octopus Cloud is a multi-tenant service with several static IP addresses shared among customers in the same Azure region. Each Azure region uses a range of static IP addresses. The static IP address for an Octopus Cloud instance will be one from the range and may change to another static IP address within the range under certain situations. With a static IP address, you can lock down the ingress and egress communications between a Tentacle in your infrastructure and your Octopus Cloud instance. :::div{.hint} **Note:** The Octopus-hosted [Dynamic Workers](/docs/infrastructure/workers/dynamic-worker-pools) do not fall within the static IP range of your Octopus Cloud instance. If a known/static IP is required for your worker, please consider provisioning your own [external worker](/docs/infrastructure/workers#external-workers-external-workers). ::: The range of IP Addresses that your Octopus Cloud instance will use is listed in the technical section of the instance details page. 1. Log in to [Octopus.com](https://octopus.com). 1. Select your cloud instance. 1. Click **Configuration**. 1. Scroll down to the **Static IP addresses** section, and you will see the static IP addresses your instance can use. 
# octopus config list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-config-list.md List values from the config file. ```text Usage: octopus config list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus config list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # API examples Source: https://octopus.com/docs/octopus-rest-api/examples.md As you work with the Octopus API, you may need some guidance on how to perform certain actions or what parameters to provide. The [OctopusDeploy-API GitHub repository](https://github.com/OctopusDeploy/OctopusDeploy-Api) contains many examples using the API, with solutions covering: - PowerShell using the REST API. - PowerShell using Octopus.Client. - C# using Octopus.Client. - Python using the REST API. - Go using the [Go API Client for Octopus Deploy](https://github.com/OctopusDeploy/go-octopusdeploy). - TypeScript using the [TypeScript API Client for Octopus Deploy](https://github.com/OctopusDeploy/api-client.ts). In addition, this section includes many of the more common examples. ## Using the scripts To use the example scripts, you'll need to provide your Octopus Server URL and an [API Key](/docs/octopus-rest-api/how-to-create-an-api-key). There may be other values that need to be updated to fit your scenario, such as Space, Project, and Environment names.
:::div{.hint} **The examples provided are for reference and should be modified and tested before use in a production Octopus instance.** ::: ### C# examples The C# examples are written using [dotnet script](https://github.com/filipw/dotnet-script). The same logic can be used in a standard C# application. ### Octopus.Client examples Examples using [Octopus.Client](/docs/octopus-rest-api/octopus.client) require the library to be installed and a path to the library to be provided. ### Python examples The Python examples are written using **Python 3** and use the [Requests](https://requests.readthedocs.io/en/master/) library. Some examples also use the [urllib](https://docs.python.org/3/library/urllib.html) module. ### Go examples The Go examples are written using the [Go API Client for Octopus Deploy](https://github.com/OctopusDeploy/go-octopusdeploy). ### Java examples The Java examples are written using the [java-octopus-deploy](https://github.com/OctopusDeployLabs/java-octopus-deploy) Client. The Java Client library requires **Java 1.8** or above. ### TypeScript examples The TypeScript examples are written using the [TypeScript API Client for Octopus Deploy](https://github.com/OctopusDeploy/api-client.ts). ## Bulk operations Sometimes you want to perform an action on a resource in Octopus multiple times. For example, connecting a tenant to all of your projects. Running a script that performs a single operation over and over can become tedious. To help with this, we've included examples of [bulk operations](/docs/octopus-rest-api/examples/bulk-operations) using the Octopus REST API.
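Whatever the language, most of these examples boil down to the same call pattern: an HTTP request with your API key sent in the `X-Octopus-ApiKey` header. A minimal Python sketch of that pattern (the server URL and key are placeholders; `/api/spaces` is a standard endpoint):

```python
# Minimal Octopus REST API call pattern (sketch). The server URL and API key
# below are placeholders; X-Octopus-ApiKey is the authentication header used
# by the Octopus REST API.
import urllib.request

OCTOPUS_URL = "https://your-octopus-url"  # placeholder
API_KEY = "API-YOUR-KEY"                  # placeholder

def api_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request for the given API path."""
    return urllib.request.Request(
        OCTOPUS_URL.rstrip("/") + path,
        headers={"X-Octopus-ApiKey": API_KEY},
    )

req = api_request("/api/spaces")
print(req.full_url)  # https://your-octopus-url/api/spaces
# To execute against a real instance:
#   import json; spaces = json.load(urllib.request.urlopen(req))
```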
## Explore examples Explore the REST API examples further in this section: - [Accounts](/docs/octopus-rest-api/examples/accounts) - [Artifacts](/docs/octopus-rest-api/examples/artifacts) - [Certificates](/docs/octopus-rest-api/examples/certificates) - [Channels](/docs/octopus-rest-api/examples/channels) - [Deployment process](/docs/octopus-rest-api/examples/deployment-process) - [Deployment targets](/docs/octopus-rest-api/examples/deployment-targets) - [Deployments](/docs/octopus-rest-api/examples/deployments) - [Environments](/docs/octopus-rest-api/examples/environments) - [Events](/docs/octopus-rest-api/examples/events) - [Feeds](/docs/octopus-rest-api/examples/feeds) - [Lifecycles](/docs/octopus-rest-api/examples/lifecycles) - [Project Groups](/docs/octopus-rest-api/examples/project-groups) - [Projects](/docs/octopus-rest-api/examples/projects) - [Releases](/docs/octopus-rest-api/examples/releases) - [Reports](/docs/octopus-rest-api/examples/reports) - [Runbooks](/docs/octopus-rest-api/examples/runbooks) - [Spaces](/docs/octopus-rest-api/examples/spaces) - [Step Templates](/docs/octopus-rest-api/examples/step-templates) - [Tag sets](/docs/octopus-rest-api/examples/tagsets) - [Tasks](/docs/octopus-rest-api/examples/tasks) - [Tenants](/docs/octopus-rest-api/examples/tenants) - [Users and Teams](/docs/octopus-rest-api/examples/users-and-teams) - [Variables](/docs/octopus-rest-api/examples/variables) - [Bulk Operations](/docs/octopus-rest-api/examples/bulk-operations) ## Get help from the community If you're looking for help with API scripts or want to share your own, join the [Octopus Community Slack channel](https://octopus.com/slack). It's a great place to get inspiration, ask questions, and connect with other Octopus users and employees. # Channels Source: https://octopus.com/docs/octopus-rest-api/examples/channels.md You can use the REST API to create and manage your [channels](/docs/releases/channels) in Octopus. 
Typical tasks can include: - [Create a channel](/docs/octopus-rest-api/examples/channels/create-channel) # Working with Spaces Source: https://octopus.com/docs/octopus-rest-api/octopus.client/working-with-spaces.md Working with anything other than the default space in the Octopus.Client library requires specifying the target space. There are two methods of specifying the target space with Octopus.Client. The first is the `OctopusClient.ForSpace` method:
PowerShell ```powershell # Create endpoint and client $endpoint = New-Object Octopus.Client.OctopusServerEndpoint("https://your-octopus-url", "API-YOUR-KEY") $client = New-Object Octopus.Client.OctopusClient($endpoint) # Get default repository and get space by name $repository = $client.ForSystem() $space = $repository.Spaces.FindByName("Space Name") # Get space specific repository and get all projects in space $repositoryForSpace = $client.ForSpace($space) $projects = $repositoryForSpace.Projects.GetAll() ```
C# ```csharp // Create endpoint and client var endpoint = new OctopusServerEndpoint("https://your-octopus-url", "API-YOUR-KEY"); var client = new OctopusClient(endpoint); // Get default repository and get space by name var repository = client.ForSystem(); var space = repository.Spaces.FindByName("Space Name"); // Get space specific repository and get all projects in space var repositoryForSpace = client.ForSpace(space); var projects = repositoryForSpace.Projects.GetAll(); ```
The other method is `OctopusRepositoryExtensions.ForSpace`:
PowerShell ```powershell # Create endpoint and repository $endpoint = New-Object Octopus.Client.OctopusServerEndpoint("https://your-octopus-url", "API-YOUR-KEY") $repository = New-Object Octopus.Client.OctopusRepository($endpoint) # Get space by name $space = $repository.Spaces.FindByName("Space Name") # Get space specific repository and get all projects in space $repositoryForSpace = [Octopus.Client.OctopusRepositoryExtensions]::ForSpace($repository, $space) $projects = $repositoryForSpace.Projects.GetAll() ```
C# ```csharp // Create endpoint and repository var endpoint = new OctopusServerEndpoint("https://your-octopus-url", "API-YOUR-KEY"); var repository = new OctopusRepository(endpoint); // Get space by name var space = repository.Spaces.FindByName("Space Name"); // Get space specific repository and get all projects in space var repositoryForSpace = repository.ForSpace(space); var projects = repositoryForSpace.Projects.GetAll(); ```
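The same space-scoping idea applies if you call the REST API directly instead of using Octopus.Client: most resources live under a space-scoped route such as `/api/{space-id}/projects`. A small sketch (the helper names and sample data are illustrative):

```python
# Sketch: space-scoped routes in the Octopus REST API. Resources such as
# projects live under /api/{space-id}/..., mirroring the space-specific
# repositories shown above. Helper names and sample data are illustrative.

def find_space_id(spaces: list, name: str) -> str:
    """Return the Id of the space with the given Name (shape of /api/spaces items)."""
    for space in spaces:
        if space.get("Name") == name:
            return space["Id"]
    raise KeyError(f"No space named {name!r}")

def space_scoped_path(space_id: str, resource: str) -> str:
    """Build a space-scoped API path."""
    return f"/api/{space_id}/{resource}"

# Sample payload shaped like the /api/spaces response
spaces = [{"Id": "Spaces-1", "Name": "Default"}, {"Id": "Spaces-42", "Name": "Space Name"}]
print(space_scoped_path(find_space_id(spaces, "Space Name"), "projects"))
# /api/Spaces-42/projects
```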
# Database Source: https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/database.md Use the database command to create or drop the Octopus database. **Database options** ``` Usage: octopus.server database [<options>] Where [<options>] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --create Creates a new empty database, and upgrades the database to the expected schema --connectionString=VALUE [Optional] Sets the database connection string to use, and saves it in the config file. If omitted, the value from the config file will be used to perform operations on the database. --masterKey=VALUE [Optional] Sets the Master Key to use when encrypting/decrypting data. If omitted, the value from the config file will be used. Use this option when pointing to an existing database that uses an existing Master Key. --upgrade Upgrades the database to the expected schema --skipLicenseCheck Skips the license check when performing a schema upgrade --skipDatabaseCompatibilityCheck Skips the database compatibility check --delete Deletes the database --grant=VALUE Grants db_owner access to the database Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example creates a new database for the `MyNewInstance` instance. This example expects the database for instance `MyNewInstance` not to exist. If it does, it will say it already exists and won't do anything: ``` octopus.server database --create --instance="MyNewInstance" ``` ## Create database with supplied master key This example creates a new database for the `MyInstance` instance using a supplied master key. 1. First, create a master key using the `openssl` command: ```bash openssl rand 16 | base64 ``` The output should be a base64-encoded string, similar to this: ```bash DVNbYEHJ9hmmH7YsVfLJQw== ``` 2.
Use the database command with the `--masterKey` parameter, replacing `<master key>` with your generated value: ``` octopus.server database --create --instance="MyInstance" --connectionString "<connection string>" --masterKey "<master key>" ``` # BitBucket Pipelines Source: https://octopus.com/docs/packaging-applications/build-servers/bitbucket-pipelines.md [Bitbucket](https://bitbucket.org/) is a git-based source-control platform made by Atlassian that serves as an alternative to GitHub with free unlimited private repos. [Bitbucket Pipelines](https://bitbucket.org/product/features/pipelines) is Atlassian's cloud-based continuous integration server, built using pre-configured Docker containers. :::div{.warning} As Bitbucket Pipelines is only available as a cloud offering, your Octopus Server must be accessible over the Internet. ::: ## Integrating with Bitbucket Pipelines When using Octopus Deploy with BitBucket, BitBucket Pipelines will be responsible for: - Checking for changes in source control. - Compiling the code. - Running unit tests. - Creating packages for deployment. Octopus Deploy will be used to take those packages and to push them to development, test, and production environments. Octopus Deploy can be integrated with BitBucket Pipelines in two ways: - Using the up-to-date [Octopus CLI Docker image](https://hub.docker.com/r/octopusdeploy/octo/) of the [Octopus CLI](/docs/octopus-rest-api/octopus-cli) command-line tool. - Using the new **experimental** BitBucket Pipe called [octopus-cli-run](https://bitbucket.org/octopusdeploy/octopus-cli-run/src/master/README/). :::div{.warning} **Experimental Pipe:** The `octopus-cli-run` Bitbucket Pipe is currently experimental. If you want to try the latest integration, and only need to use some of the more commonly used commands, for example, to manage your packages, releases, and deployments, then using the experimental Pipe might be the right choice for you.
However, if you need full control over integrating your Bitbucket Pipeline with Octopus, the pre-configured CLI Docker image is the recommended method to do that. ::: ## BitBucket Pipelines environment variables You can use [environment variables](https://confluence.atlassian.com/bitbucket/variables-in-pipelines-794502608.html) in your Pipelines (available from the **Settings ➜ Environment Variables** menu of your BitBucket repository), which is a great place to store sensitive information such as your Octopus Deploy API keys (which is ideally not something you store in your source control). For example: | Variable name | Description| | ------------- | ------- | | OCTOPUS_SERVER | The Octopus Server URL you wish to push the final package to | | OCTOPUS_APIKEY | The Octopus Deploy API Key required for authentication | ## BitBucket Pipeline configuration When you enable BitBucket Pipelines for your repository, BitBucket stores all the information it requires into a `bitbucket-pipelines.yml` file in the base of your repository. This is the file we need to modify to run our build, pack and/or push package commands. ### Docker CLI Example of packing and pushing Here's an example pipeline step that demonstrates using the `octo` CLI Docker image, which packs the current state of your repository into a zip file and then pushes that package to Octopus Deploy. ```yaml pipelines: default: - step: name: Deploy to Octopus image: octopusdeploy/octo:6.17.3-alpine script: - export VERSION=1.0.$BITBUCKET_BUILD_NUMBER - octo pack --id $BITBUCKET_REPO_SLUG --version $VERSION --outFolder ./out --format zip - octo push --package ./out/$BITBUCKET_REPO_SLUG.$VERSION.zip --server $OCTOPUS_SERVER --apiKey $OCTOPUS_APIKEY ``` ### Pipe example of packing and pushing To show how you can achieve the same pack and push commands as above, here's an example pipeline step, but this time using the `octopus-cli-run` Bitbucket Pipe. 
```yaml - step: name: octo pack + push script: - pipe: octopusdeploy/octopus-cli-run:0.38.0 variables: CLI_COMMAND: 'pack' ID: $BITBUCKET_REPO_SLUG FORMAT: 'Zip' VERSION: $VERSION OUTPUT_PATH: './out' - pipe: octopusdeploy/octopus-cli-run:0.38.0 variables: CLI_COMMAND: 'push' OCTOPUS_SERVER: $OCTOPUS_SERVER OCTOPUS_APIKEY: $OCTOPUS_APIKEY OCTOPUS_SPACE: $OCTOPUS_SPACE PACKAGES: [ "./out/$BITBUCKET_REPO_SLUG.$VERSION.zip" ] ``` :::div{.success} **Example Bitbucket Pipeline with octopus-cli-run Pipe:** View a working Pipeline example on our [samples Bitbucket repository](https://bitbucket.org/octopussamples/petclinic/addon/pipelines/home#!/). See the corresponding Octopus project on our [samples instance](https://samples.octopus.app/app#/Spaces-85/projects/petclinic/). ::: ## Learn more - [Bitbucket feature documentation](https://bitbucket.org/product/features/pipelines) - [Bitbucket Pipe for Octopus Deploy: octopus-cli-run](https://octopus.com/blog/octopus-bitbucket-pipe) - [Bitbucket Pipelines: Pipes and integrating with Octopus Deploy](https://octopus.com/blog/bitbucket-pipes-and-octopus-deploy) - [Webinar: Integrating your Atlassian Cloud Pipeline with Octopus Deploy](https://youtube.com/embed/yPjooXDJUA0) # Create packages with OctoPack Source: https://octopus.com/docs/packaging-applications/create-packages/octopack.md You can package full framework .NET applications from your continuous integration/automated build process with OctoPack. OctoPack adds a custom MSBuild target that hooks into the build process of your solution. When enabled, OctoPack will package your console application projects, Windows Service projects, and ASP.NET web applications when MSBuild runs. OctoPack works by calling `nuget.exe pack` to build the NuGet package, and `nuget.exe push` to publish the package (if so desired). OctoPack understands .NET applications and uses that knowledge to build the right kind of package for each kind of .NET application.
:::div{.warning} **OctoPack and .NET Core** OctoPack is not compatible with .NET Core applications. If you want to package .NET Core applications see [create packages with the Octopus CLI](/docs/packaging-applications/create-packages/octopus-cli). ::: ## Install OctoPack OctoPack is a NuGet package that you can install using the NuGet package installer or however you prefer to install NuGet packages. OctoPack should only be installed on projects that you are going to deploy, that means your console application projects, Windows Service projects, and ASP.NET web applications. Do not install OctoPack on unit tests, class libraries, or other supporting projects. ## Build packages Set the `RunOctoPack` MSBuild property to true, and OctoPack will create a NuGet package from your build. For example, if you are compiling from the command line, you might use: ```powershell msbuild MySolution.sln /t:Build /p:RunOctoPack=true ``` After the build completes, you will find a NuGet package in the output directory. This package is ready to be deployed with Octopus. See [Package Deployments](/docs/deployments/packages). ## Add a NuSpec file A `.nuspec` file describes the contents of your NuGet package. You can provide your own simple `.nuspec` file to your project. When MSBuild is invoked OctoPack tries to establish the name of your NuSpec file using these rules: 1. OctoPack will look for a variable called `OctoPackNuSpecFileName` to use as the NuSpec file. 1. If that isn't defined, OctoPack tries to find one based on your project name: - OctoPack will look for a variable called `OctoPackProjectName` to use as the NuSpec file. - If that isn't defined, OctoPack uses the project name. For example `Sample.Web.nuspec` if your project is named `Sample.Web`. Note: The `.nuspec` file needs to be in the same directory as your `.csproj` file. :::div{.hint} If you don't provide a NuSpec file, OctoPack will create one by guessing some of the settings from your project. 
:::

Here is an example `.nuspec` file:

```xml
<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>Sample.Web</id>
    <title>Your Web Application</title>
    <version>1.0.0</version>
    <authors>Your name</authors>
    <owners>Your name</owners>
    <licenseUrl>http://yourcompany.com</licenseUrl>
    <projectUrl>http://yourcompany.com</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>A sample project</description>
    <releaseNotes>This release contains the following changes...</releaseNotes>
  </metadata>
</package>
```

If you have an existing `.nuspec` file but you want the generated Octopus package name to be different from the `.nuspec` filename, you can use [NuGet replacement tokens](#nuget-replacement-tokens). For example, the `id` property in the `.nuspec` could be set as follows:

```xml
<id>$packageId$</id>
```

Then you would pass the package id you want as part of the `OctoPackNuGetProperties` MSBuild parameter:

```
/p:OctoPackNuGetProperties=packageid=YOUR-PACKAGE-ID
```

Remember to replace `YOUR-PACKAGE-ID` with the id for your package.

### Include additional files with your NuSpec file

If you need to include additional files, or you want to explicitly control which files are included in the package, you can do so by adding a `<files>` element to your custom `.nuspec` file. If the `<files>` section exists, by default OctoPack will not attempt to automatically add any extra files to your package, so you need to explicitly list which files you want to include. You can override this behavior with `/p:OctoPackEnforceAddingFiles=true`, which will instruct OctoPack to package a combination of files using its conventions and those defined by your `<files>` section:

```xml
<files>
  <file src="bin\*.dll" target="bin" />
  <file src="Content\*.css" target="Content" />
</files>
```

### NuGet replacement tokens

You can use NuGet replacement tokens inside your NuSpec file:

```xml
<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>Sample.$suffix$</id>
    <title>$title$</title>
    <version>$version$</version>
    <authors>$myname$</authors>
    <owners>Your name</owners>
    <licenseUrl>http://yourcompany.com</licenseUrl>
    <projectUrl>http://yourcompany.com</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>A sample project</description>
    <releaseNotes>This release contains the following changes...</releaseNotes>
  </metadata>
</package>
```

To set a value for these parameters, use the MSBuild property `OctoPackNuGetProperties`:

```powershell
msbuild MySolution.sln /t:Build /p:RunOctoPack=true "/p:OctoPackNuGetProperties=suffix=release;title=My Title;version=1.0.0;myname=Paul"
```

Learn more about the [NuSpec file format](http://docs.nuget.org/docs/reference/nuspec-reference).
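To make the substitution concrete, here is a minimal sketch (not NuGet's actual implementation) of how `$token$` markers resolve once values are supplied via `OctoPackNuGetProperties`:

```shell
# Illustrative sketch only: replacement tokens are a simple find-and-replace
# over the NuSpec text, using the values supplied on the MSBuild command line.
TEMPLATE='Sample.$suffix$ version $version$ by $myname$'
echo "$TEMPLATE" \
  | sed -e 's/\$suffix\$/release/' -e 's/\$version\$/1.0.0/' -e 's/\$myname\$/Paul/'
# prints: Sample.release version 1.0.0 by Paul
```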
## What is packaged?

OctoPack only packages the files in your .NET applications that are required to deploy the application.

If you are packaging a .NET application, OctoPack will **automatically package all files in the build output directory for the project**. In most cases this will be the `bin`, `bin\Debug`, or `bin\Release` folder, depending on the build configuration and whether you have [changed the build output directory for your project in Visual Studio](https://msdn.microsoft.com/en-us/library/ms165410.aspx).

If you have customized the output directory, and you have added a custom `<files>` element to your custom nuspec file, the paths you specify must be relative to the nuspec file's location. This means that for the binary files built by the project, you will have to use some combination of `..\`-style prefixes to refer to the assemblies.

For Windows Service or Console applications, and many Windows Forms or WPF applications, the build output directory contains everything you need to deploy your application. The example below shows a Windows Service called `OctoFX.RateService.exe` and all files required to run the application, including libraries and configuration files.

:::figure
![An example of a Windows Service package](/docs/img/packaging-applications/create-packages/octopack/images/sample-package.png)
:::

## Include web application content files

Web applications require additional files to run, such as Razor/ASPX files, configuration files, and assets such as images, CSS, and JavaScript files. OctoPack automatically determines whether a project is a web application or not based on whether it finds a `web.config` file. When packaging a web application, OctoPack will automatically include the `bin` folder and any files configured with `Build Action: Content`.
You can see **Build Action** in the Solution Explorer properties window for the currently selected file in Visual Studio: :::figure ![](/docs/img/packaging-applications/create-packages/octopack/images/build-action.png) ::: The example below shows a web app called **OctoFX.TradingWebsite**. All the files required to host the web app have been packaged, including the contents of the `bin` folder and any files with **Build Action: Content**: :::figure ![Sample Package of a Web App](/docs/img/packaging-applications/create-packages/octopack/images/sample-web-app-package.png) ::: ### .NET configuration transformation OctoPack won't run web.config transformation files, because these will be run as [part of the deployment instead](/docs/projects/steps/configuration-features/configuration-transforms). Make sure you set **Build Action: Content** for your .NET configuration transform files (like `web.Release.config`) to ensure these files are packaged and used as part of your deployment. ### .NET XML configuration transforms You can use [.NET XML configuration transforms](/docs/projects/steps/configuration-features/xml-configuration-variables-feature/) on any XML files including the `app.config` file for Windows Service, Console, Windows Forms, or WPF applications. Make sure the transform files are copied to the build output directory as part of your build, and they will be packaged by OctoPack so you can use them as part of your [deployment](/docs/projects/steps/configuration-features). ## Include additional files using copy to output directory If you need to include other files in your package for deployment, you can use the Visual Studio properties panel to set the `Copy to Output Directory` attribute to `Copy if newer` or `Copy always`. These files will be copied to the build output directory when the project builds, and subsequently packaged by OctoPack. ## Version numbers {#version-numbers} NuGet packages have version numbers. 
When you use OctoPack, the NuGet package version number will come from (in order of priority):

1. The command line, if you pass `/p:OctoPackPackageVersion=<version>` as an MSBuild parameter when building your project.
2. If the assembly contains a `GitVersionInformation` type, the field `GitVersionInformation.NuGetVersion` is used.
3. If you pass `/p:OctoPackUseProductVersion=true` as an MSBuild parameter, `[assembly: AssemblyInformationalVersion]` (AKA the assembly's product version) is used.
4. If you pass `/p:OctoPackUseFileVersion=true` as an MSBuild parameter, `[assembly: AssemblyFileVersion]` (AKA the assembly's file version) is used.
5. If the `[assembly: AssemblyInformationalVersion]` value is not valid, the `[assembly: AssemblyFileVersion]` is used.
6. If the `[assembly: AssemblyFileVersion]` is the same as the `[assembly: AssemblyInformationalVersion]` (AKA ProductVersion), then we'll use the `[assembly: AssemblyVersion]` attribute in your `AssemblyInfo.cs` file.
7. Otherwise, we take the `[assembly: AssemblyInformationalVersion]`.

During the build, messages are output at the `Normal` MSBuild logging level, which may help diagnose version retrieval problems.

### Version numbers are preserved as-is

NuGet 3 started removing leading zeros and the fourth digit if it's zero. These are affectionately known as "NuGet zero quirks" and can be surprising when working with tooling outside the NuGet ecosystem. We have made a choice to preserve the version as-is when working with Octopus tooling to create packages of any kind. Learn more about [versioning in Octopus Deploy](/docs/packaging-applications/create-packages/versioning). To make this work for NuGet packages we have forked NuGet.
The fork of NuGet 3 is available here: https://github.com/OctopusDeploy/NuGet.Client

The packages are available here: https://octopus.myget.org/feed/octopus-dependencies/package/nuget/NuGet.CommandLine

## Add release notes

NuSpec files can contain release notes, which show up on the Octopus Deploy release page. OctoPack can add these notes to your NuGet package if you pass a path to a file containing the notes. For example:

```powershell
msbuild MySolution.sln /t:Build /p:RunOctoPack=true /p:OctoPackReleaseNotesFile=..\ReleaseNotes.txt
```

Note that the file path should always be relative to the C#/VB project file, not the solution file.

## Publish your package

To publish your package to a NuGet feed, you can optionally use some extra MSBuild properties:

- `/p:OctoPackPublishPackageToFileShare=C:\MyPackages`: copies the package to the path given.
- `/p:OctoPackPublishPackageToHttp=http://my-nuget-server/api/v2/package`: pushes the package to the NuGet server.
- `/p:OctoPackPublishApiKey=YOUR-KEY`: API key to use when publishing.
- `/p:OctoPackAppendProjectToFeed=true`: Append the project name onto the feed so you can nest packages under folders on publish.
- `/p:OctoPackAppendToPackageId=foo`: Append the extra name to the package ID (e.g., for feature branch packages), producing `MyApp.Foo.1.2.3.nupkg`.

## Push your packages to the Octopus built-in repository

Octopus provides a [built-in package repository](/docs/packaging-applications/package-repositories/) for your deployment packages. The Octopus built-in repository is generally the best choice for deployment packages because it offers better performance and more suitable [retention policies](/docs/administration/retention-policies).

To push your packages to the Octopus built-in repository, use the following settings:

- `/p:OctoPackPublishPackageToHttp=http://your.octopusserver.com/nuget/packages`: this is the URL to your Octopus Server, noting the `/nuget/packages` path.
- `/p:OctoPackPublishApiKey=API-YOUR-KEY`: the [Octopus API key](/docs/octopus-rest-api/how-to-create-an-api-key) you want to use for pushing packages, noting [these security considerations](/docs/packaging-applications/package-repositories/built-in-repository/#security-considerations).

## Push a NuGet package that already exists

When pushing to the [built-in Octopus package repository](/docs/packaging-applications/package-repositories/) using [OctoPack](/docs/packaging-applications/create-packages/octopack) or [NuGet.exe](https://docs.microsoft.com/en-us/nuget/tools/nuget-exe-cli-reference), the default URL looks like this:

`http://MyOctopusServer/nuget/packages`

If a package with the same version already exists, the server will usually reject it with a 400 error. This is because each time you change an application, you should produce a new version of each NuGet package. Usually, customers set up their CI builds to automatically increment the package version number (e.g., 1.1.1, 1.1.2, 1.1.3, and so on).

Sometimes, however, the package version number doesn't change. This can happen if you are building a solution containing many projects, and only one project has changed. If this is the case, you can modify the URL to include a `?replace=true` parameter like this:

`http://MyOctopusServer/nuget/packages?replace=true`

This will force the Octopus Server to replace the existing NuGet package with the new version you have pushed. It works exactly the same as the check-box on the package upload pane:

:::figure
![](/docs/img/packaging-applications/create-packages/octopack/images/existing-package.png)
:::

## All supported parameters

In addition to the common arguments above, OctoPack has a number of other parameters. The full list is documented in the table below.
| Parameter | Example value | Description |
| -------------------------------------- | --------------------------------------- | ---------------------------------------- |
| `RunOctoPack` | `True` | Set to `True` for OctoPack to run and create packages during the build. Default: OctoPack won't run. |
| `OctoPackPackageVersion` | `1.0.0` | Version number of the NuGet package. By default, OctoPack gets the version from your assembly version attributes. Set this parameter to use an explicit version number. |
| `OctoPackAppConfigFileOverride` | `Foo.config` | When packaging a project called YourApp, containing a file named `App.config`, OctoPack will automatically ignore it, and instead look for `YourApp.exe.config`. Provide this setting to have OctoPack select your specified config file, instead. |
| `OctoPackAppendToPackageId` | `Release` | A fragment that will be appended to the NuGet package ID, allowing you to create different NuGet packages depending on the build configuration. E.g., if the ID element in the NuSpec is set to "`MyApp`", and this parameter is set to "`Release`", the final package ID will be "`MyApp.Release`". |
| `OctoPackAppendToVersion` | `beta025` | Define a pre-release tag to be appended to the end of your package version. |
| `OctoPackEnforceAddingFiles` | `True` | By default, when your NuSpec file has a `<files>` element, OctoPack won't automatically add any of the other files that it would usually add to the package. Set this parameter to `true` to force OctoPack to add all the files it would normally add. |
| `OctoPackIgnoreNonRootScripts` | `True` | Octopus Deploy only calls `Deploy.ps1` files etc., that are at the root of the NuGet package. If your project emits `Deploy.ps1` files that are not at the root, OctoPack will usually warn you when packaging these. Set this parameter to `true` to suppress the warning. |
| `OctoPackIncludeTypeScriptSourceFiles` | `True` | If your project has TypeScript files, OctoPack will usually package the corresponding `.js` file produced by the TypeScript compiler, instead of the `.ts` file. Set this parameter to `true` to force OctoPack to package the `.ts` file instead. |
| `OctoPackNuGetArguments` | `-NoDefaultExcludes` | Use this parameter to specify additional command line parameters that will be passed to `NuGet.exe pack`. See the [NuGet pack command description](http://docs.nuget.org/docs/reference/command-line-reference#Pack_Command). |
| `OctoPackNuGetExePath` | `C:\Tools\NuGet.exe` | OctoPack comes with a bundled version of `NuGet.exe`. Use this parameter to force OctoPack to use a different `NuGet.exe` instead. |
| `OctoPackNuGetProperties` | `foo=bar;baz=bing` | If you use replacement tokens in your NuSpec file (e.g., `$foo$`, `$bar$`, `$version$`, etc.), this parameter allows you to set the value for those tokens. See the section above on replacement tokens, and see the [NuSpec reference for details on replacement tokens](http://docs.nuget.org/docs/reference/nuspec-reference#Replacement_Tokens). |
| `OctoPackNuGetPushProperties` | `-Timeout 500` | Additional arguments that will be passed to `NuGet.exe push` if you are pushing to an HTTP/HTTPS NuGet repository. See the [NuGet push command description](http://docs.nuget.org/docs/reference/command-line-reference#Push_Command). |
| `OctoPackNuSpecFileName` | `MyApp.nuspec` | The NuSpec file to use. Defaults to `<project name>.nuspec`. If the file doesn't exist, OctoPack generates a NuSpec based on your project metadata. |
| `OctoPackPublishApiKey` | `API-YOUR-KEY` | Your API key to use when publishing to a HTTP/HTTPS based NuGet repository |
| `OctoPackPublishPackagesToTeamCity` | `False` | By default, if OctoPack detects that the build is running under TeamCity, the NuGet package that is produced is registered as an artifact in TeamCity. Use this parameter to suppress this behavior. |
| `OctoPackPublishPackageToFileShare` | `\\server\packages` | OctoPack can publish packages to a file share or local directory after packaging |
| `OctoPackPublishPackageToHttp` | `http://my-nuget-server/api/v2/package` | OctoPack can publish packages to a HTTP/HTTPS NuGet repository (or the [Octopus built-in repository](/docs/packaging-applications/package-repositories)) after packaging. |
| `OctoPackReleaseNotesFile` | `my-release-notes.txt` | Use this parameter to have the package release notes read from a file. |
| `OctoPackProjectName` | `YourProjectName` | Use this parameter to override the name of your package, so it's not necessarily identical to your Visual Studio Project. This will only work when building a single Project/Package. For multiple projects, do not use this parameter; instead, set the following property in each project's csproj file: `<OctoPackProjectName>Foo</OctoPackProjectName>` |
| `OctoPackUseFileVersion` | `true` | Use this parameter to use `[assembly: AssemblyFileVersion]` (Assembly File Version) as the package version (see [version numbers](#version-numbers)) |
| `OctoPackUseProductVersion` | `true` | Use this parameter to use `[assembly: AssemblyInformationalVersion]` (Assembly Product Version) as the package version (see [version numbers](#version-numbers)). |
| `OctoPackAppendProjectToFeed` | `true` | Append the project name onto the feed so you can nest packages under folders on publish |

*Note: `OctoPackUseProductVersion` was introduced in OctoPack `3.5.0`.*

## Learn more

- Use [OctoPack to include BuildEvent files](/docs/packaging-applications/create-packages/octopack/octopack-to-include-buildevent-files)
- [Troubleshooting OctoPack](/docs/packaging-applications/create-packages/octopack/troubleshooting-octopack)
- [Package deployments](/docs/deployments/packages)

# Cloudsmith Multi-format repositories

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/cloudsmith-feed.md

[Cloudsmith](https://www.cloudsmith.com) is a fully managed package-management-as-a-service platform that securely hosts all of your packages, in any format you need (including NuGet, Helm, Docker, Maven, and NPM), in one location accessible across your organization.

:::div{.hint}
All Cloudsmith repositories are [multi-format](https://www.youtube.com/watch?v=Wgn-zJ8R3fg). This means you can mix and match different package types in one repository. A NuGet package can sit beside a Maven package, a Docker image, or an NPM package.
:::

## Create a Cloudsmith Organization {#create-organization}

Before setting up a Cloudsmith repository, you should create an [Organization](https://help.cloudsmith.io/docs/organisations) and invite others to join it. Creating an Organization in Cloudsmith gives you the ability to configure and manage access for teams, individuals, and machines that map to your company's organizational structure.

You can create an Organization by clicking on the **+** dropdown on the top menu bar and selecting **New Organization**.

:::figure
![create a new organization](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-new-org.png)
:::

This will take you to the **Create Organization** form.
You are required to enter a name for your Organization and a primary email address before creating your Organization (the organization name is checked to ensure it's unique before creating it). :::figure ![create a new organization](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-create-org.png) ::: Once you have created the Organization, the next step is to create a repository. For instructions on how to configure the settings for the Organization, including how to create teams and invite users, refer to the [Cloudsmith documentation](https://help.cloudsmith.io/docs/organisations). ## Create a Cloudsmith Repository {#create-repo} You can create a new repository in three ways: - Via the Cloudsmith CLI - Via the Website UI - Via the Cloudsmith API For this guide we will create a repository via the Cloudsmith Website UI. To create a repository via the CLI or API, see the [Cloudsmith documentation](https://help.cloudsmith.io/docs/create-a-repository). ### Create a repository via the Website UI {#create-repo-via-ui} You can create a repository by clicking on the **+** dropdown on the top menu bar and selecting **New Repository**. :::figure ![create a new repository](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-new-repo.png) ::: That will take you to the **Create Package Repository** form: :::figure ![create a new repository](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-create-repo.png) ::: Here you can create a new repo by selecting a Repository Owner (the Organization you want the repo to live under) and a name. You can also specify an optional *slug* (identifier) for the repository. The slug is what will appear in the URL for the repository. The identifier can only contain lowercase alphanumeric characters, hyphens, and underscores. If you don't specify an identifier, one will be automatically generated from the repository name for you. 
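As a rough illustration of that constraint (the real slug-generation rule is Cloudsmith's own; this sketch is an assumption), lowercasing the repository name and replacing spaces yields a valid identifier:

```shell
# Hypothetical sketch: deriving a URL-safe slug from a repository name.
# Slugs may only contain lowercase alphanumerics, hyphens, and underscores;
# the exact rule Cloudsmith applies is not documented here.
NAME="My Test Repo"
echo "$NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
# prints: my-test-repo
```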
The Storage Region allows you to choose a geographic region for the repository (see [Custom Storage Regions](https://help.cloudsmith.io/docs/custom-storage-regions) for further details). Then you need to select the repository type: Public, Private, or Open-Source.

## Upload your package to Cloudsmith {#upload-package-to-cloudsmith}

Cloudsmith provides three ways to push your packages/files/assets into your repositories:

- Upload via the package-specific native CLI / tools (where supported).
- Upload via the API using tools/integrations, such as the official Cloudsmith CLI.
- Upload directly via the website.

Documentation for package-specific native CLI and tooling is available on the website within each repository. For example, after selecting `NuGet` as the package format to upload, a new form will pop up. Click the **upload setup documentation** link and the following documentation is available:

:::figure
![contextual documentation for uploading NuGet packages](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-new-package-native.png)
:::

The next section gives an overview of uploading your package using the package-specific native CLI for NuGet, Docker, and Maven. For Helm, we will use the Cloudsmith CLI. See the [Cloudsmith supported formats documentation](https://help.cloudsmith.io/docs/supported-formats) for more information.

The commands included in this section should be entered into a command line shell, and it's assumed they are run in the same directory as your package.
We will use this terminology in the following examples:

| Identifier | Description |
|------------|-----------------------------------------------------------------------|
| OWNER | Your Cloudsmith account name or organization name (namespace) |
| REGISTRY | Your Cloudsmith Repository name (also called the *slug*) |
| TOKEN | Your Cloudsmith Entitlement Token (see Entitlements for more details) |
| USERNAME | Your Cloudsmith username |
| PASSWORD | Your Cloudsmith password |
| API-KEY | Your Cloudsmith API Key |
| IMAGE_NAME | The name of your Docker image |
| TAG | A tag for your Docker image |

### Install the Cloudsmith CLI tool {#install-cloudsmith-cli}

Uploading a Helm package requires the Cloudsmith CLI to be installed. To install it, follow the [Cloudsmith CLI installation instructions](https://help.cloudsmith.io/docs/cli).

### Generate Package {#generate-package}

Before you can upload, you first need to generate a package. You can do this with one of the following commands (click on the Tab that matches your package-specific CLI):
NuGet ```shell nuget pack ```
Maven ```shell mvn package ```
Helm ```shell helm package . ```
Docker ```shell docker save -o your-image.docker your-image:latest ```
### Add Cloudsmith as a Source {#add-cloudsmith-source} Once you have generated a package, you need to add Cloudsmith as a Source in one of the following ways:
NuGet ```shell nuget sources add -Name example-repo -Source https://nuget.cloudsmith.io/OWNER/REPOSITORY/v3/index.json ```
Maven

```xml
<!-- The distribution repositories define where to push your artifacts. -->
<!-- In this case it will be a single repository, but you can configure alternatives. -->
<!-- Add the following to your project pom.xml file: -->
<distributionManagement>
  <snapshotRepository>
    <id>NAME</id>
    <url>https://maven.cloudsmith.io/OWNER/REPOSITORY/</url>
  </snapshotRepository>
  <repository>
    <id>NAME</id>
    <url>https://maven.cloudsmith.io/OWNER/REPOSITORY/</url>
  </repository>
</distributionManagement>

<!-- You must also configure your ~/.m2/settings.xml file with the API key of the uploading user: -->
<servers>
  <server>
    <id>NAME</id>
    <username>USERNAME</username>
    <password>API-KEY</password>
  </server>
</servers>
```
Docker ```shell docker login docker.cloudsmith.io # You will be prompted for your Username and Password. # Enter your Cloudsmith username and your Cloudsmith API Key. ```
:::div{.hint} **Note:** There are no steps required to add Cloudsmith as a Source for Helm. ::: ### Publish Package {#publish-package} Finally, you can publish (or upload) your package to Cloudsmith using one of the following commands:
NuGet ```shell nuget push PACKAGE_NAME-PACKAGE_VERSION.nupkg -Source example-repo -ApiKey API-KEY ```
Maven ```shell mvn deploy ```
Helm ```shell # The command to upload a Helm chart via the Cloudsmith CLI is: cloudsmith push helm OWNER/REPOSITORY CHART_NAME-CHART_VERSION.tgz ```
Docker

```shell
# To publish an image to a Cloudsmith-based Docker registry, you first need to tag your image:
docker tag IMAGE_NAME:TAG docker.cloudsmith.io/OWNER/REGISTRY/IMAGE_NAME:TAG

# You can then publish the tagged image using docker push:
docker push docker.cloudsmith.io/OWNER/REGISTRY/IMAGE_NAME:TAG
```
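To make the naming explicit, the fully qualified image name used by `docker tag` and `docker push` above is just the registry host plus the identifiers from the terminology table (the values below are hypothetical):

```shell
# Hypothetical values for the identifiers defined in the terminology table.
OWNER="acme"; REGISTRY="dev"; IMAGE_NAME="guestbook"; TAG="1.0.0"
# The fully qualified name that docker tag / docker push operate on:
echo "docker.cloudsmith.io/$OWNER/$REGISTRY/$IMAGE_NAME:$TAG"
# prints: docker.cloudsmith.io/acme/dev/guestbook:1.0.0
```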
## Adding Cloudsmith as an External Feed to Octopus {#add-cloudsmith-feed-to-octopus} Now that we have created our repository we can add our Cloudsmith repository as an external feed in our Octopus instance. From the Octopus Web Portal, create a new external feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and selecting **ADD FEED**: - Select the Feed type (NuGet, Helm, Docker, Maven), - Give the feed a name and in the URL field, enter the HTTP/HTTPS URL of your Cloudsmith repository. Refer to the [URLs for Feeds](#urls-for-feeds) section for more information. - Populate the credentials of your Cloudsmith repository if necessary. Refer to the [Adding Credentials for Private Repositories](#credentials-for-private-repos) section for more information. :::figure ![Select your Feed Type](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-octopus1.png) ::: ## URLs for Feeds {#urls-for-feeds} This section contains information about what Cloudsmith feed URL to use for your specific package. ### NuGet {#cloudsmith-nuget} Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and selecting the *NuGet* Feed type. :::figure ![NuGet Feed Type](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-octopus2.png) ::: - Give the NuGet feed a name - Enter the HTTP/HTTPS URL of the feed for your Cloudsmith NuGet repository using the version of NuGet that matches your configuration: NuGet V2: ``` https://nuget.cloudsmith.io/OWNER/REPOSITORY/v2 ``` NuGet V3: ``` https://nuget.cloudsmith.io/OWNER/REPOSITORY/v3/index.json ``` :::div{.hint} Private repositories require authentication. Refer to the [Adding Credentials for Private Repositories](#credentials-for-private-repos) section for more information on how to add your credentials ::: ### Docker {#cloudsmith-docker} Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and selecting the *Docker Container Registry* Feed type. 
![Docker Feed Type](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-octopus3.png) - Give the Docker feed a name - Enter the HTTP/HTTPS URL of the feed for your Cloudsmith Docker repository in the following format: `https://docker.cloudsmith.io/v2/OWNER/REGISTRY/` :::div{.hint} Private repositories require authentication. Refer to the [Adding Credentials for Private Repositories](#credentials-for-private-repos) section for more information on how to add your credentials ::: ### Maven {#cloudsmith-maven} Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and selecting the *Maven* Feed type. ![Maven Feed Type](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-octopus4.png) - Give the feed a name - Enter the HTTP/HTTPS URL of the feed for your Cloudsmith Maven repository from the options below that match your configuration: - Public URL with no authentication: `https://dl.cloudsmith.io/public/OWNER/REPOSITORY/maven/` - Entitlement Token Authentication: `https://dl.cloudsmith.io/TOKEN/OWNER/REPOSITORY/maven/` - HTTP Basic Authentication: `https://dl.cloudsmith.io/basic/OWNER/REPOSITORY/maven/` :::div{.hint} Private repositories require authentication. Refer to the [Adding Credentials for Private Repositories](#credentials-for-private-repos) section for more information on how to add your credentials ::: ### Helm {#cloudsmith-helm} Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and selecting the *Helm* Feed type. 
:::figure ![Helm Feed Type](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-octopus5.png) ::: - Give the feed a name - Enter the HTTP/HTTPS URL of the feed for your Cloudsmith Helm repository from the options below that match your configuration: - Public URL with no authentication: `https://dl.cloudsmith.io/public/OWNER/REPOSITORY/helm/charts/` - Entitlement Token Authentication: `https://dl.cloudsmith.io/TOKEN/OWNER/REPOSITORY/helm/charts/` - HTTP Basic Authentication: `https://dl.cloudsmith.io/basic/OWNER/REPOSITORY/helm/charts/` :::div{.hint} Private repositories require authentication. Refer to the [Adding Credentials for Private Repositories](#credentials-for-private-repos) section for more information on how to add your credentials ::: ## Adding Credentials for Private Repositories {#credentials-for-private-repos} Private Cloudsmith repositories require authentication. If you used a token in the URL then you do not need to add additional credentials. You can choose between two types of authentication: - Entitlement Token Authentication - HTTP Basic Authentication. The setup method will differ depending on what authentication type you choose to use. :::div{.warning} **Securing credentials:** Entitlement Tokens, User Credentials and API-Keys should be treated as secrets and should be stored in a secure location, such as a Password Manager. You should avoid committing them into source control or exposing them in configuration and log files. ::: When you are adding or editing your external feed, you can add credentials for your feed by populating the *Credentials* section. 
:::figure ![Credentials for your external feed](/docs/img/packaging-applications/package-repositories/guides/images/cloudsmith-octopus6.png) ::: Provide one of the following three types of credentials: - Cloudsmith Basic Authentication using your Username and Password - Cloudsmith API Key - An Entitlement Token These will be populated in the Credentials section of the Octopus External Feed. :::div{.hint} For more information about credentials refer to the [Cloudsmith documentation](https://help.cloudsmith.io/docs/docker-registry#private-registries). ::: ### Basic Authentication {#credentials-basic-auth} For Basic authentication, add your Username and Password into the External feed: - Feed username: `USERNAME` - Feed password: `PASSWORD` ### API Key {#credentials-api-key} For API Key authentication, add your Username and an API Key into the External feed: - Feed username: `USERNAME` - Feed password: `API-KEY` ### Entitlement Token {#credentials-entitlement-token} For Entitlement Token authentication, populate the credentials with the word `token` for the username, and the token value for the password: - Feed username: the word `token` - Feed password: `TOKEN` # Azure Container Registry Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/container-registries/azure-container-services.md Microsoft Azure provides a Docker image registry known as [Azure Container Registry](https://azure.microsoft.com/en-au/services/container-registry/). ## Configuring an Azure Container Registry Select **Azure Container Registry** from the Azure marketplace and select **create** to create a new registry. Provide the unique registry name that all your repositories (packages) will be stored in. Make sure you select **Enable** under the **Admin user** option. This is what will expose the credentials that are needed by Octopus to connect to the API. 
:::figure
![Azure Container Services Access Key blade](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/azure-blade.png)
:::

Azure Container Registries can be configured as an external feed in Octopus by navigating to **Deploy ➜ Manage ➜ External Feeds** and adding a new feed of type `Azure Container Registry`.

Once the service has been provisioned, go to the Container Registry details and load the **Access Key** blade. The login server indicates the HTTPS URL that needs to be supplied in the Octopus registry feed. In the case above this is `https://myoctoregistry-on.azurecr.io`. With the **Admin user** toggle enabled, you will be provided with username and password credentials; you will need these when you create the Octopus Deploy feed. The password can be regenerated at any time, so long as you keep your Octopus instance updated with the new credentials.

## Adding an Azure Container Registry as an Octopus External Feed

Create a new Octopus Feed (**Deploy ➜ Manage ➜ External Feeds**) and select the `Azure Container Registry` Feed type. With this selected, you will need to provide the username and password credentials configured above.

:::figure
![Azure Container Services Registry Feed](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/azure-feed.png)
:::

Save and test your registry to ensure that the connection is authorized successfully.

## Adding an Azure Container Registry with OpenID Connect as an Octopus External Feed

Octopus Server `2025.2` adds support for OpenID Connect to ACR feeds. To use OpenID Connect authentication, follow the [required minimum configuration](/docs/infrastructure/accounts/openid-connect#configuration). Before creating an OpenID Connect Azure Container Registry feed, you will need a Microsoft Entra ID App Registration and a Federated Credential.
If you do not currently have a Microsoft Entra ID App Registration, follow the [App Registration](https://oc.to/create-azure-app-registration) guide. To manually create a Federated Credential, follow the [Add a federated credential](https://oc.to/create-azure-federated-credentials) section in the Microsoft Entra ID documentation.

The federated credential will need the **Issuer** value set to the configured Octopus Server URI. This URI must be publicly available and the value must not have a trailing slash (/). For example `https://samples.octopus.app`. For more information on configuring external identity providers, see [Configure an app to trust an external identity provider](https://oc.to/configure-azure-identity-providers).

Create a new Octopus Feed (**Deploy ➜ Manage ➜ External Feeds**) and select the `Azure Container Registry` Feed type. With this selected, you will need to choose OpenID Connect as the authentication type. Add the following properties to the feed credentials:

- **Client ID:** *{{The Azure Active Directory Application ID (Client ID)}}*
- **Tenant ID:** *{{The Azure Active Directory Tenant ID}}*
- **Subject:** *Please read [OpenID Connect Subject Identifier](/docs/infrastructure/accounts/openid-connect#subject-keys) for how to customize the **Subject** value*
- **Audience:** *{{The audience set on the Federated Credential}}* *This can be set to the default of* `api://AzureADTokenExchange` *or a custom value if needed*

:::div{.warning}
At this time, OpenID Connect external feeds are not supported for use with Kubernetes containers. This is because the short-lived credentials they generate are not suitable for long-running workloads.
:::

# Artifactory Maven repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/maven-repositories/artifactory-maven-feed.md

Artifactory provides support for a number of [Maven repositories](https://jfrog.com/help/r/jfrog-artifactory-documentation/maven-repository) including Local, Remote and Virtual repositories. An Artifactory Local repository can be configured in Octopus as an external [Maven feed](/docs/packaging-applications/package-repositories/maven-feeds).

## Configuring an Artifactory Local Maven repository

:::div{.hint}
This guide was written using Artifactory version `7.11.5`.
:::

From the Artifactory web portal, navigate to **Administration ➜ Repositories**. From there, choose **Add Repositories ➜ Local Repository**:

:::figure
![Artifactory repositories addition](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/artifactory-local-maven-repo-add.png)
:::

From the Package Type selection screen, choose **Maven**:

:::figure
![Artifactory local repository](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/artifactory-select-maven-repository.png)
:::

Give the repository a name in the **Repository Key** field, and fill out any other settings for the repository.

:::figure
![Artifactory local repository settings](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/artifactory-local-maven-repo-configure.png)
:::

When you've entered all settings, click **Save & Finish**.

### Configure repository authentication

With the repository configured, the next step is to configure access so Octopus can retrieve package information. The recommended way is to either configure a [user](https://jfrog.com/help/r/jfrog-platform-administration-documentation/manage-users) with sufficient permissions, or use an [access token](https://jfrog.com/help/r/jfrog-platform-administration-documentation/access-tokens).
This user is the account which Octopus will use to authenticate with Artifactory. :::div{.warning} Every organization is different and the authentication example provided here is only intended to demonstrate functionality. Ensure you are complying with your company's security policies when you configure any user accounts and that your specific implementation matches your needs. ::: From the Artifactory web portal, navigate to **Administration ➜ Identity and Access ➜ Users** and select **New User**. :::figure ![Artifactory Add user](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-user.png) ::: Fill out the **Username**, **Email Address**, **Password** and any other settings. :::div{.hint} If you have an existing group to add the user to, you can do that here. Alternatively you can add the user account when creating a new group. ::: When you've entered all settings, click **Save**. Next, we need to ensure the user is in a [group](https://jfrog.com/help/r/jfrog-platform-administration-documentation/manage-groups) which can access our new repository. From the Artifactory web portal, navigate to **Administration ➜ Identity and Access ➜ Groups** and select **New Group**. :::figure ![Artifactory Add Group](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-group.png) ::: Fill out the **Group Name** and any other settings. Ensure the user you created earlier is included in the group (in the right hand column). When you've entered all settings, click **Save**. Lastly, we need to ensure the group has [permissions](https://jfrog.com/help/r/jfrog-platform-administration-documentation/permissions) for Octopus to retrieve package information. From the Artifactory web portal, navigate to **Administration ➜ Identity and Access ➜ Permissions** and select **New Permission**. 
From there, give the permission a **Name**, and choose the **Add Repositories** option: :::figure ![Artifactory add permission](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission.png) ::: From the repository selection screen, choose the newly created repository so that it's in the **Included Repository** column and click **OK**: :::figure ![Artifactory add permission repository](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission-repo.png) ::: Next, switch to the **Groups** tab, and add a new group from **Selected Groups**: :::figure ![Artifactory add permission group](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission-add-group.png) ::: From the groups selection screen, choose the newly created group, or an existing group so that it's in the **Included Group** column and click **OK**. :::figure ![Artifactory permissions include group](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission-include-group.png) ::: Finally, choose the permissions to grant the group on the included repositories: :::figure ![Artifactory repository permissions](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/artifactory-local-nuget-add-permission-repo-permissions.png) ::: :::div{.hint} Octopus needs `Read` permissions as a minimum on the Local repository in order to search and download packages. ::: When you've entered all settings, review your permissions are configured how you want, and click **Create**. :::div{.hint} You can also choose individual users to assign this permission to. 
:::

### Anonymous authentication

An alternative to configuring a user is to enable [anonymous access](https://jfrog.com/help/r/jfrog-artifactory-documentation/anonymous-access-to-nuget-repositories) on the repository.

## Adding an Artifactory Local Maven repository as an Octopus External Feed

Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and select the `Maven Feed` Feed type.

Give the feed a name and in the URL field, enter the HTTP/HTTPS URL of the feed for your Artifactory Local repository in the format:

`https://your.artifactory.url:port/artifactory/local-maven-repo`

Replace the URL and port with those of your Artifactory instance. In addition, replace `local-maven-repo` with the name of your Local Maven repository.

:::figure
![Artifactory Local Maven feed](/docs/img/packaging-applications/package-repositories/guides/maven-repositories/images/artifactory-external-feed.png)
:::

Save and test your feed to ensure that the connection is authenticated successfully.

# GitLab NuGet repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/nuget-repositories/gitlab-nuget-feed.md

GitLab creates a NuGet Registry for each Project or Group. To add the NuGet Registry to Octopus Deploy as an external feed, you will first need to get the Project or Group ID.

**Project Id:**

:::figure
![GitLab Project Id](/docs/img/packaging-applications/package-repositories/guides/images/gitlab-project-id.png)
:::

**Group Id:**

:::figure
![GitLab Group Id](/docs/img/packaging-applications/package-repositories/guides/images/gitlab-group-id.png)
:::

## Adding a GitLab NuGet repository as an Octopus External Feed

Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and select the `NuGet Feed` Feed type.
Give the feed a name and in the URL field, enter the HTTP/HTTPS URL of the feed for your GitLab Project or Group in the format:

Project: `https://your.gitlab.url/api/v4/projects/[project id]/packages/nuget/index.json`

Group: `https://your.gitlab.url/api/v4/groups/[group id]/-/packages/nuget/index.json`

Replace the URL with that of your GitLab instance, and substitute `[project id]` or `[group id]` with the ID you noted earlier.

:::figure
![GitLab NuGet Feed](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/gitlab-octopus-add-nuget-feed.png)
:::

Optionally add Credentials if they are required.

# Maven feeds

Source: https://octopus.com/docs/packaging-applications/package-repositories/maven-feeds.md

Maven repositories can be configured as an external feed, allowing the artifacts contained in the repository to be consumed by the Octopus steps that deploy Java packages, as well as the generic **Transfer a package** step.

## Adding an external Maven feed

The following steps can be followed to add an external Maven feed.

1. Select **Deploy ➜ Manage ➜ External Feeds** and click the **ADD FEED** button.
2. Select **Maven Feed** from the **Feed Type** field.
3. Enter the name of the feed in the **Feed name** field.
4. In the **Feed url** field, enter the URL of the Maven feed. The URL must point to the directory under which the initial directories that make up the Maven artifact group ids are found. For example, for the Maven central repo the URL is `https://repo.maven.apache.org/maven2` and the Sonatype Snapshot repo URL is `https://oss.sonatype.org/content/repositories/snapshots`.
5. If the Maven repository is password protected, the credentials can be entered into the **Feed login** and **Feed password** fields.
6. The **Download attempts** field defines the number of times that Octopus will attempt to download an artifact from a Maven repository. Failed attempts will wait for the number of seconds defined in the **Download retry backoff (seconds)** field before attempting to download the artifact again.
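The steps above can also be performed through the Octopus REST API. The sketch below builds (but does not send) the request. This is an illustration only: the field names (`FeedType`, `FeedUri`, `DownloadAttempts`, `DownloadRetryBackoffSeconds`) mirror the feed resource the UI edits, and the server URL, space ID, and API key are placeholders; verify the exact payload shape against `/api/feeds` on your own instance before relying on it.

```python
import json
import urllib.request

# Placeholders - substitute your own server, space, and API key.
OCTOPUS_URL = "https://your-octopus.example.com"
SPACE_ID = "Spaces-1"
API_KEY = "API-XXXXXXXXXXXXXXXX"

# Assumed field names, mirroring the UI fields in the numbered steps above.
feed = {
    "Name": "Maven Central",
    "FeedType": "Maven",
    "FeedUri": "https://repo.maven.apache.org/maven2",
    "DownloadAttempts": 5,
    "DownloadRetryBackoffSeconds": 10,
}

request = urllib.request.Request(
    url=f"{OCTOPUS_URL}/api/{SPACE_ID}/feeds",
    data=json.dumps(feed).encode("utf-8"),
    headers={"X-Octopus-ApiKey": API_KEY, "Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(request) would submit it; here we only inspect it.
print(request.full_url)
print(json.loads(request.data)["FeedType"])
```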
:::div{.hint}
When configuring external Maven repositories, we need to link to the repository itself and not the services that are used to search the repositories. For example, URLs like [https://search.maven.org/](https://search.maven.org/) or [https://mvnrepository.com/](https://mvnrepository.com/) can't be entered, because these are sites for searching the repositories, not the repositories themselves.
:::

## Referencing Maven artifacts

When referencing a Maven artifact, the package ID is in the format `group:artifact`. For example, to reference the Maven artifact with the group of `org.wildfly.swarm.testsuite` and artifact of `testsuite-https` (i.e. the artifacts found at https://repo.maven.apache.org/maven2/org/wildfly/swarm/testsuite/testsuite-https/), you would enter a package ID of `org.wildfly.swarm.testsuite:testsuite-https`.

:::figure
![Maven Artifact Names](/docs/img/packaging-applications/package-repositories/images/maven-artifact-names.png)
:::

Prior to 2020.3.0, the packaging type was determined automatically from the extensions supported by Octopus, which are:

* zip
* jar
* ear
* rar
* war

So the package ID `org.wildfly.swarm.testsuite:testsuite-https` for version `2017.10.0` would download the WAR file https://repo.maven.apache.org/maven2/org/wildfly/swarm/testsuite/testsuite-https/2017.10.0/testsuite-https-2017.10.0.war.

Since 2020.3.0, Maven artifacts can be specified with an optional packaging and classifier. For example, the artifact ID of `org.example:my-artifact:zip` will select the ZIP package with the group `org.example` and the artifact ID of `my-artifact`, and `org.example:my-artifact:jar:sources` will select the JAR package with the `sources` classifier. The packaging must be defined when using a classifier. If no packaging selection is specified, the first matching package is selected from the list of extensions above.
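The mapping from a package ID and version to an artifact path described above can be sketched as a small helper. This is an illustration, not Octopus's implementation: when no packaging is given it assumes `jar`, whereas Octopus probes its supported extensions in order.

```python
def maven_artifact_path(package_id: str, version: str) -> str:
    """Translate a package ID (group:artifact[:packaging[:classifier]]) into the
    repository path of the artifact it selects.

    Simplified sketch: defaults to 'jar' when no packaging is given, whereas
    Octopus probes its supported extensions (zip, jar, ear, rar, war) in order.
    """
    parts = package_id.split(":")
    if len(parts) < 2 or len(parts) > 4:
        raise ValueError("expected group:artifact[:packaging[:classifier]]")
    group, artifact = parts[0], parts[1]
    packaging = parts[2] if len(parts) >= 3 else "jar"
    classifier = f"-{parts[3]}" if len(parts) == 4 else ""
    group_path = group.replace(".", "/")  # group ids map to directories
    return f"{group_path}/{artifact}/{version}/{artifact}-{version}{classifier}.{packaging}"

# The WAR example from the text, with the packaging stated explicitly:
print(maven_artifact_path("org.wildfly.swarm.testsuite:testsuite-https:war", "2017.10.0"))
# -> org/wildfly/swarm/testsuite/testsuite-https/2017.10.0/testsuite-https-2017.10.0.war

# A sources JAR, as in org.example:my-artifact:jar:sources:
print(maven_artifact_path("org.example:my-artifact:jar:sources", "1.0.0"))
# -> org/example/my-artifact/1.0.0/my-artifact-1.0.0-sources.jar
```

Appending such a path to the feed URL (e.g. `https://repo.maven.apache.org/maven2/`) yields the full artifact URL shown above.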
## Searching for Maven artifacts

As Maven repositories do not expose a search API (repositories are just a filesystem structure), there is no way to search them in Octopus the way you might search a NuGet repository. The package ID for a Maven artifact must be complete for Octopus to identify it, and partial package IDs will not return a list of partial matches.

:::figure
![Maven Package Suggestion](/docs/img/packaging-applications/package-repositories/images/maven-package-suggestion.png)
:::

## Downloading SNAPSHOT releases

When downloading SNAPSHOT releases of Maven artifacts, the latest SNAPSHOT version will be downloaded. This version is then saved in the Octopus cache and reused in subsequent deployments of the same SNAPSHOT version. This means that if a new SNAPSHOT artifact is published to the Maven repository after Octopus has saved the previous SNAPSHOT artifact to its local cache, Octopus will continue to use the older, locally cached version. To force Octopus to download the newer SNAPSHOT release, select the **Re-download packages from feed** option when deploying.

:::figure
![Re-download packages from feed](/docs/img/packaging-applications/package-repositories/images/redownload-from-feed.png)
:::

## Versioning with Maven feeds

When using artifacts from a Maven feed, the [Maven versioning scheme](https://octopus.com/blog/maven-versioning-explained) is used. The following qualifiers in the version indicate that it is a pre-release version:

* `alpha` (or the `a` shorthand) e.g. `1.0.0-alpha1` or `1.0.0-a1`.
* `beta` (or the `b` shorthand) e.g. `1.0.0-beta1` or `1.0.0-b1`.
* `milestone` (or the `m` shorthand) e.g. `1.0.0-milestone1` or `1.0.0-m1`.
* `rc` or `cr` e.g. `1.0.0-rc1` or `1.0.0-cr1`.
* `SNAPSHOT` e.g. `1.0.0-SNAPSHOT`.
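The qualifiers above can be recognized with a simple check. This is an illustrative sketch, not Octopus's actual Maven version parser:

```python
import re

# Matches the pre-release qualifiers listed above: alpha/a, beta/b,
# milestone/m, rc, cr (with an optional trailing number), and SNAPSHOT.
PRE_RELEASE = re.compile(
    r"-(alpha|beta|milestone|rc|cr|snapshot|[abm])\d*$",
    re.IGNORECASE,
)

def is_prerelease(version: str) -> bool:
    return PRE_RELEASE.search(version) is not None

for v in ["1.0.0-alpha1", "1.0.0-b1", "1.0.0-m1", "1.0.0-rc1", "1.0.0-SNAPSHOT", "1.0.0"]:
    print(v, is_prerelease(v))
```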
## Version ranges with Maven feeds

When defining version ranges against artifacts sourced from a Maven feed (when defining a channel rule, for example), the [Maven range specification](https://oc.to/MavenVersioning) is used. The table below shows some common examples of Maven version ranges.

| Range | Meaning |
|-|-|
| 1.0 | x >= 1.0. The default Maven meaning for 1.0 is everything, i.e. (,), but with 1.0 recommended. That doesn't work for enforcing versions here, so it has been redefined as a minimum version. |
| (,1.0] | x <= 1.0 |
| (,1.0) | x < 1.0 |
| [1.0] | x == 1.0 |
| [1.0,) | x >= 1.0 |
| (1.0,) | x > 1.0 |
| (1.0,2.0) | 1.0 < x < 2.0 |
| [1.0,2.0] | 1.0 <= x <= 2.0 |
| (,1.0],[1.2,) | x <= 1.0 or x >= 1.2. Multiple sets are comma-separated |
| (,1.1),(1.1,) | x != 1.1 |

## Troubleshooting Maven feeds

1. Can you download the POM file directly from the Maven repository from the Octopus Server? For example, the Google Guava POM file for version 24.0-jre is https://repo.maven.apache.org/maven2/com/google/guava/guava/24.0-jre/guava-24.0-jre.pom. If you can not, then there is likely to be an issue with the URL or your network settings.
2. The Maven URL to be configured in Octopus includes the path up to the start of the group id. For Guava, the group id is `com.google.guava`. This maps to the `com/google/guava` component of the URL. So the Maven URL to be configured in Octopus is https://repo.maven.apache.org/maven2/, because this is the part of the URL that does not include the group id.
3. Maven artifacts must be referenced in the `group:artifact` format. For Guava, the group is `com.google.guava` and the artifact is `guava`. So this would be referenced in Octopus as `com.google.guava:guava`.
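The range semantics in the version-range table earlier on this page can be sketched as a small membership checker. This is a simplified illustration that handles plain numeric versions only; Octopus's real implementation follows the full Maven version ordering (qualifiers, snapshots, and so on).

```python
import re

def _key(version: str):
    # Compare plain numeric versions (e.g. "1.0", "1.2.3") component-wise.
    return [int(p) for p in version.split(".")]

def in_range(version: str, spec: str) -> bool:
    """Check a version against a Maven range such as "[1.0,2.0)" or "(,1.0],[1.2,)"."""
    v = _key(version)
    intervals = re.findall(r"[\[(][^\[\]()]*[\])]", spec)
    if not intervals:
        # A bare version like "1.0" is redefined as a minimum version (see table).
        return v >= _key(spec)
    for interval in intervals:  # comma-separated sets: match any interval
        lo_open, hi_open = interval[0] == "(", interval[-1] == ")"
        body = interval[1:-1]
        if "," not in body:
            # [1.0] means exactly that version.
            if v == _key(body):
                return True
            continue
        lo, hi = body.split(",", 1)
        ok_lo = not lo or (v > _key(lo) if lo_open else v >= _key(lo))
        ok_hi = not hi or (v < _key(hi) if hi_open else v <= _key(hi))
        if ok_lo and ok_hi:
            return True
    return False

print(in_range("1.5", "(1.0,2.0)"))      # True:  1.0 < x < 2.0
print(in_range("2.0", "[1.0,2.0]"))      # True:  1.0 <= x <= 2.0
print(in_range("1.1", "(,1.0],[1.2,)"))  # False: x <= 1.0 or x >= 1.2
print(in_range("1.1", "(,1.1),(1.1,)"))  # False: x != 1.1
```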
In addition to the built-in step templates, there are also [Community Step Templates](/docs/projects/community-step-templates/) that have been contributed by the community. You can also use the built-in step templates as the base to create [custom step templates](/docs/projects/custom-step-templates) to use across your projects. ## Adding steps to your deployment processes 1. Navigate to your [project](/docs/projects). 2. Click the **Create process** button. 3. Find the step template you need and click **Add step**. At this point, you have the choice of choosing from the built-in **Installed Step Templates** or the [Community Contributed Step Templates](/docs/projects/community-step-templates). If you're looking for example deployments, see the [deployment examples](/docs/deployments#getting-started-with-deployments). 4. Give the step a short memorable name. 5. The **Execution Location** tells the step where to run. Depending on the type of step you are configuring the options will vary: - [Worker pool](/docs/infrastructure/workers/worker-pools) - Worker pool on behalf of roles - Deployment targets 6. If you are deploying to deployment targets or running the step on the server on behalf of deployment targets, you can deploy to all targets in parallel (default) or configure a rolling deployment. To configure a rolling deployment click *Configure a rolling deployment* and specify the window size for the deployment. The window size controls how many deployment targets will be deployed to in parallel. Learn more about [rolling deployments](/docs/deployments/patterns/rolling-deployments-with-octopus). 7. The next section of the step is where you specify the actions for the step to take. If you are running a script or deploying a package, this is where you provide the details. This section will vary depending on the type of step you're configuring. 
If you're deploying packages, you'll likely need to set your [configuration variables](/docs/projects/steps/configuration-features/xml-configuration-variables-feature).
8. After providing the actions the step takes, you can set the conditions for the step. You can set the following conditions:
   - Only run the step when deploying to specific environments.
   - Only run the step when deploying a release through a specific [channel](/docs/releases/channels).
   - Set the step to run depending on the status of the previous step.
   - Set when package acquisition should occur.
   - Set whether or not the step is required.
   Learn more about [conditions](/docs/projects/steps/conditions).
9. Add additional steps.
10. Save the deployment process.

With your deployment process configured, you're ready to create a [release](/docs/releases).

# Automatic step template updates

Source: https://octopus.com/docs/projects/built-in-step-templates/automatic-updates.md

Built-in step templates that use the new "step package" format can be updated automatically to the latest versions without updating Octopus Server. Octopus will check for updates to the built-in step templates every hour and automatically download them from the publicly available feed located at steps-feed.octopus.com.

Optionally, the automatic version updates of built-in steps can be turned off by navigating to **Configuration ➜ Features** and turning off the **Step Template Updates** feature.

:::figure
![](/docs/img/projects/built-in-step-templates/images/automatic-updates-configuration.png)
:::

## Notes

* Existing deployment processes and runbooks will be automatically updated to use the latest **minor version** of the built-in step templates, without any user intervention. This enables rapid deployment of security and patch fixes in a backward compatible manner.
* **Major version** upgrades of steps within existing deployment processes and runbooks will require manual intervention, as the steps will not be backward compatible and likely require additional input. :::figure ![](/docs/img/projects/built-in-step-templates/images/step-migration-v2.png) ::: * Only steps that are compatible with the current Octopus Server version will be automatically downloaded and updated. * Only the steps built with the new "step package" format are updated using the described mechanism. Existing steps will still require Octopus to be updated to receive new versions. ## Older versions Automatic step template updates are available from Octopus **2022.1**. # Exporting and Importing Projects Source: https://octopus.com/docs/projects/export-import.md :::div{.hint} For full instructions on migrating to Octopus Cloud see our [migration docs](/docs/octopus-cloud/migrations) ::: :::div{.warning} Migrations can only be performed from an earlier or equal version of Octopus Server. It cannot be used to migrate resources to an older version. ::: The `Export/Import Projects` feature can export one or more projects into a zip file, which can then be imported into other spaces. The target space may be in a different Octopus Server instance, and projects can be exported and imported between self-hosted and Octopus Cloud instances (see below for some [specific considerations when moving a project to Octopus Cloud](#octopus-cloud)). Export/Import features are found in the overflow menu on the **Projects** page. :::figure ![Import Export Menu](/docs/img/projects/export-import/import-export-menu.png) ::: When exporting, a password is required to assist with encryption. The password should be treated carefully, as it will be used to decrypt any sensitive values contained within the export when importing the project(s) into Octopus. :::figure ![Export projects](/docs/img/projects/export-import/export-project-page.png) ::: The export runs as a task. 
Once the task is complete, the export zip file is attached as an artifact and available for download. :::figure ![Export zip artifact](/docs/img/projects/export-import/export-task-artifact.png) ::: ## Scenarios The current implementation of the Export/Import feature was designed for moving a project between spaces, specifically: - Moving from a self-hosted instance to an Octopus Cloud instance - Splitting a space containing many projects into multiple spaces Scenarios this feature was _not_ designed for include: - Backup/restore. See our [recommended approach](/docs/administration/data/backup-and-restore) to disaster-recovery for your Octopus instance. - Cloning projects _within_ a space. There is an [easier way to achieve this](/docs/projects/#clone-a-project). - Promoting changes between environments on different Octopus instances. See below. ### Promoting changes between Octopus instances There are scenarios where it is desirable to create releases and deploy them to test environments on a development Octopus instance before promoting the changes to another instance. This can be due to reasons including: - security requirements (e.g. air-gapped environments) - multi-tenancy (deploying Octopus to customer infrastructure) - maintaining strict control over the changes made to the production Octopus instance The export/import feature does not currently support these promotion scenarios. It will not import a project if it already exists in the target space. The ability to import an existing project will likely be added in a future release. ## What is imported The root of the export/import is a project (or multiple projects). The simple rule-of-thumb is everything the project references is included. 
Specifically:

- The project (name, settings)
- The deployment process and runbooks
- Project variables
- Channels, and all lifecycles referenced
- Environments (see [below](#environments) for details)
- [Tenants](#tenants) connected to the project
- [Accounts](#accounts) and [certificates](#certificates) used by the project
- [Variable sets](#library-variable-sets) included in the project
- [Step templates](#step-templates) used in the deployment process or runbooks
- Other projects referenced by [Deploy Release steps](/docs/projects/coordinating-multiple-projects/deploy-release-step)
- [Git credentials](#git-credentials) used for a project's version-control settings

It is worth explicitly mentioning some things that are **not included**:

- [Packages](#packages)
- [Deployment targets](#deployment-targets)
- [Audit logs](#audit-logs)
- [Workers](#workers)
- [Project logos](#project-logos)
- [Triggers](#triggers)
- [Limitations for version-controlled projects](#limitations-for-version-controlled-projects)
- [Project release deployment history](#audit-logs)
- [Runbook run history](#audit-logs)

### Shared resources \{#shared-resources}

The Octopus Deploy data-model is a web, not a tree. Some resources are shared between projects (environments, tenants, accounts, step templates, etc), and these shared resources are exported with the project. In general, these shared resources are matched by name when importing; i.e. if there is an existing resource with the same name as one in the source then it will be used, otherwise it will be created. Sometimes the import will need to merge information. Some specific examples are mentioned below.

### Environments

Any environments which can be reached via the project will be included in the export. This includes:

- Environments included in any of the project's lifecycles, *except* when using the [default lifecycle](/docs/releases/lifecycles/#default-lifecycle).
- Environments used to scope variables in any [variable sets](/docs/projects/variables/library-variable-sets) connected to the project - Environment restrictions defined on any accounts or certificates referenced by the project :::div{.warning} **Environments from the default lifecycle are not exported:** If your projects use the [default lifecycle](/docs/releases/lifecycles/#default-lifecycle) that Octopus creates, environments associated with that lifecycle will *not* be included in the project export. This was an intentional design decision made to avoid some tricky, unexpected behavior during project import. ::: ### Deployment targets \{#deployment-targets} [Deployment targets](/docs/infrastructure/deployment-targets) are not included in the export. They will need to be recreated in the target space. For Tentacle deployment targets (both Windows and Linux), there are specific considerations: **Listening Tentacles** must be configured to trust the certificate of the Octopus Server. If you are importing your project into a different Octopus instance, for the new instance to be able to communicate with existing listening Tentacles, the following must be true: - The Tentacles are accessible by the new Octopus instance (i.e. networking and firewalls must be correctly configured) - The Tentacles are configured to trust the certificate of the new instance. This can be done using the Tentacle [configure](/docs/octopus-rest-api/tentacle.exe-command-line/configure) command. An alternative is to create a new Tentacle on the same machine. This gives the option to switch to a polling Tentacle (which may be preferable when migrating a project to Octopus Cloud), and allows having both the original and cloned project deployable for a period of time. **Polling Tentacles** can be configured to poll multiple Octopus servers using the [register-with](/docs/octopus-rest-api/tentacle.exe-command-line/register-with) command. 
### Packages \{#packages} Packages from the built-in feed are _not_ included in the export (this is to avoid extremely large export bundles). Packages can be copied between spaces via the Octopus API. [This PowerShell script](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/REST/PowerShell/Feeds/SyncPackages.ps1) does this (please consider the [package storage limits when moving packages to Octopus Cloud](#octopus-cloud)) ### Users \{#users} Users are not exported, as they are not directly associated with projects. Any teams which are referenced by projects (for example via manual intervention steps or email steps) will be created if they do not exist in the target space. These teams will be empty. ### Workers \{#workers} [Workers](/docs/infrastructure/workers/) are not included in the export. [Worker pools](/docs/infrastructure/workers/worker-pools) referenced by any steps (or variables) will attempt to match by name on the target, and if a matching pool does not exist then an empty pool will be created. If moving from a self-hosted to an Octopus Cloud instance, any steps which are configured to `Run on Server` will be converted to run on the default worker pool on import (`Run on server` is not supported on Octopus Cloud). If moving from an Octopus Cloud instance to a self-hosted instance, [Dynamic Worker Pools](/docs/infrastructure/workers/dynamic-worker-pools) will be converted to static worker pools on import (dynamic worker pools are not supported on self-hosted instances). ### Audit logs \{#audit-logs} [Audit events](/docs/security/users-and-teams/auditing) are not exported. ### Tenants All [tenants](/docs/tenants) connected to the project will be included in the export. On import, for any tenants which already exist on the destination the project/environment connections in the export will be merged into the existing tenant. 
### Variable Sets \{#library-variable-sets} [Variable sets](/docs/projects/variables/library-variable-sets) connected to the project will be exported, including all variables. When importing, if a variable set with the same name already exists, the variables will be merged. If a variable in the export doesn't exist on the destination, it will be created. If a variable with the same name and scopes already exists, the variable on the destination will be left untouched. ### Step templates [Step templates](/docs/projects/custom-step-templates) used in the project's deployment or runbook processes will be included in the export. :::div{.hint} Care should be taken with step templates when exporting/importing projects at different times ::: Projects reference specific versions of a step template. When importing, if a step template with the same name and version already exists on the destination the existing step template version will be used. If the step template already exists, but the imported version is greater than the latest on the destination then the version included in the import will be imported into the destination, effectively incrementing the step template. Existing projects on the destination will initially not be impacted, as they will be referencing a specific version which will remain unchanged, but care should be taken on future updates of the step template version in these projects. ### Accounts Any accounts which can be referenced via the project will be included in the export. This includes: - Accounts which are the value of the project's variables - Accounts which are the value of variables in variable sets connected to the project - Accounts referenced directly from deployment process steps When importing, if an account with the same name already exists on the destination, the existing account will be used. ### Certificates Any certificates which can be referenced via the project will be included in the export. 
This includes: - Certificates which are the value of the project's variables - Certificates which are the value of variables in variable sets connected to the project When importing, if a certificate with the same name already exists on the destination, the existing certificate will be used. ### Project logos The project logo will be available when exporting between spaces on the same instance. If exporting between instances, the logo will have to be re-uploaded. ### Triggers Triggers are not currently included in the export, and will need to be reconfigured in the destination instance. ### Git Credentials Git credentials used to connect a project to Git will be exported. ### Limitations for version-controlled projects For version-controlled projects the following resources will not be exported: - [GitHub connections](/docs/projects/version-control/github) - Resources used by steps referenced by slug - Resources used by steps referenced by ID #### GitHub connections :::div{.warning} GitHub connections used by version-controlled projects will not be exported, as we can't re-configure GitHub connections on the destination side because IDs and other details are all instance-specific on the GitHub App side. ::: Version-controlled projects using GitHub connections can still be exported, but one of the following is required before importing the project: - The original project needs to be deleted (as it will still point to the same folder as the imported project and this is not supported), or - The path to where the files are stored in Git needs to be changed on either the original or imported project, so they do not clash, before reconnecting the imported project to GitHub #### Resources used by steps referenced by slug The following resources that can be used on steps will not be exported, but as they are referenced by their slug, recreating the resource in the destination with the same name/slug would link these back up without any other manual intervention.
- Feeds - Accounts used in variables that are stored in Git - Projects on `Deploy a Release` steps #### Resources used by steps referenced by ID The following resources that can be used on steps will not be exported and as they are referenced by their IDs, after recreating the resource in the destination you would then need to manually update the files stored in Git to use the correct ID of the resource again. - Git credentials used to source files from Git - Library step templates - The template version needs to be updated along with the ID - Certificates used in variables that are stored in Git ## Moving to Octopus Cloud \{#octopus-cloud} When moving a project from a self-hosted Octopus Server instance to an Octopus Cloud instance, [limits apply](/docs/octopus-cloud/#octopus-cloud-storage-limits) which should be considered. Specifically: - Maximum File Storage for artifacts, task logs, packages, package cache, and event exports is limited to `1 TB`. - Maximum Database Size for configuration data (e.g. projects, deployment processes and inline scripts) is limited to `100 GB`. - Maximum size for any single package is `5 GB`. - [Retention policies](/docs/administration/retention-policies) are *defaulted* to 30 days, but this figure can be changed as required. There are some caveats around [worker pools](#workers). ## Using the API \{#using-the-api} :::div{.hint} Automating the export and import of projects using the REST API as part of a backup/restore process is **not recommended**. See our [supported scenarios](#scenarios). ::: You can use the [Octopus REST API](/docs/octopus-rest-api) to export or import Octopus projects. To find out more take a look at our examples: - [Export projects](/docs/octopus-rest-api/examples/projects/export-projects) - [Import projects](/docs/octopus-rest-api/examples/projects/import-projects) ## Older versions - Prior to version **2025.2.5601**, version-controlled projects were not supported by the **Export/Import Projects** feature. 
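The API-driven export described under **Using the API** boils down to a single authenticated POST. The sketch below is illustrative only: the route and body property names (`IncludedProjectIds`, `Password.NewValue`) are assumptions based on the linked example scripts, and the instance URL, API key, and project ID are placeholders — verify against the linked examples before use.

```python
import json
import urllib.request

# All values below are placeholders; the route and property names are
# assumptions based on the linked example scripts, not a verified contract.
OCTOPUS_URL = "https://your.octopus.app"
API_KEY = "API-XXXXXXXXXXXXXXXX"
SPACE_ID = "Spaces-1"

def build_export_request(project_ids, password):
    """Build (but do not send) the POST request that starts a project export."""
    url = f"{OCTOPUS_URL}/api/{SPACE_ID}/projects/import-export/export"
    body = json.dumps({
        "IncludedProjectIds": project_ids,   # assumed property name
        "Password": {"NewValue": password},  # exports are password-protected
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"X-Octopus-ApiKey": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_export_request(["Projects-101"], "a long export password")
print(req.full_url)
# urllib.request.urlopen(req) would submit the request to the server.
```

The export runs as a server task producing a password-protected zip, which the import examples then consume.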
# Prompted variables Source: https://octopus.com/docs/projects/variables/prompted-variables.md As you work with [variables](/docs/projects/variables) in Octopus, there may be times when the value of a variable isn't known and you need a user to enter the variable at deployment time. Octopus can handle this using **Prompted variables**. ## Defining a prompted variable {#defining-prompted-variable} To make a variable a **prompted variable**, enter the variable editor when creating or editing the variable. On the variable value field, click **Open Editor**: :::figure ![Open variable editor](/docs/img/projects/variables/images/open-variable-editor.png) ::: When defining a prompted variable, you can provide a friendly name and description, and specify if the value is required. A required variable must be supplied when the deployment is created and must not be empty or white space. :::figure ![Prompted variable](/docs/img/projects/variables/images/prompted-variable.png) ::: You can identify prompted variables by looking for the icon next to the value: :::figure ![](/docs/img/projects/variables/images/prompted-variable-icon.png) ::: :::div{.hint} You can select one of several different data types. This controls the user interface provided to collect the variable value, and determines how the variable value is interpreted. Note the variable values will be stored and interpreted as text. Control type options are: - Single-line text box - Multi-line text box - Drop-down - Checkbox ::: ## Providing a value for the variable {#providing-value-for-variable} When deploying (not creating a release), you'll be prompted to provide a value for the variable: :::figure ![Required prompted variable](/docs/img/projects/variables/images/3278301.png) ::: These variables will be ordered alphabetically by label (or name, if the variable label is not provided). 
A value can also be passed to a prompted variable when using the Octopus CLI through the `--variable` parameter of the [octopus release deploy](/docs/octopus-rest-api/cli/octopus-release-deploy) command.

```bash
octopus release deploy ... --variable "Missile launch code:LAUNCH123" --variable "Variable 2:Some value"
```

:::div{.hint} Prompted variables can be combined with [sensitive variables](/docs/projects/variables/sensitive-variables/). They will appear with a password box when creating the deployment. They can also be combined with [Azure Account variables](/docs/projects/variables/azure-account-variables/), [AWS Account variables](/docs/projects/variables/aws-account-variables/), [Certificate variables](/docs/projects/variables/certificate-variables/), and [Worker Pool variables](/docs/projects/variables/worker-pool-variables), passing in the ID, e.g. `WorkerPools-1`. ::: ## Restricting a prompted variable for runbooks By default, a prompted variable will prompt when deploying a release and when executing any runbooks in the project. Prompted variables can be [scoped to specific processes](/docs/runbooks/runbook-variables/#prompted-variables), causing them to only be shown when deploying releases, or only when executing runbooks. ## Prompted variable ordering When Octopus renders prompted variables for a deployment or runbook, they are sorted alphabetically by the prompted variable label.
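A quick sketch (with invented labels) illustrates this alphabetical ordering, including the lexicographic gotcha with unpadded numeric prefixes:

```python
# Octopus sorts prompted variables alphabetically by their label
# (or name when no label is set). The labels below are invented examples.
labels = ["Target region", "Approval reason", "Missile launch code"]
print(sorted(labels))  # ['Approval reason', 'Missile launch code', 'Target region']

# The sort is lexicographic, so an unpadded "10." sorts before "2.":
unpadded = ["2. Region", "10. Notes", "1. Code"]
print(sorted(unpadded))  # ['1. Code', '10. Notes', '2. Region'] - not the intended order

# Zero-padding numeric prefixes keeps the intended order:
padded = ["02. Region", "10. Notes", "01. Code"]
print(sorted(padded))  # ['01. Code', '02. Region', '10. Notes']
```
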
If you want to customize the order in which the variables appear, one option is to include a numerical prefix in the label: :::figure ![](/docs/img/projects/variables/images/prompted-variable-custom-sort.png) ::: ## Learn more - [Variable blog posts](https://octopus.com/blog/tag/variables/1) # Creating and deploying releases on a version-controlled project Source: https://octopus.com/docs/projects/version-control/creating-and-deploying-releases-version-controlled-project.md There are slight differences when creating and deploying a release with a version-controlled project using the Configuration as Code feature in Octopus Deploy. This page will walk through those differences. ## Creating a release When you create a release with a version-controlled Octopus Project, you will have the ability to select the branch and specify a release number and package versions. Like before, a snapshot will be created using the deployment process, variables, and packages. ### Creating a release via the UI When you create a release via the UI, you must specify a branch name. Octopus will pre-select the default branch configured in the project settings. When the **SAVE** button is pressed, a snapshot will be created using the OCL files from the most recent commit on the specified branch. :::figure ![creating a release via the Octopus UI](/docs/img/projects/version-control/create-release-octopus-ui.png) ::: ### Creating a release from a build server plug-in Octopus does not guess or autopopulate the commit or branch when creating a release from a build-server plug-in. Instead, to provide this information, we have added two new fields to our standard integrations - TeamCity, Azure DevOps, Jenkins, GitHub Actions, and Bamboo. * Git Reference - a user-friendly alias for a commit hash. This is typically a branch name or tag. * Git Commit - the commit SHA-1 hash. The use of these fields can change depending on where the version-controlled project's OCL files are stored: 1.
If the OCL files are stored in the **same repository** as the application(s) being built, it's likely that a specific commit that relates to any artifacts created as part of the build itself should be used when creating the release. In this scenario, you should provide both the Git Reference and Git Commit hash of the executing build. This ensures that the release will use the correct version of the project, and won't include any potential changes made to the `HEAD` of the branch *before* the build has completed. 2. If the OCL files are stored in a **different repository** than the application(s) being built, a specific branch or tag can identify which version of the project to use when creating the release. In this case, you would provide the Git Reference where the OCL files are stored, and not the Git Commit hash. For example, use the `main` branch, regardless of which repository the application(s) are built from, since the two repositories are different. :::div{.hint} Octopus and your build server each have their own copy of your Git repo. Sending the commit or reference via the plug-in or the CLI is how your build server tells Octopus Deploy which version of your OCL files to use. ::: For more information, see examples of [creating a release from a build server plug-in](/docs/projects/version-control/creating-release-from-a-build-server-plug-in). ### Snapshot The deployment process stored in OCL files in your git repo will be included in the release snapshot as part of the release creation process. ## Deploying the release The experience for deploying a release created from a deployment process using OCL files is the same as one created from a deployment process stored in SQL Server. Once that release snapshot is created, the Octopus UI behaves the same. # Azure DevOps work item tracking integration Source: https://octopus.com/docs/releases/issue-tracking/azure-devops.md Octopus integrates with Azure DevOps work items.
The integration includes the ability to: - Automatically add links to Azure DevOps work items in your Octopus releases and deployments. - Retrieve release notes from Azure DevOps work item comments for automatic release note generation. ## How Azure DevOps integration works :::figure ![Octopus Azure DevOps integration - how it works diagram](/docs/img/releases/issue-tracking/images/octo-azure-devops-how-it-works.png) ::: :::div{.warning} Azure work items aren't currently supported unless the `BuildEnvironment` is Azure DevOps. ::: 1. Associate code changes with their relevant work items in any of the following ways: - Edit a pull request in Azure DevOps, and use the **Work Items** panel to select a work item. - Edit a work item in Azure DevOps, and use the **Development** panel to add a pull request link (before build), or a commit link, or a build link. - When you commit code. If you enable the repository setting: **[Automatically create links for work items mentioned in a commit comment](https://docs.microsoft.com/en-us/azure/devops/repos/git/repository-settings?view=azure-devops#automatically-create-links-for-work-items-mentioned-in-a-commit-comment)** under Project Settings (Repositories), you can include `#` followed by a valid work item ID in the commit message. For example, `git commit -a -m "Fixing bug #42 in the web client"`. 2. The Octopus Deploy [plugin](/docs/packaging-applications/build-servers) for your build server [pushes the commits to Octopus](/docs/packaging-applications/build-servers/build-information/#passing-build-information-to-octopus). These are associated with a package ID and version (The package can be in the built-in Octopus repository or an external repository). 3. The Azure DevOps Issue Tracker extension in Octopus uses the build information to request work item references from Azure DevOps. :::figure ![Octopus release with Azure DevOps work items](/docs/img/releases/issue-tracking/images/octo-azure-devops-release-details.png) ::: 4. 
When creating the release which contains the package version, the work items are associated with the release. These are available for use in [release notes](/docs/packaging-applications/build-servers/build-information/#build-info-in-release-notes), and will be visible on [deployments](/docs/releases/deployment-changes). :::figure ![Octopus deployment with generated release notes](/docs/img/releases/issue-tracking/images/octo-azure-devops-release-notes.png) ::: ### Availability {#availability} The ability to push the build information to Octopus, which is required for Azure DevOps integration, is currently only available in the official Octopus plugins: - [JetBrains TeamCity](https://plugins.jetbrains.com/plugin/9038-octopus-deploy-integration) - [Atlassian Bamboo](https://marketplace.atlassian.com/apps/1217235/octopus-deploy-bamboo-add-on?hosting=server&tab=overview) - [Azure DevOps](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) - [Jenkins Octopus Deploy Plugin](https://plugins.jenkins.io/octopusdeploy/). ### Deployment updates not supported {#deployment-updates-unsupported} The Azure DevOps integration does **not support** the ability to have Octopus release and deployment information displayed within Azure DevOps work items. ## Configuring Azure DevOps integration The following steps explain how to integrate Octopus with Azure DevOps: 1. [Configure your build server to push build information to Octopus.](#configure-your-build-server) This is required to allow Octopus to know which work items are associated with a release. 2. [Configure the Azure DevOps connection in Octopus Deploy.](#connect-octopus-to-azure-devops) ## Configure your build server to push build information to Octopus {#configure-your-build-server} To integrate with Azure DevOps work items, Octopus needs to understand which work items are associated with a [release](/docs/releases). 
Octopus does this by using the build information associated with any packages contained in the release to request work item references from Azure DevOps. To supply the build information: 1. Install one of our official [build server plugins](#availability) with support for our build information step. 2. Update your build process to add and configure the [Octopus Build Information step](/docs/packaging-applications/build-servers/build-information/#build-information-step). :::div{.warning} If you had previously been using the older functionality on the Create Octopus Release step, you should disable all release note options on that step as they use different mechanics and will conflict with the new features. ![Legacy create release settings](/docs/img/releases/issue-tracking/images/octo-azure-devops-create-release-notes-fields.png) ::: ## Connect Octopus to Azure DevOps {#connect-octopus-to-azure-devops} 1. Configure the Azure DevOps Issue Tracker extension in Octopus Deploy. In the Octopus Web Portal, navigate to **Configuration ➜ Settings ➜ Azure DevOps Issue Tracker** 1. Add a Connection, and set the following values: - **Azure DevOps Base URL**. This tells Octopus where the Azure DevOps instance is located. - **Personal Access Token (PAT)**. Unless the Azure DevOps instance is public, you'll need to supply an access token, created in the Azure DevOps User Settings (under Personal access tokens), with authorization to read scopes `Build` and `Work items`. - **Release Note Prefix**. This value is *optional*. If specified, Octopus will look for a work item comment that starts with the given prefix text and use whatever text appears after the prefix as the release note. This will then be available in the [build information](/docs/packaging-applications/build-servers/build-information) as the work item's description. If no comment is found with the prefix then Octopus will default back to using the title for that work item. 
For example, a prefix of `Release note:` can be used to identify a customer-friendly work item title vs. a technical feature or bug fix title. :::div{.hint} **Multiple Azure DevOps connections:** If you need to connect to more than one Azure DevOps organization, repeat this step. Support for multiple Azure DevOps connections was added in Octopus **2021.3**. ::: !["Multiple Azure DevOps Issue Tracker connections"](/docs/img/releases/issue-tracking/images/octopus-azure-devops-tracker-multiple-connections.png) 1. Ensure the **Is Enabled** property is enabled. When configured, this integration will retrieve Azure DevOps work item details and add details about them to your Octopus releases and deployments. If you ever need to disable the Azure DevOps Issue Tracker extension you can do so under **Configuration ➜ Settings ➜ Azure DevOps Issue Tracker**. ## Learn more - [Build information](/docs/packaging-applications/build-servers/build-information). # AWS Source: https://octopus.com/docs/runbooks/runbook-examples/aws.md Runbooks are a powerful tool that can be used to ensure that infrastructure is created in a repeatable and consistent fashion. Octopus comes with a number of built-in step templates to help interface with the AWS cloud platform: - Run an AWS CLI Script. - Deploy an AWS CloudFormation template. - Apply an AWS CloudFormation Change Set. - Delete an AWS CloudFormation stack. - Upload a package to an AWS S3 bucket. Combining the power of runbooks and AWS, you can easily automate common tasks such as: - Creating resource stacks with CloudFormation. - Managing firewall rules. - Destroying resources. # Create MySQL database Source: https://octopus.com/docs/runbooks/runbook-examples/databases/create-mysql-database.md Creating a database in MySQL requires that the user account have elevated permissions on the server. In addition, the host the user connects from must be specified against their account (unless the `%` wildcard host is used).
This can make permissions somewhat unruly to manage. Using a runbook, you can create the database without altering permissions by executing it on the server itself or an approved [worker](/docs/infrastructure/workers). In the following example, we'll use the [MySQL - Create Database If Not Exists](https://library.octopus.com/step-templates/4a222ac3-ff4b-4328-8778-1c44eebdedde/actiontemplate-mysql-create-database-if-not-exists) community step template. ## Create the runbook 1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 2. Give the Runbook a name and click **SAVE**. 3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 4. Add a new step template from the community library called **MySQL - Create Database If Not Exists**. 5. Fill out all the parameters in the step. It's best practice to use [variables](/docs/projects/variables) rather than entering the values directly in the step parameters: | Parameter | Description | Example | | ------------- | ------------- | ------------- | | Server | Name or IP of the MySQL server | MySQL1 | | Username | Username with rights to create a database | root | | Password | Password for the user account | MyGreatPassword! | | Database Name | Name of the database to create | MyDatabase | | Port | Port number for the MySQL server | 3306 | | Use SSL | Whether to use the SSL protocol | Checked for True, unchecked for False | This will create a database without having to grant any additional permissions to the server. # Updating Windows Source: https://octopus.com/docs/runbooks/runbook-examples/routine/updating-windows.md It's not always possible to use products such as [Microsoft Endpoint Configuration Manager](https://docs.microsoft.com/en-us/mem/configmgr/) (formerly SCCM or Microsoft System Center Configuration Manager) to orchestrate the installation of patches for Windows. This is especially true if your VMs are in the cloud and not connected to your Active Directory. 
In situations like these, you can take advantage of runbooks and [scheduled runbook triggers](/docs/runbooks/scheduled-runbook-trigger) to periodically check and apply updates to your application infrastructure. ## Create the runbook To create a runbook to perform updates on your Windows machines: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a Name and click **SAVE**. 1. Click **DEFINE YOUR RUNBOOK PROCESS**, and then click **ADD STEP**. 1. Click **Script**, and then select the **Run a Script** step. 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. In the **Inline source code** section, select **PowerShell** and add the following code:

```powershell
function Get-NugetPackageProviderNotInstalled {
    # See if the nuget package provider has been installed
    return ($null -eq (Get-PackageProvider -ListAvailable -Name Nuget -ErrorAction SilentlyContinue))
}

function Get-ModuleInstalled {
    # Define parameters
    param(
        $PowerShellModuleName
    )

    # Check to see if the module is installed
    if ($null -ne (Get-Module -ListAvailable -Name $PowerShellModuleName)) {
        # It is installed
        return $true
    }
    else {
        # Module not installed
        return $false
    }
}

# Force use of TLS 1.2
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# Check to see if the NuGet package provider is installed
if ((Get-NugetPackageProviderNotInstalled) -ne $false) {
    # Display that we need the nuget package provider
    Write-Host "Nuget package provider not found, installing ..."

    # Install Nuget package provider
    Install-PackageProvider -Name Nuget -Force
    Write-Output "Nuget package provider successfully installed ..."
}

Write-Output "Checking for PowerShell module PSWindowsUpdate ..."
if ((Get-ModuleInstalled -PowerShellModuleName "PSWindowsUpdate") -ne $true) {
    Write-Output "PSWindowsUpdate not found, installing ..."

    # Install PSWindowsUpdate
    Install-Module PSWindowsUpdate -Force
    Write-Output "Installation of PSWindowsUpdate complete ..."
}

Write-Output "Checking for updates ..."
$windowsUpdates = Get-WindowsUpdate

# Check to see if there's anything to install
if ($windowsUpdates.Count -gt 0) {
    Write-Output "Installing updates ..."
    Install-WindowsUpdate -AcceptAll -AutoReboot
}
else {
    Write-Output "There are no updates available."
}
```

:::div{.warning} Be aware that the `AutoReboot` switch will reboot the machine after the first update that needs it. If there is more than one update that requires a reboot, you may need to run the above PowerShell again to install the rest of the available updates. ::: With the process defined, you can set the update to execute automatically with a scheduled runbook trigger. In order to create a scheduled runbook trigger, your runbook must first be [published](/docs/runbooks/runbook-publishing). ## Create the trigger 1. To create a trigger, navigate to **Project ➜ Operations ➜ Triggers ➜ Add Scheduled Trigger**. 2. Give the trigger a name and a description. 3. Fill in the **Trigger Action** section: - Runbook: Select the runbook to execute. - Target environments: Select the environment(s) this runbook will execute against. 4. Fill in the **Trigger Schedule** section: - Schedule: Daily | Days per month | Cron expression. 5. Set the **Scheduled Timezone**: - Select timezone: Select the timezone to use when evaluating when to run. Using this method, you can set it and forget it. ## Samples We have a [Target - Windows](https://oc.to/TargetWindowsSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `OctoFx` project.
# Scheduled runbook triggers Source: https://octopus.com/docs/runbooks/scheduled-runbook-trigger.md Scheduled runbook triggers allow you to define an unattended behavior for your [runbook](/docs/runbooks) that will cause an automatic runbook run against environments of your choosing. :::div{.hint} Only published snapshots can be used to create a scheduled runbook trigger; draft snapshots cannot be used to create a scheduled trigger. For config-as-code runbooks, scheduled runbook triggers will always run the runbook from the latest commit on your default branch. ::: ## Schedule Scheduled runbook triggers provide a way to configure your runbooks to run on a defined schedule. This can be useful in different scenarios, for instance: - Run a database backup at 1:00am every day. - Run a health check on your service every 30 minutes. - Run a script to reset a test environment every 3 hours. - Run a streaming process every minute. - Run a maintenance script on the last Saturday of the month. - Run a script to provision more machines on the 1st day of the month and a script to deprovision them at a future date. ## Add a scheduled runbook trigger 1. In a project, select **Operations ➜ Triggers**, then **Add Scheduled trigger**. 2. Give the trigger a name. 3. Select a runbook. You can select runbooks in two ways: - **Select specific runbooks**: Choose individual runbooks by name - **Select by tags**: Select all runbooks matching specific [runbook tags](/docs/runbooks#runbook-tags) :::div{.warning} Triggering runbooks by tags is supported from Octopus version **2026.1.7523**. ::: :::div{.hint} When using tags, the trigger will run against all runbooks that match the selected tags at the time the trigger fires. If you add or remove tags from runbooks later, the trigger will automatically include or exclude those runbooks. ::: 4. Specify the target environments the runbook will run against. 5. Set the trigger schedule.
The options give you control over how frequently the trigger will run and at what time. You can schedule a trigger based on either days of the week, or dates of the month. You can also use a [CRON expression](#cron-expression) to configure when the trigger will run. If you are using [tenants](/docs/tenants), you can select the tenants that the runbook will run against. For each tenant, the published runbook will run against the tenant's environment. :::div{.hint} If you have steps that use packages in your runbook process, we only support getting the latest non-prerelease versions. To use prerelease packages you would need to hard-code the version on individual steps. ::: 6. Save the trigger. :::div{.hint} All schedule options run based on CRON expressions. The other options provide a convenient way of setting up the schedule without worrying about the syntax. A custom CRON expression provides you with more fine-grained control over the exact schedule. ::: ### Using CRON expressions {#cron-expression} CRON expressions allow you to configure a trigger that will run according to the specified CRON expression. Example: `0 0 06 * * Mon-Fri` Runs at 06:00 AM, Monday through Friday. :::div{.success} The CRON expression must consist of all 6 fields; there is an optional 7th field for "Year". ::: | Field name | Allowed values | Allowed special characters | Required | | ------------- |:-------------------- |:--------------------------- | :------: | | Seconds | 0-59 | * , - / | Y | | Minutes | 0-59 | * , - / | Y | | Hours | 0-23 | * , - / | Y | | Day of month | 1-31 | * , - / ? L W | Y | | Month | 1-12 or JAN-DEC | * , - / | Y | | Day of week | 0-6 or SUN-SAT | * , - / ? L # | Y | | Year | 1970-2099 | * , - / | N |
# Authentication automation with OctopusDSC Source: https://octopus.com/docs/security/authentication/authentication-automation-with-octopusdsc.md OctopusDSC is an in-house open-source PowerShell module with DSC resources designed to reduce the overhead when automating the installation and configuration of your Octopus infrastructure. The following resources are available for automating authentication configuration: - [Octopus Server Active Directory Authentication](https://github.com/OctopusDeploy/OctopusDSC/tree/master/OctopusDSC/DSCResources/cOctopusServerActiveDirectoryAuthentication). - [Octopus Server Azure Active Directory Authentication](https://github.com/OctopusDeploy/OctopusDSC/tree/master/OctopusDSC/DSCResources/cOctopusServerAzureADAuthentication). - [Octopus Server Google Apps Authentication](https://github.com/OctopusDeploy/OctopusDSC/tree/master/OctopusDSC/DSCResources/cOctopusServerGoogleAppsAuthentication). - [Octopus Server Guest Authentication](https://github.com/OctopusDeploy/OctopusDSC/tree/master/OctopusDSC/DSCResources/cOctopusServerGuestAuthentication). - [Octopus Server Okta Authentication](https://github.com/OctopusDeploy/OctopusDSC/tree/master/OctopusDSC/DSCResources/cOctopusServerOktaAuthentication). - [Octopus Server Username and Password Authentication](https://github.com/OctopusDeploy/OctopusDSC/tree/master/OctopusDSC/DSCResources/cOctopusServerUsernamePasswordAuthentication). # Octopus - Tentacle communication Source: https://octopus.com/docs/security/octopus-tentacle-communication.md This page describes how the [Octopus Server](/docs/installation/) and the [Tentacle deployment agents](/docs/infrastructure/deployment-targets/tentacle/windows) communicate in a secure way. ## Background Some deployment technologies are designed for the LAN and have no security at all. Some require machines to be on the same Active Directory domain.
Others require you to set up usernames and passwords, and to store them in configuration files. When designing Octopus, we wanted to make it easy to have secure deployments out of the box, without expecting machines to be on the same domain and without sharing passwords. Octopus needed to work in scenarios where the Octopus Server is running in your local LAN, close to your developers, while your production servers are running in the cloud or at a remote data center. We achieve this security using [public-key cryptography](http://en.wikipedia.org/wiki/Public-key_cryptography "Wikipedia article on Public-key cryptography"). ## Octopus/Tentacle trust relationship Regardless of whether Tentacle is in [listening mode](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended) or [polling mode](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles), all communication between the Tentacle and Octopus is performed over a secure ([TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security)) connection. Octopus and Tentacle both have a public/private key pair that they use to establish the TLS connection and verify the identity of the other party. When Tentacle is configured, you give it the thumbprint (which uniquely identifies the public key) of the Octopus Server. Likewise, you tell Octopus the thumbprint of the Tentacle. This establishes a trust relationship between the two machines: 1. Your Octopus Server will only issue commands to the Tentacles that it trusts. 2. Your Tentacles only accept commands from an Octopus they trust. The only way another system can impersonate either party is by getting hold of a private key; these are kept safe and never leave the Octopus/Tentacle server (unless you export them from the certificate store). This makes it much more secure than exchanging passwords.
Since this is all based on public-key cryptography, it creates a highly secure way for the two machines to communicate without exchanging passwords, and works much like an SSH connection in the UNIX world.

:::div{.hint}
If necessary, you can further restrict access using IPSec or VPNs.
:::

### Octopus certificates

The X.509 certificates used by Octopus and Tentacle are generated on installation and use 2048-bit private keys. There is an insightful discussion of [why Octopus uses self-signed certificates](https://octopus.com/blog/why-self-signed-certificates) by default.

:::div{.hint}
Instead of having Tentacle generate its own certificate, you can [import a Tentacle certificate](/docs/infrastructure/deployment-targets/tentacle/windows/automating-tentacle-installation/#export-and-import-tentacle-certificates-without-a-profile), which is helpful when [automating Tentacle installation](/docs/infrastructure/deployment-targets/tentacle/windows/automating-tentacle-installation).
:::

### Scenario: Listening Tentacles

Tentacle plays the role of the server and Octopus the client:

1. Octopus establishes the TLS connection with the Tentacle.
2. The Tentacle presents its certificate as the server certificate allowing Octopus to verify the identity of the Tentacle.
3. Octopus presents its certificate as a client certificate so the Tentacle can verify the identity of Octopus.
4. Once the identities of Octopus and Tentacle have been established, the connection is held open and Octopus will start issuing commands to the Tentacle.

### Scenario: Polling Tentacles

Octopus plays the role of the server and Tentacle the client:

1. The Tentacle establishes the TLS connection with Octopus.
2. Octopus presents its certificate as the server certificate allowing the Tentacle to verify the identity of Octopus.
3. The Tentacle presents its certificate as a client certificate so Octopus can verify the identity of the Tentacle.
4.
Once the identities of Octopus and Tentacle have been established, the connection is held open and Octopus will start issuing commands to the Tentacle.

### Data transport

Octopus uses [Halibut](https://github.com/OctopusDeploy/Halibut) to communicate. Halibut is based on TCP, not HTTP, so it is not possible to add HTTP headers to this communication channel. Both Tentacle and Server expose a simple page on the listening port to web browsers to allow you to confirm your configuration. Some security scanners detect this page and incorrectly assume that it's a web server or a web app and warn about self-signed certificates.

## Transport Layer Security (TLS) implementation

Octopus Server and Tentacle use TLS for all communication, with the protocol version and cipher suites negotiated based on the host operating system's cryptographic capabilities.

**Protocol Support:**

- **TLS 1.2** - Minimum required version
- **TLS 1.3** - Supported on modern operating systems (Windows Server 2022+, Windows 11+, and current Linux distributions)

Modern versions of Octopus rely on the underlying OS TLS implementation:

- **Windows**: Schannel (Windows' native TLS/SSL provider)
- **Linux**: OpenSSL

The TLS handshake negotiates the strongest mutually supported protocol version and cipher suite. Both peers must support at least one compatible protocol, cipher suite, and signature algorithm to establish a connection.

:::div{.warning}
**TLS 1.0 and TLS 1.1 are deprecated and insecure.** While legacy Octopus versions may support these protocols if configured at the OS level, they should not be used in production environments. Ensure all systems support at least TLS 1.2.
:::

**Configuration Requirements:**

For details on the specific protocols, cipher suites, and signature algorithms required for Octopus communication, see [Minimum TLS Requirements](/docs/security/octopus-tentacle-communication/minimum-tls-requirements).
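
If you write your own tooling that talks to similarly hardened endpoints, the same TLS 1.2 floor can be expressed with Python's standard `ssl` module. This is a general sketch of enforcing a minimum protocol version in client code, not Octopus's own implementation:

```python
import ssl

# Build a client context that refuses TLS 1.0/1.1 peers, mirroring the
# minimum protocol requirement described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)  # TLSv1_2
```

Handshakes made with this context will fail against peers that only offer the deprecated protocol versions.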
If your environment enforces custom TLS hardening policies, ensure they meet the minimum requirements to maintain connectivity between Octopus Server and Tentacle agents.

**Hardening TLS:**

To further secure your Octopus installation by disabling weak protocols or limiting cipher suites, review our documentation on [Hardening Octopus](/docs/security/hardening-octopus/#disable-weak-tls-protocols).

## Troubleshooting Tentacle communication problems

We have built comprehensive troubleshooting guides for both [Listening and Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/troubleshooting-tentacles). If you are seeing error messages like the ones below, try [Troubleshooting Schannel and TLS](/docs/security/octopus-tentacle-communication/troubleshooting-schannel-and-tls):

Client-side: `System.Security.Authentication.AuthenticationException: A call to SSPI failed, see inner exception. ---> System.ComponentModel.Win32Exception: One or more of the parameters passed to the function was invalid`

Server-side: `System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.`

# Assign tags to targets

Source: https://octopus.com/docs/tenants/guides/tenants-sharing-machine-targets/assign-tags-to-targets.md

Under **Deployment Targets**, we can see three Tentacle targets in the Production environment used to host these tenants. Each target is currently hosting five tenants. By associating a Hosting Group tag with each target, the tenants with those tags are automatically associated with the targets. This makes adding or removing a tenant from a target very easy.

:::figure
![](/docs/img/tenants/guides/tenants-sharing-machine-targets/target-list.png)
:::

Edit a target and choose the appropriate tag in the **Associated Tenants** section to associate all tenants with that tag to the target.
:::figure
![](/docs/img/tenants/guides/tenants-sharing-machine-targets/target-details.png)
:::

# Tenant tags

Source: https://octopus.com/docs/tenants/tenant-tags.md

:::div{.hint}
This page covers how to use tags with tenants. For general information about tag sets, types, and scopes, see [Tag sets](/docs/tenants/tag-sets). From Octopus Cloud version **2025.4.3897** we've introduced **SingleSelect** and **FreeText** tag set types.
:::

Tenant tags allow you to:

- Find tenants faster using tag filters.
- Group a project's deployments overview by tag set.
- Deploy to multiple tenants at the same time.
- Customize deployment processes for tenants.
- Scope project variables to tenant tags.
- Design a multi-tenant hosting model - read more in our [tenant infrastructure](/docs/tenants/tenant-infrastructure) section.
- Design a multi-tenant deployment process for SaaS applications, regions and more - for further details, see our [guides](/docs/tenants/guides/#guides).
- Control which releases can be deployed to tenants using [channels](/docs/releases/channels/) - read more in our [tenant lifecycle](/docs/tenants/tenant-lifecycles) section.

## Tag-based filters {#tag-based-filters}

Once you have defined some tag sets and tags, you can start leveraging those tags to tailor your environments and deployments.

:::div{.hint}
**Combinational logic**
When filtering tenants, Octopus will combine tags within the same tag set using the **`OR`** operator, and combine tag sets using the **`AND`** operator.
:::

Let's take a look at an example:

:::figure
![A dialog showing a Tenant preview when selecting different tenant tags](/docs/img/tenants/images/tag-based-filters.png)
:::

In this example, Octopus will execute a query like the one shown below:

```sql
TenantsNamed("Capital Animal Hospital")
UNION
TenantsTagged(VIP AND (Alpha OR Beta))
```

When paired with a well-structured tag design, this logic will enable you to tailor your tenanted deployments in interesting and effective ways.

:::div{.hint}
**Tips for working with tenant filters**

- Only specify a tenant "by name" (explicitly) if you absolutely want that tenant included in the result, otherwise leave it blank
- A filter with tags in the same tag set will be more inclusive since they are combined using **`OR`**
- A filter with tags across different tag sets will become more reductive since they are combined using **`AND`**
:::

## Referencing tenant tags {#referencing-tenant-tags}

If you want to use tenant tags to automate Octopus Deploy you should use the **canonical name** for the tag, which looks like this: `Tag Set Name/Tag Name`

Consider an example deploying a release to the tenants tagged with the **Alpha** tag in the **Release Ring** tag set.

:::figure
![A tenant tag of Alpha from the Release Ring tag set is shown highlighting how you should reference it in automation scenarios](/docs/img/tenants/images/release-ring.png)
:::

```powershell
# Deploys My Project 1.0.1 to all tenants tagged as in the Alpha ring
octopus release deploy --project "My Project" --version "1.0.1" --tenant-tag "Release Ring/Alpha"
```

You can use tenant tags when:

- Deploying releases using [build server integrations](/docs/octopus-rest-api/) or the [Octopus CLI](/docs/octopus-rest-api/octopus-cli/deploy-release).
- Scoping a deployment target to one or more tenants when registering a new Tentacle - read more in our [tenant infrastructure](/docs/tenants/tenant-infrastructure) section.
- Automating Octopus via the [Octopus REST API](/docs/octopus-rest-api).

For more information about canonical names and how to reference tags, see [Tag sets](/docs/tenants/tag-sets#referencing-tags).

## Deploying to multiple tenants using tags {#deploying-to-multiple-tenants-tags}

You can create tag sets specifically to help with deployments and rolling out upgrades. Often, you want to deploy targeted releases to your testers, and once they've finished testing, prove out that upgrade with a smaller group of tenants before rolling it out to the rest of your tenants. This is also useful for splitting up a large number of tenants into smaller groups for deployment. We've outlined the steps to design this process using tenant tags:

### Step 1: Create a tag set called Upgrade Ring {#deploy-step-1-create-tagset}

First, create a tag set called **Upgrade Ring** with tags that allow each tenant to choose how early in the development/test cycle they want to receive upgrades.

1. Go to **Deploy ➜ Tag Sets** and create a new tag set called **Upgrade Ring**.
2. Add tags for **Tester**, **Early Adopter**, and **Stable**.
3. Choose colors that highlight different tenants.

Learn more about [creating and managing tag sets](/docs/tenants/tag-sets#managing-tag-sets).

:::figure
![A dialog showing the creation of a tenant tag set called Upgrade Ring](/docs/img/tenants/images/multi-tenant-upgrade-ring.png)
:::

### Step 2: Configure a test tenant {#deploy-step-2-configure-test-tenant}

Either create a new tenant or configure an existing tenant. Tag your test tenant(s) with **Tester** - this tenant will receive upgrades before any other configured tenants.

### Step 3: Configure some early adopter tenants and stable tenants {#deploy-step-3-configure-other-tenants}

*Optionally*, configure some external tenants as opting into early or stable releases to see the effect. Find or create some tenants and tag them as either **Stable** or **Early Adopter**.
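
Before deploying, it can help to prototype which tenants a tag filter will select. The combinational logic from the tag-based filters section (tags in the same tag set combine with `OR`, tag sets combine with `AND`) can be sketched in a few lines; the tenant names and tag values below are hypothetical:

```python
# Hypothetical tenants tagged with canonical "Tag Set Name/Tag Name" values.
tenants = {
    "Capital Animal Hospital": {"Upgrade Ring/Tester"},
    "Midtown Vet": {"Upgrade Ring/Early Adopter", "Hosting/Shared-1"},
    "Harbor Clinic": {"Upgrade Ring/Stable", "Hosting/Shared-1"},
}

def matches(tenant_tags, filter_tags):
    """Octopus filter semantics: OR within a tag set, AND across tag sets."""
    groups = {}
    for tag in filter_tags:
        groups.setdefault(tag.split("/", 1)[0], []).append(tag)
    return all(any(t in tenant_tags for t in group) for group in groups.values())

ring = ["Upgrade Ring/Tester", "Upgrade Ring/Early Adopter"]
print([name for name, tags in tenants.items() if matches(tags, ring)])
# Both filter tags are in the same tag set, so they combine with OR:
# Capital Animal Hospital and Midtown Vet are selected, Harbor Clinic is not.
```

Adding a tag from a second tag set (for example `Hosting/Shared-1`) narrows the result, because the two tag sets are then combined with `AND`.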
### Step 4: Deploy {#deploy-step-4-deployment}

Now it's time to deploy using tenant tags as a way to select multiple tenants easily. In this example, we will deploy version **1.0.1** to all tenants tagged with **Tester** who are connected to the **Test** environment. You can use multiple tags and complex tag queries to achieve other interesting scenarios.

:::figure
![A screenshot showing the deployment preview when selecting tenants tagged with Tester](/docs/img/tenants/images/multi-tenant-deploy-test.png)
:::

You can also use the project overview to deploy to groups of tenants by grouping the dashboard, selecting a release, and clicking the **Deploy all...** button.

:::figure
![A screenshot showing how you can use the project dashboard to select tenants to deploy using a tenant tag](/docs/img/tenants/images/multi-tenant-deploy-all.png)
:::

## Learn more

- [Tag sets](/docs/tenants/tag-sets) - General information about tag sets, types, and scopes
- [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1)

# octopus config set

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-config-set.md

Set will write the value for the given key to the Octopus CLI config file.

```text
Usage: octopus config set [key] [value] [flags]

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Delete instances

Source: https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/delete-instance.md

Use the Delete Instance command to delete an instance of the Octopus service.
**Delete Instance options**

```
Usage: octopus.server delete-instance [<options>]

Where [<options>] is any of:

  --instance=VALUE       Name of the instance to use
  --config=VALUE         Configuration file to use

Or one of the common options:

  --help                 Show detailed help for this command
```

## Basic example

This example deletes the Octopus Server instance on the machine named `MyNewInstance`:

```
octopus.server delete-instance --instance="MyNewInstance"
```

# octopus deployment-target

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target.md

Manage deployment targets in Octopus Deploy

```text
Usage: octopus deployment-target [command]

Available Commands:
  azure-web-app       Manage Azure Web App deployment targets
  cloud-region        Manage Cloud Region deployment targets
  delete              Delete a deployment target
  help                Help about any command
  kubernetes          Manage Kubernetes deployment targets
  list                List deployment targets
  listening-tentacle  Manage Listening Tentacle deployment targets
  polling-tentacle    Manage Polling Tentacle deployment targets
  ssh                 Manage SSH deployment targets
  view                View a deployment target

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations

Use "octopus deployment-target [command] --help" for more information about a command.
```

## Examples

:::div{.success}
**Octopus Samples instance**
Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus deployment-target list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target azure-web-app Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-azure-web-app.md Manage Azure Web App deployment targets in Octopus Deploy ```text Usage: octopus deployment-target azure-web-app [command] Available Commands: create Create an Azure Web App deployment target help Help about any command list List Azure Web App deployment targets view View an Azure Web App deployment target Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus deployment-target azure-web-app [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use, reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target azure-web-app list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target azure-web-app create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-azure-web-app-create.md Create an Azure Web App deployment target in Octopus Deploy ```text Usage: octopus deployment-target azure-web-app create [flags] Aliases: create, new Flags: --account string The name or ID of the Azure account --environment strings Choose at least one environment for the deployment target. -n, --name string A short, memorable, unique name for this Azure Web App. 
--resource-group string The resource group of the Azure Web App --role strings Choose at least one role that this deployment target will provide (use --tag for tag sets with validation). --tag strings Target tags in canonical format (TagSetName/TagName). --tenant strings Associate the deployment target with tenants --tenant-tag strings Associate the deployment target with tenant tags, should be in the format 'tag set name/tag name' --tenanted-mode string Choose the kind of deployments where this deployment target should be included. Default is 'untenanted' -w, --web Open in web browser --web-app string The name of the Azure Web App for this deployment target --web-app-slot string The name of the Azure Web App Slot for this deployment target --worker-pool string The worker pool for the deployment target, only required if not using the default worker pool Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use, reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target azure-web-app create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Rotating the Master Key Source: https://octopus.com/docs/administration/managing-infrastructure/rotate-master-key.md :::div{.hint} The ability to rotate the master key was added in **Octopus 2022.4**. ::: There are times you might want to rotate your master key, for example if you're worried about the existing master key being leaked. This guide walks you through this process. 
The rotation should have no impact once it's completed.

This guide assumes you still have access to your Master Key. You should be able to run the [`show-master-key` command](/docs/octopus-rest-api/octopus.server.exe-command-line/show-master-key/). If you've lost access to the Master Key, please refer to [Recovering after losing your Octopus Server and Master Key](/docs/administration/managing-infrastructure/lost-master-key).

## What is affected by the rotation

Octopus [encrypts important and sensitive data](/docs/security/data-encryption) using a Master Key. This includes:

- The Octopus Server X.509 certificate which is used for [Octopus to Tentacle communication](/docs/security/octopus-tentacle-communication) - your Tentacles will still trust your Octopus Server after the rotation.
- Sensitive variable values, wherever you have defined them.
- Sensitive values in your deployment processes, like the password for a custom IIS App Pool user account.
- Sensitive values in your deployment targets, like the password for creating [Offline Drops](/docs/infrastructure/deployment-targets/offline-package-drop).
- Sensitive values in your process templates, like the default value for a sensitive/password box parameter.

## Rotating the Master Key

### Step 1. Back up before you start

Make sure to [back up everything](/docs/administration/data/backup-and-restore) before you start this process. This should also include the `OctopusServer.config` file, which contains the old master key, in case the process fails.

### Step 2. Stop the server

Master key rotation currently only works when the Octopus Server is stopped, as the database will not be in a valid state during the rotation process. In an HA cluster, this means all the server nodes need to be stopped. You can do this with `Octopus.Server service --stop` on each server node. This also ensures anything in memory or in transit is persisted to disk before we start.

### Step 3. Rotate the master key

Once everything is backed up and the Octopus Server stopped, the steps are as follows.

1. Run `Octopus.Server rotate-master-key` and follow the prompts. This will guide you through the steps and generate a report at the end of the process. A new master key will also be written to its own file. In an HA setup, this command can be run from any server node.
1. If you have an HA setup, run `Octopus.Server set-master-key --masterKey=NEW_MASTER_KEY` on the other server nodes.
1. You can confirm the new master key is being used by running `Octopus.Server show-master-key`.
1. Run `Octopus.Server service --start` to start the Octopus Server running against the rotated database.

:::div{.warning}
**Please read the report carefully and get in touch with us if anything seems out of the ordinary. Back up your new Master Key!**
:::

Here's the beginning of an example report:

```text
================================================================================
ROTATE MASTER KEY REPORT
================================================================================
New Master Key: Gj3GfVf1gLn8kQA7wX4iXw==
Processed 10533 documents.
10009/10533 documents were updated
0/10533 documents were left unchanged due to errors
--------------------------------------------------------------------------------
Replaced Master Key
--------------------------------------------------------------------------------
Your Master Key has been replaced, and all of your sensitive values updated with new Master Key.
--------------------------------------------------------------------------------
10009/10533 documents were updated
--------------------------------------------------------------------------------
```

### Step 4. Post-rotation

If the rotation goes well, everything should work exactly the same as before. You may want to check:

- Tentacles are still healthy and can connect to Octopus Server.
- Each Server node is reporting as online, and that Octopus is fully functioning.
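
Master Keys are short base64 strings; the throwaway key shown in the example report decodes to 16 bytes. A quick, hedged sanity check that a backed-up key string is intact, well-formed base64 (it validates the format only, not that the key is the correct one for your instance):

```python
import base64

def check_master_key(key_b64, expected_len=16):
    """Confirm a backed-up Master Key string is valid base64 of the expected length."""
    raw = base64.b64decode(key_b64, validate=True)
    return len(raw) == expected_len

# The throwaway key from the example report above decodes to 16 bytes:
print(check_master_key("Gj3GfVf1gLn8kQA7wX4iXw=="))  # True
```

A truncated or corrupted copy of the key will either fail to decode or produce the wrong number of bytes, which is far better to discover now than during a disaster recovery.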
### Step 5. Back up your Octopus Server certificate and Master Key

If you haven't already, now is a great time to securely back up your Master Key and Octopus Server certificate!

#### Test your backup

This is also a good time to verify that your backup process worked and ensure you can restore quickly the next time a serious issue occurs. A backup isn't real unless you verify you can restore from it. Take your fresh Octopus backup and recently secured Master Key and attempt to restore your Octopus Server somewhere else to validate it will work when you need it to.

# Sync multiple instances

Source: https://octopus.com/docs/administration/sync-instances.md

Syncing instances involves copying projects and all required scaffolding data between Octopus Deploy instances with different environments, targets, tenants, or even variable values. Each instance has a separate database, storage, and URL. Keeping multiple instances in sync is a complex task involving dozens if not hundreds of decisions across all the projects. This guide will walk you through suitable scenarios, unsuitable scenarios, the tooling available, and how to design a syncing process.

:::div{.problem}
TL;DR: copying projects between instances should be done only when all other options have been exhausted. There is no provided tooling to support syncing instances with different environments, tenants, or variable values. Due to the number of decisions and business rules, you will have to create and maintain a custom syncing process. Before making this decision, reach out to [customersuccess@octopus.com](mailto:customersuccess@octopus.com) to see if there are alternatives.
:::

## Suitable scenarios

Split and sync instances only when Octopus lacks a critical feature to satisfy a company policy, industry regulation, or a business contract.
The use cases we've seen in the past are:

- A separate **Dev/Test** instance and a **Staging/Production** instance so developers can have unlimited access to make changes, but **Production** must be locked down because of a business contract.
- A primary **Dev/Test/Staging/Production** instance with an isolated **Production** only instance for a set of targets to satisfy a contract requiring an instance hosted in Azure Gov.
- A separate instance for a specific set of tenants. Like the above use case, except all the environments are the same; only the tenants are different.

The expectation is that the source instance is the source of truth and the destination instance(s) contain copies of that data. The syncing process will run periodically to ensure changes made on the source instance are added to the destination instance.

:::div{.hint}
If you wish to do a one-time split of an instance and have no desire to keep anything in sync afterwards, then we recommend the [Export/Import Projects](/docs/projects/export-import) feature.
:::

## Consider alternatives

As you will soon see, a syncing process is complex and requires constant care and maintenance. Even if we provided a built-in tool, you'd still need to monitor and maintain the process. Below are the common reasons we hear for splitting an instance. Please reach out to [customersuccess@octopus.com](mailto:customersuccess@octopus.com) if your use case is not mentioned and you'd like to discuss alternatives.

:::div{.hint}
We've been asked if splitting environments, tenants or deployment targets by space is a safer alternative. [Spaces](/docs/administration/spaces) are hard walls and do not allow the sharing of environments, projects, variable sets, step templates, script modules, deployment targets and more. For all intents and purposes, a space is a unique instance. Any problems you encounter when syncing instances will happen when trying to sync spaces.
:::

### Built-in role-based access control

In talking to users, the primary reason for splitting an instance is permissions. Before splitting an instance, research the built-in role-based access control, as it supports a variety of common permissions use cases.

- Developers can modify the deployment process, deploy to **Development**, **Test**, and **Staging** but not **Production**.
- System admins can modify deployment targets in Octopus and deploy to **Production**, but cannot modify a deployment process.
- An engineering team is permitted to modify the set of projects they own. All other projects are read-only.
- Release managers can modify the variables on a set of tenants assigned to them. All other tenants are read-only.

### Approval process

Another reason we hear about is needing an approval process for changes to the deployment process. Please see our [config as code feature](/docs/projects/version-control) as that integrates with git, which allows for branching and pull requests.

### Performance improvement

The final reason we hear about is to "speed up the deployment." We typically hear this when Octopus is located in one data center and deployment targets are located in a data center in another country or continent. That can lead to long package acquisition from the built-in repository and latency.

- If package acquisition is taking a long time to transfer to the targets, consider:
  - Enabling [delta compression for package transfers](/docs/deployments/packages/delta-compression-for-package-transfers) to reduce the amount of data to transfer.
  - Leveraging an external feed such as Artifactory, GitHub Packages, AWS CodeArtifact, or Feedz.io and configuring Octopus to download the packages directly from the external feeds.
- If there appears to be latency when running scripts on the Octopus Server to make database changes, run e2e tests, or any other similar task, then leverage [workers](/docs/infrastructure/workers).
Workers can execute tasks that don't need to run on individual deployment targets. They can be located in the same data center as your database or applications. ## Unsuitable Scenarios Do not split an instance and sync it for any of the following use cases. - You want an approval process for any changes to your deployment process. Please see our [config as code feature](/docs/projects/version-control) as that integrates with git. - You want to move a project from the default space to another space on the same instance (or different instance). Please see our documentation on our [Export/Import Projects feature](/docs/projects/export-import). - You want to create a test instance to test out upgrades or try out new processes. Please see our guide on [creating a test instance](/docs/administration/upgrading/guide/creating-test-instance) - You want to upgrade the underlying VM hosting Octopus Deploy from Windows Server 2012 to Windows Server 2019. Please see our guide on [moving the Octopus Server](/docs/administration/managing-infrastructure/moving-your-octopus/move-the-server). - You want to move the SQL Server database from SQL Server 2012 to SQL Server 2019. Please see our guide on [moving the Octopus Database](/docs/administration/managing-infrastructure/moving-your-octopus/move-the-server). - You want to migrate from self-hosted Octopus to Octopus Cloud. Please see our [migration guide](/docs/octopus-cloud/migrations/) on how to leverage the [Export/Import Projects feature](/docs/projects/export-import) to accomplish this. - You want to consolidate multiple Octopus Deploy instances into a single Octopus Deploy instance. Please see our documentation on our [Export/Import Projects feature](/docs/projects/export-import). ## Syncing is not cloning Syncing is not the same as cloning. Cloning an instance will result in an exact replica (or copy) of data from the source. 
In addition to having all the same targets, environments, variables, tenants, projects, etc., the unique identifiers stored in the Octopus database will be the same, including the Server thumbprint and database master key. Cloning is typically a one-time operation, such as standing up a new server.

Syncing instances involves copying projects and all required scaffolding data between Octopus Deploy instances with different environments, accounts, lifecycles, targets, tenants, or even variable values. Each instance will have different IDs, a different Server thumbprint, and a different database master key.

## Tools and features to avoid

Unfortunately, there is no first-class tooling available to support syncing two instances, due to the many decisions and business rules when working with different environments, tenants, variable values, etc. In the past, our users have attempted to repurpose the provided features and tooling to support their syncing process. However, they were not designed for syncing use cases; the result was often frustration because of the lack of customization, or hand-editing files causing corrupted projects.

### Migrator and Export/Import Project

The [Migrator](/docs/administration/data/data-migration/) and the [Export/Import Project](/docs/projects/export-import) feature were designed to migrate or clone a project to another instance (or space, for Export/Import Project). The primary use case for both tools is that a user wants to move a project to a new instance and deprecate the older instance. For example, when migrating from a self-hosted Octopus Server to Octopus Cloud.

The Migrator and Export/Import Project feature can be run multiple times for the same project. But they will ensure the source and destination instances match. There is no way to exclude specific environments, tenants, or any specific data you wish to keep separate. While it is possible to modify the JSON exported by those tools, such an approach is error-prone and unsupported.
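
If you do build a custom sync against the REST API, one recurring chore is removing instance-specific fields from a `GET` response before posting it to the destination. A minimal sketch follows; `Id`, `SpaceId`, and `Links` appear on most Octopus resources, but treat the exact field list (and the sample values) as assumptions to adjust per resource type:

```python
def strip_instance_fields(resource, skip=("Id", "SpaceId", "Links")):
    """Drop instance-specific fields from an exported resource before re-posting it."""
    return {k: v for k, v in resource.items() if k not in skip}

exported = {
    "Id": "Projects-41",   # hypothetical values for illustration only
    "SpaceId": "Spaces-1",
    "Name": "My Project",
    "Links": {"Self": "/api/Spaces-1/projects/Projects-41"},
}
print(strip_instance_fields(exported))  # {'Name': 'My Project'}
```

Any references to other resources (lifecycles, environments, accounts) must then be re-mapped to the destination instance's own IDs, which is where most of the business rules in a sync tool live.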
### Octopus CLI

The [Octopus CLI](/docs/octopus-rest-api/octopus-cli/) includes the [export](/docs/octopus-rest-api/octopus-cli/export/) and [import](/docs/octopus-rest-api/octopus-cli/import) commands. Those commands are deprecated and should not be used.

### Config as Code and Octopus Terraform Provider

Terraform uses the HashiCorp Configuration Language (HCL). The [Config as Code feature](/docs/projects/version-control) uses the Octopus Configuration Language (OCL), which is based on HCL. HCL does not support complex logic, which means you'd need a unique set of files per instance. To sync instances using these features, you'd need to use a comparison tool such as Beyond Compare to move changes between instances manually. Anything manual is error-prone and will eventually fail. You could write a tool to compare files between instances automatically and make the necessary modifications, but you will run into a lot of the same roadblocks described below, as you'll need to consider dependencies, environment mismatches, and more.

## Tooling to use

We recommend creating a custom tool that leverages the [Octopus Deploy REST API](/docs/octopus-rest-api), or one of the API wrappers, such as the [Octopus.Client .NET library](https://github.com/OctopusDeploy/OctopusClients), the [Octopus Go API Client](https://github.com/OctopusDeploy/go-octopusdeploy), or the [TypeScript API Client](https://github.com/OctopusDeploy/api-client.ts). We make that recommendation because, as you'll soon see, there are a lot of business rules and decisions to make. The Octopus team has written a sample PowerShell tool, [SpaceCloner](https://github.com/OctopusDeployLabs/SpaceCloner), that you can use as a reference or example for your syncing process. A lot of this documentation draws on lessons from writing that tool. While the SpaceCloner supports syncing instances with a known delta, we recommend using it as a guide.
It was created with specific use cases in mind and probably won't support your exact use case.

## Syncing process

If you determine that the best course of action is to sync projects across multiple Octopus Deploy instances, you will need to design a syncing process. While the actual business rules and decisions will vary between implementations, the core rules for any syncing process remain the same.

:::div{.warning}
As the syncing process requires the use of the Octopus Deploy REST API, or one of the API wrappers, you should be comfortable with the Octopus Deploy data model and API endpoint structure before starting.
:::

### Avoid mismatched versions

It is possible to take JSON data retrieved via a `GET` request on an instance running 2020.1, make some modifications, and then `POST` that data to an instance running 2021.3. But there is no guarantee the data model will be the same between versions. A new required property could have been added, a property type could have changed, or the model could have changed for any number of other reasons when there are over 12 months between releases. The risk of error is directly correlated to the delta between versions: the greater the delta, the greater the risk.

:::div{.hint}
In late 2020 an engineering effort was made to move the Octopus Deploy API controllers from NancyFX to ASP.NET. Since that conversion started, missing or additional fields that were previously tolerated now cause a 400 Bad Request error. Looking at the SpaceCloner code, you will see several invocations of an "add field if missing" method because of a model change.
:::

A good rule of thumb is to keep instances no more than one minor version apart, for example, the source instance running **Octopus 2021.2** and the destination instance running **Octopus 2021.3**. Ideally, all instances would be on the same `Major.Minor` version.
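A syncing tool can enforce that rule of thumb before doing any work by reading each server's version from the API root document and comparing the `Major.Minor` portions. The sketch below is illustrative, not part of any official tooling: the server URLs and API keys are placeholders, and it assumes the `/api` root document exposes a `Version` field.

```python
import json
import urllib.request


def major_minor(version: str) -> tuple:
    """Return the (major, minor) part of a version string like '2021.3.8275'."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)


def same_major_minor(source_version: str, destination_version: str) -> bool:
    """True when both instances are on the same Major.Minor release."""
    return major_minor(source_version) == major_minor(destination_version)


def server_version(server_url: str, api_key: str) -> str:
    """Read the server version from the API root document.

    Assumes the /api root returns JSON containing a 'Version' field.
    """
    request = urllib.request.Request(
        f"{server_url}/api", headers={"X-Octopus-ApiKey": api_key})
    with urllib.request.urlopen(request) as response:
        return json.load(response)["Version"]


# Example guard (placeholder URLs and keys):
# source = server_version("https://source.example.com", "API-XXXX")
# destination = server_version("https://destination.example.com", "API-YYYY")
# if not same_major_minor(source, destination):
#     raise SystemExit(f"Refusing to sync: {source} vs {destination}")
```

Running this check at the start of every sync turns a confusing mid-sync 400 Bad Request into a clear, early failure.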
Do not sync between major releases, for example, a source instance running **Octopus 2020.6** and a destination instance on **Octopus 2021.3**. If you run into unexpected 400 Bad Request errors, the typical remediation is to upgrade both instances to the same version.

### Have a single source of truth

It is much easier to sync everything in one direction. The source instance should remain the source instance every time the syncing process runs; it shouldn't be the source instance one day and the destination instance the next. When a conflict is found, it will be nearly impossible to know which instance is "right" and whether the change should be accepted. For example, both instances have a step added to the same deployment process; on one instance, it is a new manual intervention step, while on the other, it is a Run a Script step. Should both exist? Should only one be copied over? You'd need a comparison tool similar to Beyond Compare to reconcile this conflict. It's okay to have known differences between the instances, such as different environments, lifecycles, variable values, tenants, deployment targets, channels, and more. But when something new is added, such as a new variable or step, it should be done on one instance and synced to the other. It is hard enough to detect when something is "new". A one-way sync will help keep conflicts to a minimum and reduce complexity.

### Data to sync

Octopus Deploy is more than a deployment process and variables. A lot of scaffolding data is needed for everything to work correctly.
The syncing process should allow for the syncing of the following data:

- Infrastructure
  - Accounts
  - Environments
  - Worker Pools
- Library
  - Certificates
  - External Feeds
  - Lifecycles
  - Script Modules
  - Step Templates
  - Variable Sets
  - Tenant Tags
  - Packages
- Server Configuration
  - Teams
  - User Roles
- Tenants
- Project Groups
- Projects
  - Deployment Process
  - Runbooks and Runbook Processes
  - Variables
  - Channels
  - Settings

### Matching by name

Each instance will have different database identifiers. For example, Project "OctoFx" on the source instance can have an ID of `Projects-1234` while the destination instance's project ID is `Projects-7789`. That means all data matching between instances must be done by name. What makes that complex is that items such as lifecycles, environments, accounts, etc., are referred to by ID in projects, deployment process steps, and more. For example, a project has a default lifecycle. When syncing any project, the process will need to:

1. Translate the lifecycle ID on the project to the lifecycle name using data from the source instance.
2. Translate the lifecycle name to the lifecycle ID on the destination instance.
3. Update the project's default lifecycle ID before saving it on the destination instance.

That complexity is further exacerbated by the fact that some data is required, for example, a project's default lifecycle, while other data is not, for example, a step scoped to an environment.

### Data that must be an exact match

The following items must be an exact match between your instances. Otherwise, you'll get missing data errors, corrupted projects, 400 Bad Request errors, or unexpected results from deployments or runbook runs.

- Script Modules
- Step Templates
- Tenant Tags

### Data with the same name but different details

Most of the data referenced by your deployment and runbook processes has to exist, but the details of the data can be different.
For example, a deployment process references the worker pool **Ubuntu Worker Pool**. On the source instance that worker pool could have 5 EC2 instances in a West Coast region, while the destination instance could have 3 EC2 instances in a different region. As long as the worker pool **Ubuntu Worker Pool** exists in both instances with running workers, everything will work fine. The data that must exist but can have different details is:

- Infrastructure
  - Accounts: same account type, different credentials
  - Worker Pools: different workers
  - Machine Policies: different health check frequency, Tentacle, and Calamari update settings
- Library
  - Certificates: same certificate type, different cert
  - External Feeds: same feed type, different credentials
  - Lifecycles: different phases and retention policies
- Server Configuration
  - Teams: user role mapping, different members
- Project Groups: you don't have to sync all the projects in a project group; only the project group has to exist.

Variables can be used instead of directly referencing that data in a deployment or runbook process. The deployment process will work as long as the variable exists, is of the correct type, and is assigned to something that exists on the destination instance. For example, a variable named `Project.AWS.Account` refers to an account called `Dev/Test Account` on the source instance. That same variable can refer to the `Staging/Prod Account` on the destination instance. Items that can be variables are:

- Infrastructure
  - Accounts
  - Worker Pools
- Library
  - Certificates

### Data not to sync

The astute reader will note that the above list of data items is not ALL the data stored in Octopus Deploy. It is possible to sync the following data, though we recommend against it.

- Infrastructure
  - Deployment Targets
  - Workers
  - Machine Proxies
- System Configuration
  - System Settings (server folders, SMTP, etc.)
  - External Auth Providers
  - Issue Trackers
  - Users
  - Subscriptions
  - System Level Teams

One of the primary reasons for having separate instances is to isolate deployment targets and workers. Besides, each Tentacle would have to be configured to trust all the instances. Items in the system configuration list are set and forgotten, or set and updated once a year. Most of that data requires admin-level permissions. It is too risky to include system configuration in a syncing process.

### Syncing Packages

Package sizes can be substantial; we've seen some that are 1 GB+ in size. Syncing files of that size between instances can take a tremendous amount of time. We recommend excluding them from your syncing process and instead doing one of the following:

- Switch over to an external feed such as Feedz.io, Artifactory, Azure Artifacts, etc.
- Update your build server integration to push to multiple instances.

### Data you cannot sync

The Octopus Deploy REST API is powerful, but it has intentional limits in place for security and auditability. Namely, it doesn't return sensitive information or allow you to modify audit history. Because of that, you cannot sync:

- Sensitive Data
  - Sensitive Variables
  - External Feed Credentials
  - User Passwords
  - Infrastructure Account Credentials
- Audit Data
  - Releases
  - Runbook Snapshots
  - Deployments
  - Runbook Runs
  - Audit history
  - Task history

For sensitive data, decrypting and returning that data via the API has never been built into Octopus Deploy. We have considered it, but there are several security concerns we'd want to see met or mitigated before adding such functionality. For auditability, Octopus Deploy prevents the updating of any auditable or snapshot data and limits how that data is created. For example, the release endpoint accepts a `POST` command.
In addition to creating a release, the variables, package versions, and deployment process are snapshotted using data as it exists at that point in time. That prevents syncing a release created six months ago via the Octopus Deploy REST API. If you did, it wouldn't use the variables and deployment process from six months ago; it would use the variables and deployment process as they exist right now on the destination instance. Deployments and runbook runs have the same limitation: issuing a `POST` command to those endpoints will trigger a deployment or a runbook run. You cannot copy the task history, artifacts, or task logs via the Octopus Deploy REST API. That is because the deployment or runbook run you are attempting to sync _did not happen_ on the destination instance, only the source instance. Not only that, the associated releases and runbook snapshots will have different variable values and deployment processes. What this comes back to is auditability: if data can be modified by an outside process, it is not auditable.

### Syncing order

In our experience, it is far easier to group data by type and sync each group together. For example, sync all the Project Groups before syncing Projects. That requires an order of precedence in syncing due to data dependencies.
That order of precedence is:

- No dependencies, can be done in any order
  - Environments
  - Project Groups
  - Tenant Tags
  - External Feeds
  - Teams (exclude any scoped permissions on creation)
  - Machine Policies
  - Worker Pools (not workers)
- Dependencies, order matters
  - Infrastructure Accounts
  - Step Templates
  - Script Modules
  - Lifecycles
  - Tenants (Tenant name only)
  - Workers
  - Targets
  - Certificates
  - Variable Sets
  - Projects
    - Project Settings
    - Channels (no channel version rules)
    - Deployment Process
    - Runbooks and Runbook Processes
    - Variables
    - Channel Version Rules
    - Release Version Strategy
    - Built-in package repository trigger
    - Logo
  - Tenants
    - Tenant / Project relationship
    - Variables
  - Team Permissions

### Keep a log of data to clean up

The syncing process will encounter data that, because of a limitation, it can't sync exactly, for example, a sensitive variable. The variable name must exist on both the source and destination instances because it is referenced by the Deployment or Runbook process, but the value will be different. If the sensitive variable name doesn't exist on the destination instance, the process should create the variable but insert dummy data. When that occurs, a log entry should be written so you know that data needs to be cleaned up once the syncing process is complete.

**Please note:** The syncing process should only do that for new data. Any existing data it can't match and sync exactly should be left alone, for example, a sensitive variable that already exists on the destination instance.

### Log everything

Log everything: API calls, before and after logic gates, and lookups. There is no such thing as too much logging with a syncing process. The syncing process is translating JSON data from one instance to another, and it is going to fail in strange and unexpected ways. There are many business rules, meaning there are a lot of potential bugs.
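The dummy-data-plus-log rule for sensitive variables described above can be sketched as follows. The variable shape (`Name`, `Value`, `IsSensitive`) mirrors the Octopus variable set JSON, but the function name and placeholder text are illustrative, not part of any official tooling.

```python
PLACEHOLDER = "REPLACE-ME-AFTER-SYNC"  # illustrative dummy value


def merge_sensitive_variables(source_variables, destination_variables, cleanup_log):
    """Add missing sensitive variables to the destination with dummy data.

    New sensitive variables get a placeholder value and a clean-up log
    entry; anything that already exists on the destination is left alone.
    """
    existing_names = {variable["Name"] for variable in destination_variables}
    merged = list(destination_variables)
    for variable in source_variables:
        if not variable.get("IsSensitive"):
            continue  # only sensitive variables need special handling here
        if variable["Name"] in existing_names:
            continue  # existing data is never overwritten
        merged.append({"Name": variable["Name"],
                       "Value": PLACEHOLDER,
                       "IsSensitive": True})
        cleanup_log.append(
            f"Sensitive variable '{variable['Name']}' was created with a "
            f"placeholder value and needs a real value on the destination.")
    return merged
```

Writing the clean-up entries to the same log as the rest of the sync run makes it easy to produce a post-sync checklist of values to fill in manually.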
## Handling different environments

There is a ripple effect when the source and destination instances have different environments. These are the three possible use cases for instances with different environments:

- The destination instance has a subset of the source instance's environments. For example, **Dev/Test/Staging/Production** on the source instance and **Staging/Production** on the destination.
- The destination and source instances have entirely different sets of environments. For example, **Dev/Test** on the source instance and **Staging/Production** on the destination.
- A combination of both use cases. For example, **Dev/Test/Staging** on the source and **Staging/Production** on the destination.

One or more environments are excluded from the syncing process in all those use cases. Consider all the data that can reference an environment:

- Infrastructure
  - Accounts
  - Deployment Targets
- Library
  - Certificates
  - Lifecycles
  - Variable Sets
- Tenants
- Projects
  - Channels
  - Deployment Process Steps
  - Runbook Process Steps
  - Variables

### Lifecycles and Channels

Different environments per instance mean different lifecycles, which can mean different channels, and channels can be scoped to deployment process steps and variables. For example, there are two lifecycles on the source instance:

- Default lifecycle with **Development**, **Test**, **Staging** and **Production**.
- QA lifecycle with **Development** and **Test**.

The project to be synced also has two channels on the source instance:

- Default channel
- QA channel
  - A manual intervention step is scoped to only run on the QA channel.
  - A variable value is scoped to the QA channel.

The destination instance has **Staging** and **Production**. It makes sense to include the Default lifecycle and channel and exclude the QA lifecycle and channel in the sync for this scenario. Excluding the QA lifecycle and channel has a cascading effect.

- What should happen to the manual intervention step scoped to the QA channel?
In this scenario, it makes sense to exclude it. However, what if the step ran an E2E test that QA wants to run on the other instance?
- What should happen to the variable value scoped to the QA channel? If the manual intervention step is the only thing that uses it, it makes sense to exclude it, but if other steps use it, should that value be synced across, or something else?

### Tenants

In Octopus Deploy, a Tenant is tied to both a Project and 1 to N Environments. There are two data items to consider when mixing tenants and environments:

- Not all tenants should be synced between instances. It is common to have a different list of tenants on each instance. There could be some overlap (typically with test tenants) or no overlap.
- The environments a tenant is scoped to for a project affect the Tenant Variables to copy over.

For example, the source instance has **Development**, **Test**, **Staging**, and **Production**, and the destination instance has **Staging** and **Production**. On the source instance, the Tenant **Internal** is tied to all four environments, and another tenant, **Development Team A**, is tied to **Development** and **Test**. For this scenario, should both tenants be cloned? It makes sense to sync the **Internal** Tenant and tie it to **Staging** and **Production** on the destination instance, and not sync **Development Team A**. For the **Internal** Tenant, should all the variables in **Staging** and **Production** for that Tenant be copied over as well? Or should you have to manually enter the values, as they'll be different between instances?
Should that step be included on the destination instance if it has **Staging** and **Production**? There are valid use cases for both yes and no.

- Yes, QA runs integration tests in **Staging**. It should be cloned but scoped to run in **Staging**.
- No, QA only runs integration tests in **Test**.
- Yes, QA needs to sign off on each release before promoting to **Production**. It should be cloned but scoped to only run in **Staging**.
- Yes, QA needs to sign off on each release in each environment after **Development**. It should be cloned but configured to run in any environment.

### Accounts and Certificates

Both Accounts and Certificates are referenced by variables in either projects or variable sets. They can be scoped to environments directly or by environment variable scoping.

- Best practices recommend different AWS, GCP, and Azure accounts per environment or instance, for example, an AWS account for **Non-Production** workloads and another account for **Production**.
- Certificates can either be a wildcard, for example, `*.octopusdemos.app`, or tied to a specific domain and subdomain, for example, `mail.octopusdemos.app`. We typically see either a different domain per environment, `testoctopusdemos.app` and `octopusdemos.app`, or different subdomains, `test.octopusdemos.app` and `octopusdemos.app`.

In most cases, it doesn't make much sense to sync Accounts and Certificates. But the variables referencing the Accounts and Certificates are used in Deployment and Runbook processes. You have a few options:

- Create an Account or Certificate with the same name on each instance but different details. You don't have to modify any variables.
- Re-use the same variable name but associate it with different Accounts or Certificates. The syncing process will only create new variables and insert dummy data.

### Variable scoping

Syncing variables between instances with different environments is very complex due to scoping and variable types.
For all the examples below, the source instance has **Development** and **Test** while the destination instance has **Staging** and **Production**. The source instance has the following variables.

| Variable Name | Value | Scope |
| ------------------------- | --------------------------------------- | ----------- |
| Application.Database.Name | `OctoFx-Dev` | Development |
| | `OctoFX-Test` | Test |
| ConnectionString | `Database=#{Application.Database.Name}` | |

Syncing `ConnectionString` as-is to the destination instance makes sense as it has no scoping. You'll need to sync over the variable `Application.Database.Name` as `ConnectionString` references it. But what about the values? Those values are tied to environments that do not exist on the destination instance. There are a couple of options.

Clone the variable values as-is with no environment scoping and then change the values after the sync. That is useful when the destination instance has unique values per environment. The initial sync would look like this:

| Variable Name | Value | Scope |
| ------------------------- | --------------------------------------- | ----- |
| Application.Database.Name | `OctoFx-Dev` | |
| | `OctoFX-Test` | |
| ConnectionString | `Database=#{Application.Database.Name}` | |

The downside to the above option is knowing which values to clean up. Another option is to copy the variable name with text indicating that it needs replacing, and exclude any scoping. The initial sync would look like this:

| Variable Name | Value | Scope |
| ------------------------- | --------------------------------------- | ----- |
| Application.Database.Name | `Replace Me` | |
| ConnectionString | `Database=#{Application.Database.Name}` | |

That only covers the initial sync.
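The second option, copying variable names with a "Replace Me" marker and dropping the environment scoping, can be sketched like this. The simplified variable shape (`Name`, `Value`, `Scope`) is illustrative rather than the full Octopus variable JSON.

```python
def initial_variable_sync(source_variables):
    """First-sync transform for variables scoped to missing environments.

    Unscoped values are copied as-is, while environment-scoped values
    collapse to one 'Replace Me' placeholder per variable name, with the
    scoping removed.
    """
    destination_variables = []
    placeholder_added = set()
    for variable in source_variables:
        if variable.get("Scope"):
            # Scoped values reference environments that don't exist on the
            # destination, so emit a single placeholder per variable name.
            if variable["Name"] not in placeholder_added:
                placeholder_added.add(variable["Name"])
                destination_variables.append(
                    {"Name": variable["Name"], "Value": "Replace Me", "Scope": {}})
        else:
            # Unscoped values are safe to copy verbatim.
            destination_variables.append(
                {"Name": variable["Name"], "Value": variable["Value"], "Scope": {}})
    return destination_variables
```

Fed the source table above, this produces exactly the second initial-sync table: one `Replace Me` entry for `Application.Database.Name` and the untouched `ConnectionString` value.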
Recurring syncs will add additional challenges, as there will be a combination of:

- Changed variable values
- Changed variable scoping on existing variables
- Additional variable values
- New variables
- Unchanged variables

The syncing process will need to compare the variable lists and determine the status of each variable. The hardest challenge to solve is calculating which variables are "new" vs. existing with updated scoping. That is because a variable can be scoped to 0 to N of these items:

- Environments
- Channels
- Process Owner (deployment process or runbooks)
- Deployment Process Steps
- Roles
- Deployment Targets
- Tenant Tags

Some of those items (Tenant Tags, Roles, Process Owner) will be _exactly_ alike between instances, while others (Environments, Channels, Deployment Process Steps, Deployment Targets) will be different. For example, the variables on the source instance were changed from:

| Variable Name | Value | Scope |
| ------------------------- | --------------------------------------- | ----------- |
| Application.Database.Name | `OctoFx-Dev` | Development |
| | `OctoFX-Test` | Test |
| ConnectionString | `Database=#{Application.Database.Name}` | |

To this:

| Variable Name | Value | Scope |
| ------------------------- | ------------------------------------------ | ----------------- |
| Application.Database.Name | `OctoFx-#{Octopus.Environment.Name}` | |
| ConnectionString | `Database=#{Application.Database.Name}` | SQL-Server (role) |
| | `Data Source=#{Application.Database.Name}` | Oracle (role) |

The desired result on the destination instance will be:

- Leave `Application.Database.Name` alone.
- Add the scoping of `SQL-Server` to the `ConnectionString` variable with the value `Database=#{Application.Database.Name}`.
- Add an additional `ConnectionString` option with the value set to `Data Source=#{Application.Database.Name}` and the scoping set to `Oracle`.

Variable syncing is so complex because the rules for each project and variable scoping mismatch are different.
The syncing process will need some configurable logic in it:

- Should you ever overwrite values on the destination instance?
- How to handle scoping mismatches.
  - Options:
    - Skip the variable unless the scoping is an exact match. For example, a variable on Dev/Test is scoped to the role `OctoFX-Web`; any variable with the same name would be ignored unless it is also scoped to `OctoFX-Web`.
    - Skip the variable unless the scoping is a partial match. For example, a variable is scoped to **Staging** and **Production** on a source instance with **Dev/Test/Staging/Prod**. The destination instance only has **Production**. Include the variable value because it is scoped to one of the environments.
    - Ignore the mismatch. Clone the value regardless of the scoping.
    - Ignore the mismatch on new variables but leave existing ones alone. If the variable is new, ignore the mismatch; otherwise, leave everything as is.
  - Potential scoping mismatches:
    - Environments: guaranteed to be different.
    - Channels: will most likely be different.
    - Process Owners: will be the same (unless runbooks are not synced).
    - Deployment Process Steps: will most likely be the same (depends on the rules for syncing the deployment process).
    - Roles: will be the same.
    - Deployment Targets: guaranteed to be different.
    - Tenant Tags: will most likely be the same.

## Handling different Tenants

It is common to have a different list of Tenants on each Octopus Deploy instance. Unlike Environments, a different Tenant list is much easier to support in the syncing process. Tenants are directly tied to the following in Octopus Deploy:

- Deployment Targets
- Accounts
- Certificates
- Team User Role Scoping
- Projects

### Deployment Targets

If a Tenant isn't synced between instances, chances are the deployment targets associated with that Tenant do not need to be synced either. Deployment and Runbook processes do not reference deployment targets directly. Instead, they are referenced by the roles tied to the deployment target.
The one thing to watch out for is that it is possible to scope variables to specific deployment targets. In that case, the best option is to skip the variable value unless there is a partial match. For example, `Application.Database.Name` has the value `OctoFX-Tenant` scoped to `Machine1`, `Machine2`, and `Machine3` on the source instance. That value should be skipped unless the destination instance has one of those deployment targets. ### Accounts and Certificates Both Accounts and Certificates are referenced by variables in either projects or variable sets. It is common to have different Accounts and Certificates per Tenant. In most cases, it doesn't make much sense to sync Accounts and Certificates. But the variables referencing the Accounts and Certificates are used in Deployment and Runbook processes. In this case, the best option is to re-use the same variable name but associate it with different Accounts or Certificates. ### Team User Role scoping The Team user role scoping is used for permissions. For example, a team has access to edit a specific set of Tenants. In this case, it makes sense to exclude any Tenants not found on the destination instance from the team user role scope. It won't hurt anything as that Tenant doesn't exist. Most likely, the list of Tenants that particular team has permission to edit will be very different. ### Projects The Tenant/Project relationship indicates that Tenant can deploy releases from that project to a specific set of Environments. There is no impact if the Tenant doesn't exist on the destination instance. The Tenant/Project relationship won't exist. ## Handling different Deployment Targets It is common to have a different list of Deployment Targets on each Octopus Deploy instance. Deployment Targets are not referenced anywhere directly within Octopus Deploy outside of variable scoping. All other references are indirect; the role associated with a Deployment Target is how it is tied to a Deployment or Runbook process. 
A Deployment or Runbook process can reference a role not associated with any Deployment Targets. Each project has a setting that tells Octopus what to do when that happens: fail the deployment, or skip the steps associated with that role. For variable scoping, the best option is to skip the variable value unless there is a partial match. For example, `Application.Database.Name` has the value `OctoFX-Tenant` scoped to `Machine1`, `Machine2`, and `Machine3` on the source instance. That value should be skipped unless the destination instance has one of those deployment targets.

## Ongoing maintenance

Each new major/minor release of Octopus Deploy will require significant testing of your syncing process, along with fixing any bugs caused by unexpected edge cases. As you can see from all the rules and decisions above, this is a complex problem requiring time and money to solve. We only recommend splitting and syncing two instances when no other options are available.

# octopus deployment-target azure-web-app list

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-azure-web-app-list.md

List Azure Web App deployment targets in Octopus Deploy

```text
Usage:
  octopus deployment-target azure-web-app list [flags]

Aliases:
  list, ls

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus deployment-target azure-web-app list octopus deployment-target azure-web-app ls ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # NPM feeds Source: https://octopus.com/docs/packaging-applications/package-repositories/npm-feeds.md :::div{.success} NPM feeds are supported from version **Octopus 2026.1.7997**. ::: NPM repositories can be configured as an external feed in Octopus Deploy, allowing you to consume packages from npmjs.com or private NPM registries such as Nexus Repository Manager and JFrog Artifactory. ## Adding an external NPM feed The following steps can be followed to add an external NPM feed. 1. Navigate to **Deploy ➜ Manage ➜ External Feeds** and click the **ADD FEED** button. 2. Select **NPM Feed** from the **Feed Type** field. 3. Enter a descriptive name for the feed in the **Feed name** field. 4. In the **Feed URL** field, enter the URL of the NPM registry. Common examples include: - Public NPM registry: `https://registry.npmjs.org` - Nexus Repository Manager: `https://your-nexus-server/repository/npm-hosted/` - JFrog Artifactory: `https://your-artifactory-server/artifactory/api/npm/npm-repo/` 5. If the NPM registry requires authentication, enter the credentials in the **Feed login** and **Feed password** fields. For token-based authentication (common with private registries), use the token as the password. 6. The **Download attempts** field defines the number of times that Octopus will attempt to download a package from the NPM registry. Failed attempts will wait for the number of seconds defined in the **Download retry backoff** field before attempting to download the package again. 7. Click **Save and test** to verify the feed configuration. 
:::figure ![NPM Feed configuration dialog showing feed name, URL, and authentication fields](/docs/img/packaging-applications/package-repositories/images/npm-add-external-feed.png) ::: ## Authentication NPM feeds support several authentication methods: ### Public registries For public registries like npmjs.com, authentication is optional. You can leave the credentials fields blank to access public packages. ### Username/Password Most NPM registries support basic authentication with username and password. Enter these directly into the **Feed login** and **Feed password** fields. ### Nexus Repository Manager For Nexus repositories: 1. Use your Nexus username in the **Feed login** field. 2. Use your Nexus password in the **Feed password** field. 3. Alternatively, you can use an [NPM Bearer Token](https://help.sonatype.com/repomanager3/nexus-repository-administration/user-authentication/user-tokens) generated from Nexus. ### JFrog Artifactory For Artifactory repositories: 1. Use your Artifactory username in the **Feed login** field. 2. In the **Feed password** field, you can use either: - Your Artifactory password - An [Access Token](https://jfrog.com/help/r/jfrog-platform-administration-documentation/access-tokens) generated from Artifactory - An API Key (if enabled in your Artifactory instance) ## Referencing NPM packages When referencing an NPM package in Octopus Deploy, use the package name as it appears in the NPM registry. For scoped packages, include the scope in the package name. Examples: - Unscoped package: `express` - Scoped package: `@octopusdeploy/example-package` - Organization scoped: `@myorg/my-package` ## Versioning with NPM feeds NPM packages use [semantic versioning (SemVer)](https://semver.org/). Octopus Deploy supports the standard SemVer format: `MAJOR.MINOR.PATCH`. 
Pre-release versions are also supported, following the SemVer specification with identifiers such as:

- `1.0.0-alpha`
- `1.0.0-beta.1`
- `1.0.0-rc.2`

## Testing an NPM feed

After adding an NPM feed, you can verify it's working correctly:

1. Click the **TEST** button on the feed configuration page.
2. Search for a known package in your NPM registry.
3. Verify that packages are displayed and version information is correct.

:::figure
![NPM Feed test page displaying search results with package names and versions](/docs/img/packaging-applications/package-repositories/images/npm-search-packages.png)
:::

## Troubleshooting NPM feeds

### Connection issues

If you cannot connect to your NPM registry:

1. Verify the feed URL is correct and accessible from the Octopus Server.
2. Check that authentication credentials are valid.
3. Ensure any required network access (firewall rules, proxy settings) is configured.
4. For Nexus or Artifactory, verify the repository is online and the repository path is correct.

### Authentication failures

If authentication is failing:

1. Confirm your credentials haven't expired.
2. For Artifactory, ensure your API key or access token has the necessary permissions.
3. For Nexus, verify that the npm Bearer Token realm is enabled if using token authentication.
4. Test authentication using the NPM CLI with the same credentials:

```bash
npm login --registry=https://your-registry-url
npm view package-name
```

### Package not found

If a package cannot be found:

1. Verify the package name is spelled correctly, including any scope.
2. Confirm the package exists in the registry and isn't private (if you're using anonymous access).
3. For scoped packages, ensure you're using the full package name including the `@scope/` prefix.

### Performance considerations

For large NPM registries or when dealing with many packages:

1. Consider using a caching proxy or mirror closer to your Octopus Server.
2.
Adjust the **Download attempts** and **Download retry backoff** settings if you experience timeouts.
3. Monitor network bandwidth if packages are large or frequently downloaded.

## Learn more

- [NPM documentation](https://docs.npmjs.com/)
- [Working with scoped packages](https://docs.npmjs.com/cli/v8/using-npm/scope)
- [About NPM registry](https://docs.npmjs.com/about-the-public-npm-registry)

# octopus deployment-target azure-web-app view

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-azure-web-app-view.md

View an Azure Web App deployment target in Octopus Deploy

```text
Usage:
  octopus deployment-target azure-web-app view { | } [flags]

Flags:
  -w, --web   Open in web browser

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus deployment-target azure-web-app view 'Shop Api'
octopus deployment-target azure-web-app view Machines-100
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# octopus deployment-target cloud-region

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-cloud-region.md

Manage Cloud Region deployment targets in Octopus Deploy

```text
Usage:
  octopus deployment-target cloud-region [command]

Available Commands:
  create      Create a Cloud Region deployment target
  help        Help about any command
  list        List Cloud Region deployment targets
  view        View a Cloud Region deployment target

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations

Use "octopus deployment-target cloud-region [command] --help" for more information about a command.
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus deployment-target cloud-region list
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# octopus deployment-target cloud-region create

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-cloud-region-create.md

Create a Cloud Region deployment target in Octopus Deploy

```text
Usage:
  octopus deployment-target cloud-region create [flags]

Aliases:
  create, new

Flags:
      --environment strings    Choose at least one environment for the deployment target.
  -n, --name string            A short, memorable, unique name for this Cloud Region.
      --role strings           Choose at least one role that this deployment target will provide (use --tag for tag sets with validation).
      --tag strings            Target tags in canonical format (TagSetName/TagName).
      --tenant strings         Associate the deployment target with tenants
      --tenant-tag strings     Associate the deployment target with tenant tags, should be in the format 'tag set name/tag name'
      --tenanted-mode string   Choose the kind of deployments where this deployment target should be included. Default is 'untenanted'
  -w, --web                    Open in web browser
      --worker-pool string     The worker pool for the deployment target, only required if not using the default worker pool

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus deployment-target cloud-region create
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# octopus deployment-target cloud-region list

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-cloud-region-list.md

List Cloud Region deployment targets in Octopus Deploy

```text
Usage:
  octopus deployment-target cloud-region list [flags]

Aliases:
  list, ls

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus deployment-target cloud-region list
octopus deployment-target cloud-region ls
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Auto Scaling High Availability Nodes

Source: https://octopus.com/docs/administration/high-availability/auto-scaling-high-availability-nodes.md

Cloud providers, such as AWS and Azure, provide the ability to scale out and scale in virtual machines automatically. It's possible to leverage that technology to automatically add and remove nodes from your Octopus High Availability cluster, but there are a few pitfalls to note.

:::div{.warning}
At this time, we don't recommend auto-scaling if you are using polling tentacles. Polling tentacles must poll _all_ the nodes of your High Availability cluster. That requires [additional configuration](/docs/administration/high-availability/polling-tentacles-with-ha).
Attempting to perform that additional configuration using auto-scaling can result in frustration and errors.
:::

## Adding new nodes with Scale-out events

A scale-out event is when auto-scaling technology decides it is time to create a new virtual machine via a schedule or a metric-based trigger. Octopus High Availability is designed for new nodes to come online at random intervals. When you create a new node, an entry is added to the `OctopusServerNodes` table with a default task cap of `5`. That node will start picking up tasks to process within 60 seconds. A scale-out event is treated no differently than a person manually creating a VM and configuring it as a new node via the UI.

The sections below will walk you through _how_ to automate adding new nodes via a script.

### Downloading Octopus Deploy

All nodes in the Octopus High Availability cluster must be running the same version of Octopus Deploy. The server version is returned from the API by going to `[ServerURL]/api`, for example, `https://samples.octopus.app/api`.

:::figure
![the version number from the api](/docs/img/administration/high-availability/configure/images/retrieve-version-from-api.png)
:::

:::div{.hint}
Unlike most API calls, the `/api` endpoint does not require an API key.
:::

Once you have the version, you can download it from the Octopus Deploy website (or use [Chocolatey](https://chocolatey.org)). The URL to use is `https://download.octopusdeploy.com/octopus/Octopus.[Version]-x64.msi`, for example: `https://download.octopusdeploy.com/octopus/Octopus.2021.2.7650-x64.msi`.

Below is a sample script to download and install Octopus Deploy.
PowerShell (MSI)

```powershell
$yourInstanceUrl = "https://samples.octopus.app"
$downloadLocation = "C:\Temp"

$apiInfo = Invoke-RestMethod "$yourInstanceUrl/api"
$versionToDownload = $apiInfo.Version

$msiFileName = "Octopus.$versionToDownload-x64.msi"
$downloadUrl = "https://download.octopusdeploy.com/octopus/$msiFileName"
$downloadFileName = "$downloadLocation\$msiFileName"

## BITS transfer is much faster than Invoke-RestMethod or Invoke-WebRequest for downloading files
Write-Output "Downloading $downloadUrl to $downloadFileName"
Start-BitsTransfer -Source $downloadUrl -Destination $downloadFileName

$msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i $downloadFileName /quiet" -Wait -PassThru).ExitCode
Write-Output "Server MSI installer returned exit code $msiExitCode"
```
PowerShell (Chocolatey)

```powershell
$yourInstanceUrl = "https://samples.octopus.app"

$apiInfo = Invoke-RestMethod "$yourInstanceUrl/api"
$versionToDownload = $apiInfo.Version

choco install octopusdeploy --version=$versionToDownload -y
```
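If you orchestrate provisioning from a non-Windows control host, the same version lookup and URL construction can be sketched in a POSIX shell. The instance URL is a placeholder, and the live lookup (which assumes `curl` and `jq` are available) is shown commented out:

```shell
#!/bin/sh
# Sketch: derive the MSI download URL from the version your server reports.
# The URL pattern is https://download.octopusdeploy.com/octopus/Octopus.[Version]-x64.msi
instance_url="https://samples.octopus.app"  # placeholder

msi_url() {
  printf 'https://download.octopusdeploy.com/octopus/Octopus.%s-x64.msi\n' "$1"
}

# Live lookup (requires curl and jq):
# version=$(curl -s "$instance_url/api" | jq -r '.Version')
msi_url "2021.2.7650"
```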
### Configuring the new node

After Octopus Deploy is installed, you'll need to configure it to point to your high availability cluster. To configure the node you'll need:

- Octopus Master Key
- Database Server Name
- Database Name
- Database credentials (a domain service account or a SQL login)

The shared folder settings for BLOB storage are stored in the database.

:::div{.hint}
If you are using a mapped network drive, you'll need to configure that prior to configuring Octopus Deploy.
:::
PowerShell (With Active Directory)

```powershell
$masterKey = "YOUR MASTER KEY"
$databaseServer = "YOUR DATABASE SERVER"
$databaseName = "YOUR DATABASE NAME"
$userName = "YOUR DOMAIN SERVICE ACCOUNT USER NAME"
$password = "YOUR DOMAIN SERVICE ACCOUNT PASSWORD"
$taskCapSize = "5" ## set this to 0 for UI-only nodes!

## Add your network mapping script here!

& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" create-instance --instance "Default" --config "C:\Octopus\OctopusServer.config"
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" database --instance "Default" --masterKey "$masterKey" --connectionString "Data Source=$databaseServer;Initial Catalog=$databaseName;Integrated Security=True;"
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" configure --instance "Default" --webForceSSL "False" --webListenPrefixes "http://localhost:80/" --commsListenPort "10943"
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" service --instance "Default" --stop
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" service --instance "Default" --user "$userName" --password "$password" --install --reconfigure --start
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" node --instance "Default" --taskCap=$taskCapSize
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" service --instance "Default" --restart
```
PowerShell (No Active Directory)

```powershell
$masterKey = "YOUR MASTER KEY"
$databaseServer = "YOUR DATABASE SERVER"
$databaseName = "YOUR DATABASE NAME"
$userName = "YOUR DB USER NAME"
$password = "YOUR DB PASSWORD"
$taskCapSize = "5" ## set this to 0 for UI-only nodes!

## Add your network mapping script here!

& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" create-instance --instance "Default" --config "C:\Octopus\OctopusServer.config"
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" database --instance "Default" --masterKey "$masterKey" --connectionString "Data Source=$databaseServer;Initial Catalog=$databaseName;Integrated Security=False;User ID=$userName;Password=$password"
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" configure --instance "Default" --webForceSSL "False" --webListenPrefixes "http://localhost:80/" --commsListenPort "10943"
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" service --instance "Default" --stop
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" service --instance "Default" --user "$userName" --password "$password" --install --reconfigure --start
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" node --instance "Default" --taskCap=$taskCapSize
& "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" service --instance "Default" --restart
```
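After the final `service --restart`, the node can take a short while before it answers HTTP requests. When scripting provisioning end to end, a small retry helper avoids racing the service; a sketch (the health-check URL is whatever you configured as the listen prefix):

```shell
#!/bin/sh
# Sketch: retry a command until it succeeds or the attempt budget runs out.
wait_for() {
  tries=$1; shift
  attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$tries" ]; then
      return 1
    fi
    sleep 1
  done
}

# Hypothetical usage once the node is configured:
# wait_for 30 curl -fsS "http://localhost:80/api" > /dev/null
```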
### Potential pitfalls

Generally, adding a new node to an Octopus High Availability cluster will fail for one of the following reasons:

1. Unable to connect to the database.
1. Unable to connect to the shared file storage.
1. Unable to connect to listening tentacles (both targets and workers).
1. Unable to connect to cloud providers.

If you are going to use auto-scaling technologies, we recommend adding each node with the task cap set to 0 at first and not adding it to a load balancer. When the task cap is 0, the node will not pick up any tasks. That will allow you to test your scripts and the new node without affecting anyone.

On the new server:

1. If the Octopus Web Portal won't load or shows an error, there is a problem connecting to the database or something is preventing the server from starting.
1. If none of the deployment logs appear when viewing the Octopus Web Portal on that node, there is a problem connecting to the file store.
1. Use **SAVE AND TEST** on any cloud accounts and external feeds to ensure the new node can connect.
1. Remote into the VM and run this PowerShell script. It will attempt to connect to one of your tentacles.
```powershell
$tentacleHost = "127.0.0.1" # REPLACE WITH A HOST NAME OF ONE OF YOUR TENTACLES
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

if (-not ([System.Management.Automation.PSTypeName]'ServerCertificateValidationCallback').Type) {
    $certCallback = @"
using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;
public class ServerCertificateValidationCallback
{
    public static void Ignore()
    {
        if (ServicePointManager.ServerCertificateValidationCallback == null)
        {
            ServicePointManager.ServerCertificateValidationCallback += delegate
            (
                Object obj,
                X509Certificate certificate,
                X509Chain chain,
                SslPolicyErrors errors
            )
            {
                return true;
            };
        }
    }
}
"@
    Add-Type $certCallback
}
[ServerCertificateValidationCallback]::Ignore()

$url = "https://$($tentacleHost):10933"

try {
    Write-Host "Attempting to hit the server $url"
    $result = Invoke-RestMethod $url -TimeoutSec 10
    Write-Host "Found tentacle"
}
catch {
    Throw "Unable to find the tentacle at $url"
}
```

## Removing nodes with Scale-in events

A scale-in event is when the auto-scaling technology decides it is time to delete the virtual machine via a schedule or a metric-based trigger.

### Removing a task node

A task node is a node where the task cap is greater than 0. By default, all nodes are task nodes because the default task cap is 5. However, you can configure UI-only nodes by setting the task cap to 0. When the underlying VM hosting a task node is deleted, the following will happen:

- Any in-process tasks, including deployments and runbook runs, will fail but will still appear as being executed.
- The node will stop updating the `OctopusServerNodes` table.
- After 60 minutes, the in-process tasks that failed will be marked as canceled by another node in the High Availability cluster.

While High Availability was designed to add nodes quickly, it was not designed to delete nodes quickly.
The assumption was that when a node went offline, it was for a server restart. It was not designed to handle scale-in events from an auto-scaling technology automatically.

Auto-scaling technologies don't let you run scripts directly on virtual machines as they are being deleted. They will typically publish a message you can process. Because of that, you'll need to leverage the [Octopus Deploy REST API](/docs/octopus-rest-api) to do the following:

- Enable drain mode on the node. While that is enabled, it will prevent the node from picking up new tasks and will attempt to finish in-process tasks.
- Wait until either the node is marked offline or all tasks have finished processing.
- Cancel any in-process tasks if the node is marked offline and there are tasks processing.
- Delete the node from the `OctopusServerNodes` table.

:::div{.warning}
The user required to run this script will need `Administrator` rights to your cluster. We recommend creating a [service account](/docs/security/users-and-teams/service-accounts) and storing that API key securely.
:::

```powershell
$octopusUrl = "https://your-octopus-url"
$octopusApiKey = "API-YOUR-KEY"
$nodeName = "Name of node to delete"

function Write-OctopusVerbose { param($message) Write-Host $message }
function Write-OctopusInformation { param($message) Write-Host $message }
function Write-OctopusSuccess { param($message) Write-Host $message }
function Write-OctopusWarning { param($message) Write-Warning "$message" }
function Write-OctopusCritical { param ($message) Write-Error "$message" }

function Invoke-OctopusApi {
    param (
        $octopusUrl,
        $endPoint,
        $spaceId,
        $apiKey,
        $method,
        $item
    )

    $octopusUrlToUse = $OctopusUrl
    if ($OctopusUrl.EndsWith("/")) {
        $octopusUrlToUse = $OctopusUrl.Substring(0, $OctopusUrl.Length - 1)
    }

    if ([string]::IsNullOrWhiteSpace($SpaceId)) {
        $url = "$octopusUrlToUse/api/$EndPoint"
    }
    else {
        $url = "$octopusUrlToUse/api/$spaceId/$EndPoint"
    }

    try {
        if ($null -ne $item) {
            $body = $item | ConvertTo-Json -Depth 10
            Write-OctopusVerbose $body

            Write-OctopusInformation "Invoking $method $url"
            return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -Body $body -ContentType 'application/json; charset=utf-8'
        }

        Write-OctopusVerbose "No data to post or put, calling bog standard Invoke-RestMethod for $url"
        $result = Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -ContentType 'application/json; charset=utf-8'

        return $result
    }
    catch {
        if ($null -ne $_.Exception.Response) {
            if ($_.Exception.Response.StatusCode -eq 401) {
                Write-OctopusCritical "Unauthorized error returned from $url, please verify API key and try again."
            }
            elseif ($_.Exception.Response.StatusCode -eq 403) {
                Write-OctopusCritical "Forbidden error returned from $url, please verify API key and try again."
            }
            else {
                Write-OctopusVerbose -Message "Error calling $url $($_.Exception.Message) StatusCode: $($_.Exception.Response.StatusCode)"
            }
        }
        else {
            Write-OctopusVerbose $_.Exception
        }
    }

    Throw "There was an error calling the Octopus API; please check the log for more details."
}

function Get-OctopusNode {
    param (
        $nodeName,
        $octopusUrl,
        $octopusApiKey
    )

    $nodeList = Invoke-OctopusApi -endPoint "octopusservernodes/summary" -spaceId $null -octopusUrl $octopusUrl -apiKey $octopusApiKey -method "GET"
    $node = $nodeList.Nodes | Where-Object { $_.Name.ToLower().Trim() -eq $nodeName.ToLower().Trim() }

    if ($null -eq $node) {
        Write-Error "Unable to find node $nodeName. Exiting."
        Exit 1
    }

    return $node
}

function Get-IsNodeActive {
    param ($nodeInformation)

    $isActive = $true
    if ($null -eq $nodeInformation.LastSeen) {
        Write-OctopusInformation "The node has not been seen in a long time, the node last seen time is null."
        $isActive = $false
    }
    else {
        $currentTime = Get-Date
        $nodeLastSeen = [DateTime]$nodeInformation.LastSeen
        $timeDiff = ($currentTime - $nodeLastSeen)

        if ($timeDiff.TotalMinutes -gt 5) {
            Write-OctopusInformation "The node has not checked in in over 5 minutes. The node is no longer active."
            $isActive = $false
        }
    }

    return $isActive
}

$nodeInformation = Get-OctopusNode -nodeName $nodeName -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey
$isActive = Get-IsNodeActive -nodeInformation $nodeInformation

if ($isActive -eq $true) {
    if ($nodeInformation.IsInMaintenanceMode -eq $false) {
        $drainNodeRequest = @{
            Id = $nodeInformation.Id
            Name = $nodeInformation.Name
            MaxConcurrentTasks = $nodeInformation.MaxConcurrentTasks
            IsInMaintenanceMode = $true
        }

        $nodeInformation = Invoke-OctopusApi -endPoint "octopusservernodes/$($nodeInformation.Id)" -method "PUT" -apiKey $octopusApiKey -octopusUrl $octopusUrl -spaceId $null -item $drainNodeRequest
    }

    while ($isActive -eq $true) {
        $nodeInformation = Get-OctopusNode -nodeName $nodeName -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey
        $isActive = Get-IsNodeActive -nodeInformation $nodeInformation

        if ($isActive -eq $true) {
            if ($nodeInformation.RunningTaskCount -le 0) {
                Write-OctopusInformation "The node has finished processing all the tasks, and the drain mode is set to enabled. It can be deleted now."
                break
            }

            # Give the node time to finish draining before checking again
            Start-Sleep -Seconds 30
        }
    }
}

Write-OctopusInformation "Cancelling all active tasks for that node"
$activeTasks = Invoke-OctopusApi -endPoint "tasks?node=$($nodeInformation.Id)&states=Executing%2CCancelling&spaces=all&includeSystem=true&skip=0&take=$($nodeInformation.MaxConcurrentTasks)" -octopusUrl $octopusUrl -apiKey $octopusApiKey -spaceId $null -method "GET"

foreach ($task in $activeTasks.Items) {
    Write-OctopusInformation "Cancelling $($task.Id)"
    Invoke-OctopusApi -endPoint "tasks/$($task.Id)/cancel" -octopusUrl $octopusUrl -apiKey $octopusApiKey -spaceId $task.SpaceId -method "POST"
}

Write-OctopusInformation "The node can be safely removed from Octopus now. Deleting."
Invoke-OctopusApi -endPoint "octopusservernodes/$($nodeInformation.Id)" -method "DELETE" -apiKey $octopusApiKey -octopusUrl $octopusUrl -spaceId $null
```

### Removing a UI-only node

Removing a UI-only node is no different than removing a web server from a web farm.
The Octopus Deploy UI is stateless; your steps to remove a UI-only node are:

- Remove the VM from the load balancer.
- Delete the VM.

# Polling Tentacles with HA

Source: https://octopus.com/docs/administration/high-availability/polling-tentacles-with-ha.md

Listening Tentacles require no special configuration for Octopus High Availability. Polling Tentacles and Kubernetes agents, however, poll a server at regular intervals to check if there are any tasks waiting for the Tentacle to perform. In a High Availability scenario, Polling Tentacles must poll all Octopus Server nodes in your configuration.

To configure the Kubernetes agent with Octopus High Availability, see [Kubernetes agent HA Cluster Support](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/ha-cluster-support).

## Connecting Polling Tentacles

While a Tentacle could poll a load balancer in an Octopus High Availability cluster, there is a risk, depending on your load balancer configuration, that the Tentacle will not poll all servers in a timely manner.

We recommend two options when configuring Polling Tentacles to connect to your Octopus High Availability cluster:

- Using a **unique address** and the same listening port (`10943` by default) for each node.
- Using the same address and a **unique port** for each node.

These are discussed further in the next sections.

### Using a unique address

In this scenario, no load balancer is required. Instead, each Octopus node would be configured to listen on the same port (`10943` by default) for inbound traffic. In addition, each node must be reachable directly by your Polling Tentacle on a unique address for the node.

For each node in your HA cluster:

- Ensure the communication port Octopus listens on (`10943` by default) is open, including any firewall.
- Register the node with the [Poll Server command line](/docs/octopus-rest-api/tentacle.exe-command-line/poll-server) option.
Specify the unique address for the node, including the listening port.

For example, in a three-node cluster:

- Node1 would use address: **Octo1.domain.com:10943**
- Node2 would use address: **Octo2.domain.com:10943**
- Node3 would use address: **Octo3.domain.com:10943**

The important thing to remember is that each node should be using a **unique address** and the **same port**.

:::div{.hint}
**Tip:** A Polling Tentacle will connect to the Octopus REST API over ports 80 or 443 when it is registering itself with the Octopus Server. After that, it will connect over port `10943` (by default) with the Octopus Server node. It's important to ensure that any firewalls also allow port 80 or 443 for the initial Tentacle registration.
:::

### Using a unique port

In this scenario, a type of [Network Address Translation (NAT)](https://en.wikipedia.org/wiki/Network_address_translation) is leveraged by using the same address and **unique ports**, usually routed through a load balancer or other network device. Each Octopus node would be configured to listen on a different port (starting at `10943` by default) for inbound traffic.

:::div{.hint}
The advantage of using unique ports is that the Polling Tentacle doesn't need to know each node's address, only the port. The address translation is handled by the load balancer. This allows each node to have a private IP address, with no public access from outside your network required.
:::

Imagine a three-node HA cluster.
For each one, we expose a different port to listen on using the [Octopus.Server configure command](/docs/octopus-rest-api/octopus.server.exe-command-line/configure):

- Node1 - Port `10943`
- Node2 - Port `10944`
- Node3 - Port `10945`

Next on the load balancer, create Network Address Translation (NAT) rules and point them to each node in your HA Cluster:

- Open port `10943` and route traffic to **Node1** in your HA Cluster
- Open port `10944` and route traffic to **Node2** in your HA Cluster
- Open port `10945` and route traffic to **Node3** in your HA Cluster
- Continue for any additional nodes in your HA cluster.

If you configured your nodes to use a different listening port, replace `10943`-`10945` with your port range. The important thing to remember is that each node should be using the **same address** and a **different port**.

## Registering Polling Tentacles

There are two options to add Octopus Servers to a Polling Tentacle: via the command-line, or by editing the Tentacle.config file directly.

**Command line:**

Configuring a Polling Tentacle via the command-line is the preferred option, with the command executed once per server; an example command using the default instance can be seen below:

```
C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://your-octopus-url --apikey=API-YOUR-KEY
```

For more information on this command please refer to the [Tentacle Poll Server command line options](/docs/octopus-rest-api/tentacle.exe-command-line/poll-server).

**Tentacle.config:**

Alternatively you can edit Tentacle.config directly to add each Octopus Server (this is interpreted as a JSON array of servers). This method is not recommended, as the Tentacle service for each server will need to be restarted to accept incoming connections.
```json
[
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.160:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.161:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.162:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"}
]
```

:::div{.hint}
Notice there is an address entry for **each** Octopus Server in the High Availability configuration.
:::

# Managing infrastructure

Source: https://octopus.com/docs/administration/managing-infrastructure.md

This section provides details about managing your Octopus infrastructure.

- [Maintenance mode](/docs/administration/managing-infrastructure/maintenance-mode)
- [Applying Operating System upgrades](/docs/administration/managing-infrastructure/applying-operating-system-upgrades)
- [Recovering after losing your Octopus Server and Master Key](/docs/administration/managing-infrastructure/lost-master-key)
- [Rotating the Master Key](/docs/administration/managing-infrastructure/rotate-master-key)
- [Performance](/docs/administration/managing-infrastructure/performance)
- [Run multiple processes on a target simultaneously](/docs/administration/managing-infrastructure/run-multiple-processes-on-a-target-simultaneously)
- [Managing multiple instances](/docs/administration/managing-infrastructure/managing-multiple-instances)
- [Automating infrastructure with DSC](/docs/administration/managing-infrastructure/octopus-dsc)
- [Server configuration](/docs/administration/managing-infrastructure/server-configuration)
- [Script Console](/docs/administration/managing-infrastructure/script-console)
- [Moving your Octopus components to other servers](/docs/administration/managing-infrastructure/moving-your-octopus)
- 
[Server configuration and file storage](/docs/administration/managing-infrastructure/server-configuration-and-file-storage) - [Tentacle configuration and file storage](/docs/administration/managing-infrastructure/tentacle-configuration-and-file-storage/manually-uninstall-tentacle) - [Subscriptions](/docs/administration/managing-infrastructure/subscriptions) - [Show configuration](/docs/administration/managing-infrastructure/show-configuration) - [Service watchdog](/docs/administration/managing-infrastructure/service-watchdog) - [Diagnostics](/docs/administration/managing-infrastructure/diagnostics) # Projects and Project Groups Structure Source: https://octopus.com/docs/best-practices/deployments/projects-and-project-groups.md [Projects](/docs/projects) store the deployment configuration for an application. For each project, you can define a deployment process and runbooks to manage your infrastructure, variables, the environments where the software is deployed, and your software releases. Project groups allow you to group like projects together. ## Project structure We recommend thinking of projects and project groups this way: - Project Group = Software Suite - Project = Application An application represents all the tightly coupled components required for the software to run. Some examples of applications are: - A microservice running in a container monitoring a queue for work. - An N-Tier Web Application with a WebUI, WebAPI, back-end Service, and Database. - A back-end service that processes files from a file share based on a schedule. - A monolithic application with dozens of components. All the components in a single "solution" or built in the same configuration should be deployed together. The deployment process should always deploy all the components. Trying to skip a component because it "didn't change" can reduce deployment time but increases the risk of bugs or failures because something was missed. 
If you want to have a project per component, you need to ensure each component is decoupled from the others and can be deployed on a separate schedule. :::div{.hint} Previous versions of this guide recommended having a project per component. Octopus Deploy now includes new features, including ITSM integration, Config as Code, and more options for variable run conditions. There is also a logistical overhead with a project per component. That recommendation was made in 2021. At that time, a project per component made sense. It is no longer applicable with the 2023 version of Octopus Deploy. ::: ## Antipatterns to avoid A project should deploy all the coupled components of an application (WebUI, WebAPI, Service, Database). Some common antipatterns we've seen, which you should avoid, are: - A project per component in an application. If the components are referenced in the same "solution" or built in the same build configuration, they need to be deployed together. - A project per application, per environment, such as `OctoPetShop_Dev`, `OctoPetShop_Test`, and so on. That makes it impossible to maintain the projects and track versions. - A project per customer or physical location, such as `OctoPetShop_AustinEast`, `OctoPetShop_AustinWest`, and so on. This is impossible to maintain; you'd need a syncing process to keep all the projects aligned. Use [multi-tenancy](/docs/tenants) instead. ## Cumulative changes Octopus Deploy expects any application component it deploys to contain everything that the component needs. If you are deploying a web application, the deployment should include all the JavaScript, CSS, binaries, HTML files, etc., needed to run that web application. It shouldn't just be a delta change of a few HTML files or binaries. Octopus Deploy expects this for a variety of reasons. - All releases will need to be deployed to all environments. - Deploying only delta changes requires you to always deploy all versions in a specific order. 
- If a new deployment target (web server) is created, you will have to deploy all versions to that new target rather than the latest. - You'll need a mechanism to create roll-up releases; otherwise, the list of versions to deploy when a new target is added will grow and become unwieldy. - It'll be near impossible to roll back to a previous version of the code. ## Further reading For further reading on projects and project groups in Octopus Deploy, please see: - [Projects](/docs/projects) - [Multi-Tenancy](/docs/tenants) # Connecting securely with Azure Active Directory Source: https://octopus.com/docs/deployments/azure/service-fabric/connecting-securely-with-azure-active-directory.md As part of Service Fabric step templates, Octopus allows you to securely connect to a secure cluster by using Azure Active Directory (AAD). This page assumes you have configured your Service Fabric cluster in secure mode and have already configured your primary/server certificate when setting up the cluster (and have used an Azure Key Vault to store the server certificate thumbprint). :::div{.warning} This example assumes you are using Azure to host your Service Fabric cluster and AAD. 
::: During a Service Fabric deployment that uses AAD for authentication, Calamari will set the following connection parameters before attempting to connect to the Service Fabric cluster: ```powershell $ClusterConnectionParameters["ServerCertThumbprint"] = $OctopusFabricServerCertThumbprint $ClusterConnectionParameters["AzureActiveDirectory"] = $true $ClusterConnectionParameters["SecurityToken"] = $AccessToken # Where $AccessToken is obtained through an earlier PowerShell call using the following variables: # # $OctopusFabricAadUserCredentialUsername # $OctopusFabricAadUserCredentialPassword # ``` These PowerShell variables correspond to the following Octopus variables: | PowerShell Variable | Octopus Variable | | ---------------------------------------- | ------------------------------------------------------ | | $OctopusFabricAadUserCredentialUsername | Octopus.Action.ServiceFabric.AadUserCredentialUsername | | $OctopusFabricAadUserCredentialPassword | Octopus.Action.ServiceFabric.AadUserCredentialPassword | | $OctopusFabricServerCertThumbprint | Octopus.Action.ServiceFabric.ServerCertThumbprint | It is these values and variables that we will be discussing below. ## Step 1: Configure the Service Fabric cluster to use Azure Active Directory The Azure Portal supports adding an AAD user to an AAD app (i.e. a Service Fabric cluster application), so Octopus can authenticate using AAD with user credentials _(NOTE: At the time of writing (March 22nd, 2017), user credentials are the only supported method of authentication with SF and AAD. Client application credentials are not yet supported)_. We therefore need to set up an AAD user and grant them permissions to access our Service Fabric cluster, via an AAD app. This section will discuss how to do this. For a Service Fabric cluster to be able to see our AAD user, we need to set up some AAD applications (a _cluster_ application and a _client_ application) and assign an AAD user to our cluster application. 
This process is made easier with scripts. Luckily for us, Microsoft has published an article on how to do exactly what we need, titled [Securing an Azure Service Fabric cluster with Azure Active Directory via the Azure Portal](https://blogs.msdn.microsoft.com/ncdevguy/2017/01/09/securing-an-azure-service-fabric-cluster-with-azure-active-directory-via-the-azure-portal-2/). This article includes some sample scripts that you can customize to help set up your own cluster and client applications. We leave this as an exercise for the reader. After running through these scripts, we end up with the following AAD app registrations: - a cluster application - a client application ## Step 2: Configure an Azure Active Directory user that Octopus can connect with during deployments Now that we have configured our Service Fabric cluster to use AAD, we can assign an AAD user to our Service Fabric cluster application. In the Azure Active Directory: - Create a user that you will use for deploying to your Service Fabric cluster. - Log into the Azure Portal with this user (so you get past any temporary password shenanigans). Once we know this user is valid and can log in, we can return to the Azure Active Directory in the Azure Portal: - Go to **App registrations**. - Select the cluster application that you set up earlier. - Click on the link for **Managed Application In Local Directory**. - Click **Users and groups**. - Proceed to add your deployment user to your application, with the role of Admin. Make note of this user's username (_not_ their display name) and password. The format of an AAD username is typically something like this: `my-user@my-azure-directory.onmicrosoft.com` We can then configure our deployment step to connect to our Service Fabric cluster using these user credentials. 
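Behind the scenes, the user credentials are exchanged for an AAD access token. The sketch below is only an illustration of what that exchange looks like, using the OAuth2 resource owner password grant against the AAD v1 token endpoint; it is not the exact code Calamari runs, and the tenant and application IDs are placeholders for your own registrations.

```python
# Hypothetical sketch: exchanging AAD user credentials for an access token
# via the OAuth2 resource owner password grant. Placeholder IDs throughout.
import urllib.parse


def build_token_request(tenant_id, client_app_id, cluster_app_id, username, password):
    """Return the (url, form_body) pair for an AAD v1 token request."""
    url = "https://login.microsoftonline.com/{0}/oauth2/token".format(tenant_id)
    body = urllib.parse.urlencode({
        "grant_type": "password",     # resource owner password credentials
        "client_id": client_app_id,   # the AAD *client* application
        "resource": cluster_app_id,   # the AAD *cluster* application
        "username": username,         # e.g. my-user@my-azure-directory.onmicrosoft.com
        "password": password,
    })
    return url, body


url, body = build_token_request(
    "my-tenant-id", "client-app-id", "cluster-app-id",
    "my-user@my-azure-directory.onmicrosoft.com", "s3cret")
# POSTing `body` to `url` returns JSON containing an access_token, which plays
# the role of the $AccessToken / SecurityToken described earlier on this page.
```

The response's `access_token` is what ends up in the `SecurityToken` connection parameter.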
## Step 3: Configure and run a deployment step In Octopus, Service Fabric deployment steps that use "Azure Active Directory" as the security mode require you to enter the username and password of the AAD user who has access to your SF cluster application. Octopus will use these user credentials to obtain an `AccessToken` that it will then pass as the `SecurityToken` when connecting to your Service Fabric cluster. :::figure ![](/docs/img/deployments/azure/service-fabric/connecting-securely-with-azure-active-directory/secure-aad-template.png) ::: ## Connection troubleshooting Calamari uses the [Connect-ServiceFabricCluster cmdlet](https://docs.microsoft.com/en-us/powershell/module/servicefabric/connect-servicefabriccluster) to connect to your Service Fabric cluster. The connection parameters are logged at the Verbose level during a deployment to help if you need to debug connection problems with your Service Fabric cluster. If you wish to learn more about how Octopus connects securely to Service Fabric clusters, the PowerShell scripts used by Calamari can be [viewed here](https://github.com/OctopusDeploy/Sashimi.AzureServiceFabric/blob/main/source/Calamari/Scripts/AzureServiceFabricContext.ps1). ## Learn more - Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites). # Import certificates into WildFly and JBoss EAP Source: https://octopus.com/docs/deployments/certificates/wildfly-certificate-import.md With the `Configure certificate for WildFly or EAP` step, certificates managed by Octopus can be configured as part of a WildFly or Red Hat JBoss EAP domain or standalone instance to allow HTTPS traffic to be served. ## Prerequisites If a new KeyStore is to be created as part of the deployment, the certificate being deployed must be referenced by a variable. 
[Add a certificate to Octopus](/docs/deployments/certificates/add-certificate/) provides instructions on how to add a new certificate to the Octopus library, and [Certificate variables](/docs/projects/variables/certificate-variables) provides instructions on how to define a certificate variable. ## Common connection settings Regardless of whether you are deploying a certificate to a standalone or domain instance, there are a number of common connection settings that need to be defined in the `Application Server Details` section. Set the `Management host or IP` field to the address that the WildFly management interface is listening to. This value is relative to the target machine that is performing the deployment. Since the target machine performing the deployment is typically the same machine hosting the application server, this value will usually be `localhost`. Set the `Management port` to the port bound to the WildFly management interface. For WildFly 10+ and JBoss EAP 7+, this will default to `9990`. For JBoss EAP 6, this will default to `9999`. The `Management protocol` field defines the protocol to be used when interacting with the management interface. For WildFly 10+ and JBoss EAP 7+, this will default to `http-remoting` or `remote+http` (the two are equivalent). For JBoss EAP 6, this will default to `remoting`. If you wish to use [silent authentication](https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.2/html/security_guide/chap-network_security#Secure_the_Management_Interfaces), and have configured the required permissions for the `$JBOSS_HOME/standalone/tmp/auth` or `$JBOSS_HOME/domain/tmp/auth` directory, then the `Management user` and `Management password` fields can be left blank. Alternatively these fields can hold the credentials that were configured via the `add-user` script. 
## Deploying a certificate to a standalone instance Selecting `Standalone` from the `Standalone or domain Server` field in the `Server Type Details` section indicates that the certificate is to be deployed to a standalone server instance. :::div{.hint} Selecting the wrong server type will result in an error at deploy time. ::: When configuring a certificate with a standalone instance, you have the choice of configuring an existing Java KeyStore, or creating a new KeyStore from a certificate managed by Octopus. The options available in the `Server Type Details` section will change depending on how the certificate is deployed. ### Creating a new Java KeyStore By selecting the `Create a new KeyStore` option, Octopus will create a new Java KeyStore file that will then be configured in the application server. The `Select certificate variable` field is used to define the variable that references the certificate to be deployed as a Java KeyStore. The location of the new KeyStore file can be optionally defined in the `KeyStore filename` field. Any path specified in this field must be an absolute path, and any existing file at that location will be overwritten. If left blank, a KeyStore will be created with a unique filename based on the certificate subject in the application server `standalone/configuration` directory. The `Private key password` field defines a custom password for the new KeyStore file. If this field is left blank, the KeyStore will be configured with the default password of `changeit`. The `KeyStore alias` field defines a custom alias under which the certificate and private key are stored. If left blank, the default alias of `Octopus` will be used. ### Referencing an existing KeyStore When `Reference an existing KeyStore` is selected, a number of fields are required to define the location and properties of the existing KeyStore that is being referenced. 
#### Defining the keystore file name The value of the `KeyStore filename` field can either be the absolute path to the KeyStore (in which case the `Relative base path` option has to be set to `none`), or it can be a path relative to one of the locations defined in the `Relative base path` field. For example, if you wish to reference an existing KeyStore file at `/opt/my.store`, set the `KeyStore filename` field to `/opt/my.store` and the `Relative base path` option to `none`. If you want to reference a KeyStore file in the `standalone/configuration` directory with a filename of `my.store`, set the `KeyStore filename` field to `my.store` and set the `Relative base path` field to `jboss.server.config.dir`. #### Setting the KeyStore password and alias The `Private key password` field defines a custom password for the existing KeyStore file. If this field is left blank, the KeyStore is assumed to have the default password of `changeit`. The `KeyStore alias` field defines a custom alias under which the certificate and private key are stored. If left blank, the KeyStore is assumed to have the default alias of `Octopus`. ## Deploying a certificate to a domain Domains can be used to distribute the configuration required to access a KeyStore, but cannot be used to distribute the KeyStore files themselves. Since each slave in the domain needs to have access to the KeyStore file, configuring certificates is therefore a two-step process: 1. Deploying a KeyStore file to all slave instances. 2. Configuring the profiles managed by the domain controller to reference the KeyStore files. ### Deploying KeyStore files The `Deploy a KeyStore to the filesystem` step can be used to take a certificate managed by Octopus and save it as a Java KeyStore on the target machine. The `Select certificate variable` field is used to define the variable that references the certificate to be deployed. The location of the new KeyStore file must be defined in the `KeyStore filename` field. 
This must be an absolute path, and any existing file at that location will be overwritten. The `Private key password` field defines a custom password for the new KeyStore file. If this field is left blank, the KeyStore will be configured with the default password of `changeit`. The `KeyStore alias` field defines a custom alias under which the certificate and private key are stored. If left blank, the default alias of `Octopus` will be used. :::div{.hint} It is highly recommended that the KeyStore file be saved in the `domain/configuration` directory. This allows the KeyStore file to be referenced using a relative path against the base path identified by `jboss.domain.config.dir`. ::: ### Configuring the domain Once all the domain slaves have a local copy of the KeyStore file deployed to them, the domain profiles can be configured to reference these files. Selecting `Domain` from the `Standalone or domain server` field in the `Server Type Details` section indicates that the certificate is to be configured as part of a WildFly or JBoss EAP domain. The `Domain profiles` field defines a comma-separated list of profiles that will be updated to reference the existing KeyStore file. Typical profile names include `default`, `ha`, `full` and `full-ha`. The `KeyStore filename` is either the absolute path to the existing KeyStore file (in which case the `Relative base path` field has to be set to `none`), or is a relative path using the value of the `Relative base path` field as the base. The `Private key password` field defines the optional password used to access the existing KeyStore file. If this field is left blank, the KeyStore is assumed to have the default password of `changeit`. The `KeyStore alias` field defines the optional alias under which the certificate and private key are stored. If left blank, the KeyStore is assumed to have a default alias of `Octopus`. 
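For Elytron-capable servers, the configuration the step writes into a domain profile is conceptually similar to the following management CLI commands. This is an illustrative sketch only: the step generates and runs the real commands for you, the resource names shown are the defaults described under Advanced options below, and the `default` profile, `my.store` path, and `changeit` password are placeholders.

```
# Sketch of equivalent jboss-cli commands (placeholders throughout)
/profile=default/subsystem=elytron/key-store=OctopusHttpsKS:add(path=my.store, relative-to=jboss.domain.config.dir, type=JKS, credential-reference={clear-text=changeit})
/profile=default/subsystem=elytron/key-manager=OctopusHttpsKM:add(key-store=OctopusHttpsKS, credential-reference={clear-text=changeit})
/profile=default/subsystem=elytron/server-ssl-context=OctopusHttpsSSC:add(key-manager=OctopusHttpsKM)
```

Each profile listed in the `Domain profiles` field receives its own copy of this configuration.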
## Advanced options :::div{.hint} If you are unsure what these advanced values refer to, it is best to leave them blank and assume the default values. ::: The `Advanced Options` section is the same whether deploying to a domain or standalone instance. The fields in this section can be used to override the default values used when configuring a KeyStore in WildFly or JBoss EAP. The `HTTPS socket binding name` can be used to override the default socket binding that will be used to expose access to HTTPS. The default value is `https`. This value refers to the `name` attribute in the `<socket-binding>` elements in the `domain/configuration/domain.xml` or `standalone/configuration/standalone.xml` files. The default configuration for the `standard-sockets` socket binding group is shown below, and shows that the `https` socket binding uses port 8443 by default. This is the same port and socket binding name used by all default socket binding groups. 
```xml
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <!-- other socket bindings omitted -->
    <socket-binding name="https" port="${jboss.https.port:8443}"/>
</socket-binding-group>
```
The `Legacy security realm name` defines the name of the security realm that is configured in application servers that do not support the `Elytron` subsystem. If left blank, this value will default to `OctopusHttps`. :::figure ![Security Realm](/docs/img/deployments/certificates/images/security-realm.png) ::: :::div{.hint} Elytron is the new security subsystem introduced with WildFly 11 and JBoss EAP 7.1. All previous versions of WildFly and JBoss EAP use what is referred to as the "legacy" security system. ::: The `Elytron key store name` defines the name of the Elytron Key Store in application servers that support the `Elytron` subsystem. If left blank, this value defaults to `OctopusHttpsKS`. :::figure ![Elytron Key Store](/docs/img/deployments/certificates/images/elytron-keystore.png) ::: The `Elytron key manager name` defines the name of the Elytron Key Manager in application servers that support the `Elytron` subsystem. If left blank, this value defaults to `OctopusHttpsKM`. 
:::figure ![Elytron Key Manager](/docs/img/deployments/certificates/images/elytron-keymanager.png) ::: The `Elytron server SSL context name` defines the name of the Elytron SSL Context in application servers that support the `Elytron` subsystem. If left blank, this value defaults to `OctopusHttpsSSC`. :::figure ![Elytron Server SSL Context](/docs/img/deployments/certificates/images/elytron-ssl-context.png) ::: :::div{.hint} You can find more information on the Elytron subsystem components in the [WildFly documentation](https://docs.jboss.org/author/display/WFLY/Elytron%20Subsystem.html). ::: ## Configuration file backups Before any changes are made to the WildFly or JBoss EAP configurations, a `:take-snapshot` command is run. This will create a backup file in the `domain/configuration/standalone_xml_history/snapshot` or `standalone/configuration/standalone_xml_history/snapshot` directory. # Logging messages from scripts Source: https://octopus.com/docs/deployments/custom-scripts/logging-messages-in-scripts.md When your scripts emit messages, Octopus will display the messages in the Task Logs at the most appropriate level for the message. For example:
PowerShell ```powershell Write-Verbose "This will be logged as a Verbose message - verbose messages are hidden by default" Write-Host "This will be logged as Information" Write-Output "This will be logged as Information too!" Write-Highlight "This is a highlight. You can also include markdown to include hyperlinks to [your favorite CD tool website](https://octopus.com)." Write-Wait "Deployment is waiting on something" Write-Warning "This will be logged as a Warning" Write-Error "This will be logged as an Error and may cause your script to stop running - take a look at the section on Error Handling" ```
C# ```csharp Console.WriteLine("This will be logged as Information"); Console.Out.WriteLine("This will be logged as Information too!"); Console.Error.WriteLine("This will be logged as an Error."); WriteVerbose("Verbose!!!"); WriteHighlight("This is a highlight"); WriteWait("Deployment is waiting on something"); WriteWarning("Warning"); ```
Bash ```bash echo "This will be logged as Information" write_verbose "Verbose!!" write_highlight "This is a highlight. You can also include markdown to include hyperlinks to [your favorite continuous delivery tool](https://octopus.com)." write_wait "Deployment is waiting on something" write_warning "Warning" >&2 echo "This will be logged as an Error" echoerror() { echo "$@" 1>&2; } echoerror "You can even define your own function to echo an error!" ```
F# ```fsharp printfn "This will be logged as Information" writeVerbose "Verbose!!" writeHighlight "This is a highlight" writeWait "Deployment is waiting on something" writeWarning "Warning" eprintfn "This will be logged as Error" ```
Python3 ```python
import sys

print("This will be logged as Information")
printverbose("Verbose!")
printhighlight("This is a highlight")
printwait("Deployment is waiting on something")
printwarning("Warning")
print("This will be logged as an error", file=sys.stderr)
```
Try these out for yourself using the [Script Console](/docs/administration/managing-infrastructure/script-console)! ## Highlight log level Highlight messages will be shown in bold and blue in the task log. They will also appear under the step heading on the Task Summary tab. You can use the highlight level to call out important information such as which upgrade scripts were run, or the exact time a web server got added back into the load balancer pool. They can also use markdown to expose hyperlinks in your logs. For example, consider the following PowerShell code: ```powershell $URI = "https://$($OctopusParameters['Octopus.Machine.Name']).uksouth.cloudapp.azure.com" Write-Highlight "Web Application Deployed!" Write-Highlight "Click [here]($URI) to open the newly deployed site." ``` When evaluated, the deployment task summary will show the hyperlink, which can be clicked on: :::figure ![The Octopus Deployment Task Summary showing an example of a highlighted log level message](/docs/img/deployments/custom-scripts/images/write-highlight-in-task-summary.png) ::: ## Wait log level Wait log messages will be shown in a different color in the log. Their primary use is to show when the deployment is waiting for something to occur (e.g., acquiring a lock). We intend to use this message in the future to show a visual representation of your deployment progress. You can log your own wait messages to indicate the deployment is paused, in preparation for this. A wait is considered over when another log message of a different level is written. ## Progress log level Progress messages will display and update a progress bar on your deployment tasks while they are running, on the Task Log tab. You can provide the percentage complete and an optional message to display with the progress bar.
PowerShell ```powershell Update-Progress 10 Update-Progress 50 "We're halfway there!" ```
C# ```csharp UpdateProgress(10); UpdateProgress(50, "We're halfway there!"); ```
Bash ```bash update_progress 10 update_progress 50 "We're halfway there!" ```
F# ```fsharp Octopus.updateProgress 10 Octopus.updateProgress 50 "We're halfway there!" ```
Python3 ```python updateprogress(10) updateprogress(50, 'We\'re halfway there!') ```
Sometimes you might want to display progress from an external application that's called by Octopus during a deployment or runbook. This can be achieved using the `##octopus[progress]` service message written directly to standard output:
C# ```csharp private static string EncodeServiceMessageValue(string value) { var valueBytes = System.Text.Encoding.UTF8.GetBytes(value); return Convert.ToBase64String(valueBytes); } Console.WriteLine("##octopus[progress percentage='{0}' message='{1}']", EncodeServiceMessageValue(percentage.ToString()), EncodeServiceMessageValue(message)); ```
F# ```fsharp let private encode (value:string) = System.Text.Encoding.UTF8.GetBytes(value) |> Convert.ToBase64String let private writeServiceMessage name content = printfn "##octopus[%s %s]" name content let updateProgress (percentage: int) message = let encodedMessage = message |> encode let encodedPercentage = percentage.ToString() |> encode let content = sprintf "percentage='%s' message='%s'" encodedPercentage encodedMessage writeServiceMessage "progress" content ```
Python3 ```python
import base64

def encode(value):
    return base64.b64encode(value.encode('utf-8')).decode('utf-8')

def updateprogress(progress, message=None):
    encodedProgress = encode(str(progress))
    # Guard against a missing message; encode(None) would raise an error
    encodedMessage = encode(message or '')
    print("##octopus[progress percentage='{0}' message='{1}']".format(encodedProgress, encodedMessage))
```
Bash ```bash function encode_service_message_value { echo -n "$1" | openssl enc -base64 -A } echo "##octopus[progress percentage='$(encode_service_message_value "$1")' message='$(encode_service_message_value "$2")']" ```
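Whichever language you emit it from, the line that ultimately reaches Octopus has the same shape. The following minimal, self-contained Python sketch (using only the standard `base64` module) shows the fully encoded message:

```python
import base64

def encode(value):
    # Service message values are base64-encoded UTF-8 strings
    return base64.b64encode(value.encode("utf-8")).decode("utf-8")

def progress_message(percentage, message=""):
    # Build the exact line Octopus parses from standard output
    return "##octopus[progress percentage='{0}' message='{1}']".format(
        encode(str(percentage)), encode(message))

print(progress_message(50, "We're halfway there!"))
```

Octopus decodes the `percentage` and `message` attributes and updates the progress bar accordingly.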
## Service message The following service messages can be written directly to standard output. The server will parse them, and the subsequent log lines written to standard output will be treated with the relevant log level. ``` ##octopus[stdout-ignore] ##octopus[stdout-error] ##octopus[stdout-warning] ##octopus[stdout-verbose] ##octopus[stdout-wait] ##octopus[stdout-highlight] ``` To return to the default standard output log level, write the following message: ``` ##octopus[stdout-default] ``` The following service messages can be written directly to standard output. The server will parse them, and the subsequent log lines written to standard error will be treated with the relevant log level. ``` ##octopus[stderr-ignore] ##octopus[stderr-error] ##octopus[stderr-progress] ##octopus[stderr-output] ``` - `stderr-progress` will cause error log lines to be written as `verbose` log lines. - `stderr-output` will cause error log lines to be written as `info` log lines (standard output). Requires version `2025.3`. To return to the default standard error log level, write the following message: ``` ##octopus[stderr-default] ``` # .NET deployments Source: https://octopus.com/docs/deployments/dotnet.md .NET is Microsoft's popular, free, cross-platform, open source developer platform. With it, you can build a variety of applications. It comes in three flavors: - [.NET](https://dotnet.microsoft.com/learn/dotnet/what-is-dotnet) (also known as .NET Core) for when you want your applications to be cross-platform. Suitable for websites, servers and command-line applications to run on Linux, Windows and macOS. - [.NET Framework](https://dotnet.microsoft.com/learn/dotnet/what-is-dotnet-framework) for websites, services, and desktop applications targeting the Windows OS. - [Xamarin](https://dotnet.microsoft.com/learn/xamarin/what-is-xamarin) for mobile apps. 
Octopus Deploy can help you perform repeatable, reliable deployments of your .NET applications, whichever implementation you use. ## Learn more This section provides deployment examples for .NET applications. # Canary deployments Source: https://octopus.com/docs/deployments/patterns/canary-deployments-with-octopus.md There are two ways to implement [canary deployments](https://octopus.com/devops/software-deployments/canary-deployment/) in Octopus. The first, and simplest, is to use the "Deploy to a subset of deployment targets" feature when deploying the release. This allows you to limit which deployment targets to deploy to. First, you would deploy using just the canary servers, then after testing, you can deploy again using the remaining servers. This approach works well if you have a small number of servers and don't deploy to production too frequently. The alternative approach is to build canary deployments into your deployment process. 1. Deploy the package to the canary server (one or more deployment targets may be associated with the *canary* [target tag](/docs/infrastructure/deployment-targets/target-tags)). 2. Have a [manual intervention](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) step to wait until we are satisfied. 3. Deploy the package to the remaining deployment targets (the *web-server* target tag). Note that the first two steps have been configured to only run for production deployments - in our pre-production environments, we can just deploy to all targets immediately. If we were performing fully automated tests, we could use a [PowerShell script step](/docs/deployments/custom-scripts) to invoke them rather than the manual intervention step. A final variation is to set up a dedicated "Canary" environment to deploy to. The environment can contain a canary deployment target, with the same deployment target also belonging to the production environment. 
:::div{.hint} **Canary users** Another variation of the canary deployment is to deploy the new version to all servers, but to selectively show the features to users, slowly increasing the number of users who experience the new features. Implementing such a system usually involves [feature toggles](http://martinfowler.com/bliki/FeatureToggle.html) and designing your application to work this way; it's really outside of the scope of a tool like Octopus. ::: ## Learn more - [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1). # Terraform output variables Source: https://octopus.com/docs/deployments/terraform/terraform-output-variables.md Terraform supports [output values](https://www.terraform.io/language/values/outputs), which are useful for providing relevant information about your infrastructure configuration both to other Terraform resources as well as administrators. Octopus supports these output values natively through output variables in your deployment process. ## Output variable handling Terraform's output variables are captured as Octopus variables after a template is applied. Each output variable is captured in two different formats: the JSON representation of the variable, and the value only of the variable. The JSON representation of the output variable is the result of calling `terraform output -json variablename`. 
For example, the JSON representation of a string output variable (which would appear in the logs as a message similar to `Saving variable "Octopus.Action[Apply Template].Output.TerraformJsonOutputs[test]" with the JSON value of "test"`) would look similar to this: ```json { "sensitive": false, "type": "string", "value": "hi there" } ``` While the value only output (which would appear in the logs as a message similar to `Saving variable "Octopus.Action[Apply Template].Output.TerraformValueOutputs[test]" with the value only of "test"`) would look similar to this: ``` "hi there" ``` ## Accessing Terraform output variables Using the previous example output variable called `test`, you can access the output using PowerShell as follows: ```powershell $value = $OctopusParameters["Octopus.Action[Apply Template].Output.TerraformValueOutputs[test]"] # OR $value = $OctopusParameters["Octopus.Action[Apply Template].Output.TerraformJsonOutputs[test]"] | ConvertFrom-Json | select -ExpandProperty value ``` The syntax for accessing JSON variables as covered by our [documentation here](/docs/projects/variables/variable-substitutions/#json-parsing) applies to both `TerraformJsonOutputs` as well as `TerraformValueOutputs`. However, the latter is less useful as it can also be a primitive value. In this case Octostache won't know that it should deserialize the value and will provide you with a JSON encoded result. It is therefore recommended to prefer `TerraformJsonOutputs` where possible. The following syntax can be used to access the value using the binding syntax: ``` #{Octopus.Action[Apply Template].Output.TerraformJsonOutputs[test].value} ``` # Define and use variables Source: https://octopus.com/docs/getting-started/first-deployment/define-and-use-variables.md Octopus lets you define variables and scope them for use in different phases of your deployments. 
Variables allow you to have a consistent deployment process across your infrastructure without having to hard-code or manually update configuration settings that differ across environments, deployment targets, channels, or tenants.

## Add a variable

1. From the *Hello world* project you created earlier, click **Project Variables** in the left menu.
2. Click **Create Variables**.
3. Add `Helloworld.Greeting` in the **Name** column.
4. Add `Hello, Development` in the **Value** column.
5. Click the **Scope** column and select the `Development` environment.
6. Click **Add another value**.
7. Add `Hello, Staging` and scope it to the `Staging` environment.
8. Click **Add another value**.
9. Add `Hello, Production` and scope it to the `Production` environment.
10. Click **Save**.

:::figure
![The hello world variables](/docs/img/getting-started/first-deployment/images/project-variables.png)
:::

## Update deployment process

Steps in the deployment process can reference variables.

1. Click **Process** in the left menu.
2. Select the previously created **Run a Script** step.

### Inline Source Code

3. Based on your selected language, copy the appropriate script from below.
4. Replace the script in the code editor with the new script.
PowerShell

```powershell
Write-Host $OctopusParameters["Helloworld.Greeting"]
```
Bash

```bash
greeting=$(get_octopusvariable "Helloworld.Greeting")
echo "$greeting"
```
:::div{.hint} If you are using Octopus Cloud, Bash scripts require you to select the **Hosted Ubuntu** worker pool. The **Default Worker Pool** is running Windows and doesn't have Bash installed. ::: 5. Click **Save** 6. Click **Create Release**. :::div{.hint} A release snapshots everything about your project, including variables and the deployment process. You have to create a new release to see any changes. ::: As you promote through the environments, you will see the greeting change. :::figure ![The results of the hello world deployment with variables](/docs/img/getting-started/first-deployment/images/environment-variables.png) ::: Great job! Next, let's build on your deployment process and [add an approval process using manual interventions](/docs/getting-started/first-deployment/approvals-with-manual-interventions). ### All guides in this tutorial series 1. [First deployment](/docs/getting-started/first-deployment) 2. Define and use variables (this page) 3. [Approvals with manual interventions](/docs/getting-started/first-deployment/approvals-with-manual-interventions) 4. [Add deployment targets](/docs/getting-started/first-deployment/add-deployment-targets) 5. [Deploy a sample package](/docs/getting-started/first-deployment/deploy-a-package) ### Further reading for variables - [Variables](/docs/projects/variables) - [Deployments](/docs/deployments) - [Patterns and Practices](/docs/deployments/patterns) # Running a Runbook Source: https://octopus.com/docs/getting-started/first-runbook-run/running-a-runbook.md Unlike a deployment with a pre-defined lifecycle, Runbooks can run on any environment in any order. Runbooks are designed to automate routine maintenance tasks. Maintenance tasks might need to run on **Test** and **Production** but not on **Development** environments. 1. From the *Hello Runbook* runbook you created on the previous page, click **RUN...**. This screen provides the details of the Runbook you are about to run. 
:::figure ![run runbook basic options](/docs/img/getting-started/first-runbook-run/images/run-runbook-basic-options.png) ::: 1. Select an environment. 1. Click **RUN**. :::figure ![run runbook results](/docs/img/getting-started/first-runbook-run/images/run-hello-runbook-results.png) ::: Because we didn't define any deployment targets for the target environment, Octopus ran the script directly on the Octopus Server. If you are on Octopus Cloud, Octopus Deploy leased a [dynamic worker](/docs/infrastructure/workers/dynamic-worker-pools/#on-demand) (a machine that executes tasks on behalf of the Octopus Server) that was then used to execute the hello world script. The next step will cover [how to configure and use variables in runbooks](/docs/getting-started/first-runbook-run/runbook-specific-variables). ## Further reading For further reading on running a Runbook please see: - [Running a runbook](/docs/runbooks/running-a-runbook) - [Runbook vs Deployments](/docs/runbooks/runbooks-vs-deployments) - [Runbook Documentation](/docs/runbooks) - [Runbook Examples](/docs/runbooks/runbook-examples) # Samples Source: https://octopus.com/docs/getting-started/samples-instance.md Our [samples instance](https://samples.octopus.app) contains real-world deployment and runbook examples. Each one highlights one or more available Octopus features, from deploying Java applications to upgrading a Helm chart in a Kubernetes cluster. This page acts as a directory of features found in our samples instance. :::div{.hint} We're constantly adding to our samples instance. If you'd like to explore any of our samples further, just go to [https://samples.octopus.app](https://samples.octopus.app) and log in as a guest. ::: ## Deployment features \{#deployment-features} This section contains Octopus features that are found in Project [deployment processes](/docs/projects/deployment-process) in the samples instance. 
Each feature list is categorized by the Octopus Space and Project in which it can be found.

### AWS \{#deployments-aws}

Explore examples of Octopus Deploy's [AWS integration](/docs/deployments/aws), including EC2, AWS RDS database, AWS CLI, Lambda and ECS deployments.

**Pattern - Blue-Green**

- Random Quotes Java: Deploys the Java version of Random Quotes to Tomcat using the Blue/Green environment pattern. [Build definition](https://bamboo.octopussamples.com/browse/RAN-JAVA)
- Random Quotes .NET: Deploys the .NET version of Random Quotes using the Blue/Green environment pattern. [Build definition](https://bamboo.octopussamples.com/browse/RAN-NET)

**Secrets Management**

- AWS Secrets Manager: Sample project retrieving secrets from AWS Secrets Manager using the [Retrieve Secrets](https://library.octopus.com/step-templates/5d5bd3ae-09a0-41ac-9a45-42a96ee6206a/actiontemplate-aws-secrets-manager-retrieve-secrets) step template.

**Target - Containers**

- AWS ECS: Deploys the OctoPetShop application to AWS ECS Fargate using AWS ECR. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/OctoPetShop_OctoPetShopDockerEcr)

**Target - MariaDB**

- Grate - AWS RDS: Example project for automated database deployments using RoundhousE against an AWS RDS MariaDB instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildRoundhouse) RoundhousE has been deprecated and is no longer compatible with the latest version of MariaDB.
- DBUp - AWS RDS: Example project for automated database deployments using Dbup against an AWS RDS MariaDB instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildDBUp#all-projects) At this time, DBUp is not compatible with the latest version of MariaDB.
- Flyway - AWS RDS: Example project for automated database deployments using Flyway against an AWS RDS MariaDB instance.
[Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildFlyway)
- RoundhousE - AWS RDS: Example project for automated database deployments using RoundhousE against an AWS RDS MariaDB instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildRoundhouse) RoundhousE has been deprecated and is no longer compatible with the latest version of MariaDB.
- Liquibase - AWS RDS: Sample project that creates and deploys the sakila database to an AWS RDS MariaDB instance using Liquibase. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase)

**Target - MySQL**

- Grate - AWS RDS: Example project for automated database deployments using grate against an AWS RDS MySQL instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_Grate)
- Dbup - AWS RDS: Example project for automated database deployments using Dbup against an AWS RDS MySQL instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Target_MySQL_AWS_Dbup)
- Flyway - AWS RDS: Example project for automated database deployments using Flyway against an AWS RDS MySQL instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildFlyway)
- RoundhousE - AWS RDS: Example project for automated database deployments using RoundhousE against an AWS RDS MySQL instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildRoundhouse)
- Liquibase - AWS RDS: Sample project that creates and deploys the sakila database to an AWS RDS MySQL instance using Liquibase. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase)

**Target - Oracle**

- Flyway RDS: Sample project that deploys the sakila database to an AWS RDS Oracle instance using Flyway.
[Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase)
- Liquibase RDS: Sample project that deploys the sakila database to an AWS RDS Oracle instance using Liquibase. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase)
- DBUp RDS: Sample project that creates and deploys the Sakila database to an Oracle database server in AWS using DBUp. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildDBUp)

**Target - PostgreSQL**

- Grate - AWS RDS
- DBUp - AWS RDS: Example project for automated database deployments using Dbup against an AWS RDS PostgreSQL instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildDBUp)
- Flyway - AWS RDS: Example project for automated database deployments using Flyway against an AWS RDS PostgreSQL instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildFlyway)
- RoundhousE - AWS RDS: Example project for automated database deployments using RoundhousE against an AWS RDS PostgreSQL instance. [Build definition](https://bitbucket.org/octopussamples/sakila/src/posgres/)
- Liquibase - AWS RDS: Sample project that creates and deploys the sakila database to Postgres using Liquibase. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase)

**Target - Serverless**

- AWS SAM: AWS SAM deployment based on the [blog post](https://octopus.com/blog/aws-sam-and-octopus) using code in the [AWSSamExample](https://github.com/OctopusSamples/AWSSamExample) repo.
- AWS OctoSubscriber: A Lambda Function that accepts and processes Octopus Deploy Subscription Guided Failure or Manual Intervention Events.
[Build definition](https://github.com/OctopusSamples/OctoSubscriber/blob/main/.github/workflows/AWSLambdas.yml)
- AWS Subscriber S3: A Lambda Function that accepts and processes Octopus Deploy Subscription Events
- Sample AWS Lambda

**Target - SQL Server**

- Redgate - Feature Branch Example: Sample project for deploying a database using Redgate SQL Change Automation with feature branches

**Target - Tomcat**

- Pet Clinic AWS: Deploy Java PetClinic to AWS Linux [Build definition](https://dev.azure.com/octopussamples/PetClinic/_build?definitionId=25)

### Azure \{#deployments-azure}

Explore ways to use Octopus Deploy's built-in [Azure](/docs/deployments/azure) steps, including Azure WebApp, Azure CLI, ARM template and Azure SQL deployments.

**Octopus Admin**

- AzFuncNotifySlack: Azure Function App that consumes an Octopus subscription webhook and sends a message to Slack

**Pattern - Canary**

- Deploy Web App: A sample project showing canary deployments of RandomQuotes to Azure WebApps.

**Pattern - Monolith**

- Pitstop: Sample project of a monolithic deployment process.
- Pitstop - Customer Management: Project for the Customer Management API of the Pitstop application. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/PitStop_BuildDotnet)
- Pitstop - Vehicle Management: Project for the Vehicle Management API for the Pitstop application. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/PitStop_BuildDotnet)
- Pitstop - Workshop Management: Project for the Workshop Management API for the Pitstop application. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/PitStop_BuildDotnet)
- Pitstop - Web: Project for the Web front-end of the Pitstop application.
[Build definition](https://teamcity.octopussamples.com/buildConfiguration/PitStop_BuildDotnet) **Pattern - Rolling** - Private Web App: Sample project for Azure Web App deployments using a [private endpoint](https://docs.microsoft.com/en-us/azure/app-service/networking/private-endpoint). **Pattern - Tenants** - Vet Clinic: A project that deploys the VetClinic application for [multiple customers modeled as tenants](https://octopus.com/docs/tenants/guides/multi-tenant-saas-application). - Car Rental: A sample car rental application using PHP, Linux, and MySQL. [Build definition](https://jenkins.octopussamples.com/job/CarRental/) - OctoPetShop: A project that deploys the OctoPetShop application for [different teams modeled as tenants](https://octopus.com/docs/tenants/guides/multi-tenant-teams). - OctoHR: A sample [version-controlled](https://octopus.com/docs/projects/version-control) project illustrating a single codebase (web app) with multiple separate customer (tenant) databases that users can access. Source code available on [GitHub](https://github.com/OctopusSamples/OctoHR). **Pattern-AutoScaling** - Azure VMSS Orchestration: A sample project showing how to use an orchestration project to deploy child applications with an Azure Virtual Machine scale set (VMSS). - App + VMSS: A sample project showing how to deploy an application when using an Azure Virtual Machine scale set (VMSS). **Secrets Management** - Azure Key Vault: Sample project retrieving secrets from Azure Key Vault using the [Retrieve Secrets](https://library.octopus.com/step-templates/6f59f8aa-b2db-4f7a-b02d-a72c13d386f0/actiontemplate-azure-key-vault-retrieve-secrets) step template. **Target - Hybrid** - Octo Pet Shop: Sample project that uses the Deploy to IIS step to deploy to both IIS on a VM and an Azure Web App. 
[Build definition](https://app.circleci.com/pipelines/github/OctopusSamples/OctoPetShop)

**Target - MySQL**

- Flyway - Azure PaaS: Demonstrates how to perform automated database updates using Flyway against MySQL. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildFlyway)
- Liquibase - Azure PaaS: Sample project that creates and deploys the sakila database to a MySQL container hosted in Azure using Liquibase. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase)
- Grate - Azure PaaS: Example project for automated database deployments using grate against an Azure MySQL instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_Grate)

**Target - PaaS**

- OctoPetShop: A .NET Core Sample application used by Octopus to sample deployments and Runbooks. This example deploys OctoPetShop to Azure PaaS. The Product, Shopping Cart and Web App are deployed to Azure Web Apps and the Database to an empty Azure SQL server - [Build](https://octopussamplesext.visualstudio.com/OctoPetShop/)

**Target - PostgreSQL**

- Flyway - Azure PaaS: Demonstrates how to perform automated database updates using Flyway against PostgreSQL. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildFlyway)
- Liquibase - Azure PaaS: Sample project that creates and deploys the sakila database to a PostgreSQL container hosted in Azure using Liquibase. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase)
- Grate - Azure PaaS

**Target - Serverless**

- Azure OctoSubscriber: Azure Functions that accept and process an Octopus Deploy Subscription and notify users via Slack when a variable has changed. [Build definition](https://github.com/OctopusSamples/OctoSubscriber/blob/main/.github/workflows/AzureFunctions.yml)
- Sample Azure Function: A sample project that deploys an Azure function using the `Deploy to Azure App Service` step.
**Target - Tomcat**

- Pet Clinic - Azure Web App: Deploy the Java PetClinic application to Tomcat hosted in an Azure Web App.
- PetClinic - Demo: Deploy the Java PetClinic application to Tomcat hosted in an Azure Web App.

### Google Cloud \{#deployments-google-cloud}

Find out more about the new Octopus dedicated [Google Cloud](/docs/deployments/google-cloud) support, including gcloud CLI, Google Container Registry (GCR), Terraform and Kubernetes deployments.

**Pattern - Rolling**

- PetClinic - rolling deploy: The PetClinic project converted to use the rolling deployments pattern

**Secrets Management**

- GCP Secret Manager: Sample project retrieving secrets from GCP Secrets Manager using the [Retrieve Secrets](https://library.octopus.com/step-templates/9f5a9e3c-76b1-462f-972a-ae91d5deaa05/actiontemplate-gcp-secret-manager-retrieve-secrets) step template.

**Target - Containers**

- Cloud Run - Hello: A simple deployment of a pre-built Google container [Hello World Service](https://cloud.google.com/run/docs/samples/cloudrun-helloworld-service) using the `gcloud` CLI. The code for the sample is available on [GitHub](https://github.com/GoogleCloudPlatform/cloud-run-hello/).

**Target - MySQL**

- Flyway - GCP Service Account: Example project for automated database deployments using Flyway against a GCP MySQL instance. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildFlyway)
- Liquibase - GCP Service Account: Sample project that creates and deploys the sakila database to an AWS RDS MySQL instance using Liquibase. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase)
- Grate - GCP Service Account

**Target - PostgreSQL**

- Liquibase - GCP Service Account: Sample project that creates and deploys the sakila database to a PostgreSQL instance hosted in GCP using Liquibase and using a GCP Service Account for authentication.
[Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase) - Flyway - GCP Service Account: Demonstrates how to perform automated database updates using Flyway against PostgreSQL. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildFlyway) - Grate - GCP Service Account **Target - SQL Server** - Liquibase - GCP: Sample project that creates and deploys the Sakila database to Microsoft SQL Server using Liquibase. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildLiquibase) - Flyway - GCP: Sample project that creates and deploys the Sakila database to Microsoft SQL Server using Flyway. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildFlyway) - RoundhousE - GCP: Sample project that creates and deploys the Sakila database to Microsoft SQL Server using RoundhousE. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildRoundhouse) - DBUp - GCP: Sample project for deploying to SQL Server using DBUp by using the worker pool variable type. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_BuildDBUp) - Grate - GCP: Sample project that creates and deploys the Sakila database to Microsoft SQL Server using Grate. [Build definition](https://teamcity.octopussamples.com/buildConfiguration/Sakila_Grate) ### IIS \{#deployments-iis} Learn more about the [IIS](/docs/deployments/windows/iis-websites-and-application-pools) support that Octopus has to offer, including IIS deployments, and community step templates that allow fine-grain control of your IIS Websites and applications. **Pattern - Blue-Green** - Random Quotes .NET IIS: Deploys the .NET version of Random Quotes using the Blue/Green environment pattern to a single server with multiple applications. 
[Build definition](https://bamboo.octopussamples.com/browse/RAN-NET)
- Random Quotes .NET: Deploys the .NET version of Random Quotes using the Blue/Green environment pattern. [Build definition](https://bamboo.octopussamples.com/browse/RAN-NET)

**Pattern - IaC**

- Random Quotes - Azure: Uses runbooks to create Azure virtual machines [using Terraform code](https://dev.azure.com/octopussamples/Terraform%20-%20RandomQuotes%20Azure/_git/Terraform%20-%20RandomQuotes%20Azure) for hosting an application and backing database, then deploys the Random Quotes database and application to those machines.
- Random Quotes AWS: This is a sample project that showcases the use of Terraform with runbooks to create the infrastructure that is needed for a project's deployments. The runbooks in this project spin up two EC2 instances within AWS, which are then used to host the Random Quotes web application and database.

**Pattern - Rollbacks**

- 03 OctoFX - Complex Rollback: A sample .NET project showing deployments to Windows with a complex set of rollback functionality added.
- 01 OctoFx - Original: A sample .NET project showing deployments to Windows *without* any rollback functionality.
- 02 OctoFX - Simple Rollback: A sample .NET project showing deployments to Windows with a simple set of rollback functionality added.

**Pattern-AutoScaling**

- To Do Web Application: A sample child application used with the [Azure VMSS orchestration](https://samples.octopus.app/app#/Spaces-742/projects/azure-vmss-orchestration/deployments) project.
- App + VMSS: A sample project showing how to deploy an application when using an Azure Virtual Machine scale set (VMSS).

**Target - Hybrid**

- Octo Pet Shop: Sample project that uses the Deploy to IIS step to deploy to both IIS on a VM and an Azure Web App.
[Build definition](https://app.circleci.com/pipelines/github/OctopusSamples/OctoPetShop) **Target - Windows** - eShopOnWeb: Microsoft ASP.NET Core 5.0 Reference Application - OctoFX: Build Server: [Azure DevOps](https://dev.azure.com/octopussamples/octofx) Webinar: [YouTube](https://youtu.be/mLgeQRUlcl0) ### Java \{#deployments-java} Octopus has a range of [Java application](/docs/deployments/java) deployment examples, from deploying to Tomcat via Manager, deployments to Wildfly EAP and community step templates that offer first-class database deployment options like Flyway. **Pattern - Blue-Green** - Random Quotes Java: Deploys the Java version of Random Quotes to Tomcat using the Blue/Green environment pattern. [Build definition](https://bamboo.octopussamples.com/browse/RAN-JAVA) - Random Quotes - Tenanted: Deploys the Java version of Random Quotes to Tomcat using the Blue/Green tenant pattern. **Pattern - Rollbacks** - 01 PetClinic - Original: PetClinic Java Springboot application deploying to MySQL, Wildfly, and Tomcat - 02 PetClinic - SimpleRollback: PetClinic Java Springboot application deploying to MySQL, Wildfly, and Tomcat - 03 PetClinic - ComplexRollback: PetClinic Java Springboot application deploying to MySQL, Wildfly, and Tomcat **Pattern - Rolling** - PetClinic - no rolling deploy: A project showing a deployment process which doesn't use the rolling deployments pattern **Target - Tomcat** - Pet Clinic AWS: Deploy Java PetClinic to AWS Linux [Build definition](https://dev.azure.com/octopussamples/PetClinic/_build?definitionId=25) **Target - Wildfly** - PetClinic: A sample project deploying the Java PetClinic app to a Linux VM in Azure. ### Kubernetes \{#deployments-kubernetes} View practical [Kubernetes](/docs/deployments/kubernetes) examples, including deployment, service, ingress resources, and helm chart upgrades. 
**Pattern - Rollbacks**

- 01 Kubernetes Original: A sample project showing deployments of PetClinic to Kubernetes *without* any rollback functionality.
- 02 Kubernetes - Simple Rollback: A sample project showing deployments of PetClinic to Kubernetes with a simple set of rollback functionality added.
- 03 Kubernetes - Complex Rollback: A sample project showing deployments of PetClinic to Kubernetes with a complex set of rollback functionality added.

**Target - Kubernetes**

- Rancher: A sample project that deploys the PetClinic application to a Rancher-managed Kubernetes cluster.
- OctopusDeploy: Set up an AWS EKS Cluster and target within Octopus. Resources: [AWS EKS Configuration](https://github.com/OctopusSamples/IaC/blob/master/aws/Kubernetes/cluster.yaml), [eksctl](https://github.com/weaveworks/eksctl)
- Multi-Cloud PetClinic: Set up AWS EKS, GCP GKE, and Azure AKS Clusters and targets within Octopus
- Database: A sample project that deploys the database component of the Octo Pet Shop application to Kubernetes.
- Product API: A sample project that deploys the Product API component of the Octo Pet Shop application to Kubernetes.
- Shopping Cart API: A sample project that deploys the Shopping Cart API component of the Octo Pet Shop application to Kubernetes.
- Web App: A sample project that deploys the main Web app component of the Octo Pet Shop application to Kubernetes.
- Migrations: A sample project that deploys the database migrations of the Octo Pet Shop application to Kubernetes.
- Octo Pet Shop - Raw YAML: A sample project that uses the Deploy RAW YAML step to deploy the OctoPetShop app to a Kubernetes cluster.
- PetClinic Helm Chart: Sample showing how to deploy the PetClinic application using a Helm Chart.
Source code located [here](https://dev.azure.com/octopussamples/_git/PetClinic?path=/petclinic-chart)
- PetClinic: Set up an AWS EKS Cluster and target within Octopus. Resources: [AWS EKS Configuration](https://github.com/OctopusSamples/IaC/blob/master/aws/Kubernetes/cluster.yaml), [eksctl](https://github.com/weaveworks/eksctl)
- nginx+httpd: A sample project showing deployment of nginx and httpd as described in [this blog post](https://octopus.com/blog/deploying-applications-to-kubernetes).

## Runbook features \{#runbook-features}

This section contains features that are found in Octopus [runbooks](/docs/runbooks) in our samples instance. Each feature list is categorized by the Octopus Space, Project and runbook in which it can be found.

### AWS \{#runbooks-aws}

Explore examples of Octopus Deploy's [AWS integration](/docs/deployments/aws), including EC2, AWS RDS database, AWS CLI, Lambda and ECS deployments.

**Pattern - Blue-Green**

- Random Quotes .NET
  - Change Production Group: Changes the blue-green designation.
  - Create Infrastructure: Creates environment-specific infrastructure.
  - Destroy Infrastructure: Destroys environment-specific infrastructure.
  - Destroy the Kraken: Destroys project infrastructure and calls Destroy Infrastructure for each environment.
  - Unleash the Kraken: Creates project infrastructure and calls Create Infrastructure for each Environment.
- Random Quotes .NET IIS
  - Change Production Group: Changes the blue-green designation.
  - Destroy Infrastructure: Destroys environment-specific infrastructure.
  - Destroy the Kraken: Destroys project infrastructure and calls Destroy Infrastructure for each environment.
  - Unleash the Kraken: Creates project infrastructure and calls Create Infrastructure for each Environment.
- Random Quotes Java
  - Change Production Group: Runbook that will switch traffic to one of the two load-balancer listener options for the Octopus Environment: _Blue_ or _Green_.
- Create Infrastructure: Runbook that will spin up the _Random Quotes Java_ project infrastructure in AWS for an Octopus Environment. - Destroy Infrastructure: Runbook that will tear down the _Random Quotes Java_ project infrastructure in AWS for an Octopus Environment. - Destroy the Kraken: Runbook that will tear down **all** the _Random Quotes Java_ project infrastructure in AWS. - Unleash the Kraken: Runbook that will spin up **all** the required _Random Quotes Java_ project infrastructure in AWS. **Pattern - IaC** - Dynamic worker army - Create Infrastructure: Spins up the worker army **Pattern - Rolling** - AWS - Rolling Deploy - Spin up Environment Resources - Tear Down AWS Infrastructure **Pattern-AutoScaling** - AWS ASG - Scale Down ASG - Scale Up ASG **Secrets Management** - AWS Secrets Manager - Retrieve Secrets: This runbook retrieves secrets from AWS Secrets Manager and creates sensitive output variables to use in deployments and runbooks. **Target - Containers** - AWS ECS - Deregister task definitions: Removes the task definitions from ECS **Target - Kubernetes** - Multi-Cloud PetClinic - Create EKS Cluster: Create an Elastic Kubernetes Service cluster on AWS. - Destroy EKS Cluster: Destroy the AWS Elastic Kubernetes Service cluster. - OctopusDeploy - Create Cluster: Creating a two node Kubernetes cluster in AWS using [eksctl](https://github.com/weaveworks/eksctl). Rather than the default configuration eksctl will use [this](https://github.com/OctopusSamples/IaC/blob/master/aws/Kubernetes/cluster.yaml) cluster config. - Delete Cluster: Delete Kubernetes cluster and node groups from AWS - PetClinic - Create Cluster: Creating a two node Kubernetes cluster in AWS using [eksctl](https://github.com/weaveworks/eksctl). Rather than the default configuration eksctl will use [this](https://github.com/OctopusSamples/IaC/blob/master/aws/Kubernetes/cluster.yaml) cluster config. 
This creates a single cluster in the Production environment, and then copies it as a target to all environments - Delete Cluster: Delete Kubernetes cluster and node groups from AWS **Target - Oracle** - DBUp RDS - Temp - Flyway RDS - Test **Target - Serverless** - AWS OctoSubscriber - Delete S3 bucket - Get Canonical ID - Get role arn - Spin Up Subscriber Infrastructure - Tear Down AWS Subscriber Infrastructure - AWS Subscriber S3 - Get Canonical ID - Spin Up Subscriber Infrastructure **Target - SQL Server** - AWS Backup and Restore S3 - Backup Database - Restore Database - Redgate - Feature Branch Example - Create AWS Redgate Masked Database Backup: Runbook that will create a masked copy of production so developers and testing can use it. - Delete AWS Redgate Feature Branch Database: Runbook that will back up and then delete the feature branch database. - Delete AWS Redgate RDS Snapshots: Runbook to delete old RDS snapshots, as the max allowed is 100 - Restore AWS Redgate Masked Backup for Feature Branches: Runbook that will create a feature branch database on test using the masked production database backup stored in S3 - Spin Up AWS Redgate SQL Server RDS Server: Runbook to spin up a database in AWS for the Redgate Sample - Tear Down AWS Redgate SQL Server RDS Server: Runbook to tear down the database in AWS for Redgate sample **Target - Tomcat** - Pet Clinic AWS - Create Infrastructure - Destroy Infrastructure ### Azure \{#runbooks-azure} Explore ways to use Octopus Deploy's built-in [Azure](/docs/deployments/azure) steps, including Azure WebApp, Azure CLI, ARM template and Azure SQL deployments.
**Octopus Admin** - Artifactory Sample Management - Renew and Deploy SSL Certificate: Runbook which renews and stores LetsEncrypt certificates in the Octopus Certificate library and deploys to the target machine - Start Artifactory VM: Starts the Artifactory VM running in Azure - Stop Artifactory VM: Stops the Artifactory VM running in Azure - Azure VM management - Check for Premium LRS SSDs: This runbook uses the Azure CLI to check all virtual machines in a subscription for the presence of Premium_LRS on either the OS or Data disk - Lets Encrypt Certificates - Renew LetsEncrypt Certificates: Runbook which renews and stores LetsEncrypt certificates in the Octopus Certificate library - Provision SQL Server - Destroy Azure SQL IaaS - Destroy Azure SQL PaaS - Provision Azure SQL IaaS - Provision Azure SQL PaaS - Samples build servers - Renew Lets Encrypt certificates - Shutdown Samples build server VMs - Start Samples build server VMs - Windows Server Admin - Create hardened Windows Azure VM: - Create an Azure Windows virtual machine. - Configure that machine as an Octopus tentacle - Run hardening script (Windows Server 2016 Hardening runbook) - Tear down hardened Windows Azure VM: Remove the Deployment Target from Octopus and then tear down the Resource Group containing the VM from Azure. **Pattern - IaC** - PowerShell DSC IIS Server - Create Infrastructure: Creates infrastructure for the PowerShell DSC IIS Server project. - Destroy Infrastructure: Tears down infrastructure for the PowerShell DSC IIS Server project. - Random Quotes - Azure - Create and Configure Terraform Infrastructure: Creates necessary infrastructure in Azure [using Terraform](https://dev.azure.com/octopussamples/_git/Azure-Terraform-RandomQuotes) and configures it for application deployment. 
**Pattern - Rolling**
- Azure VMSS
  - Spin Up Azure Resources
  - Tear Down Azure Resources
- PetClinic Infrastructure
  - 3-Create GCP PetClinic Project Infrastructure: Runbook that will spin up the Rolling Deploy - Conversion projects infrastructure
  - 4-Destroy the GCP Kraken: Runbook that will tear down ALL the Rolling Deploy - Conversion projects GCP infrastructure, using execution containers for workers

**Pattern - Tenants**
- Car Rental
  - Create Azure Web Apps
  - Destroy Azure Web Apps
- OctoHR
  - Create Infrastructure: Create environment-specific infrastructure.
  - Destroy Infrastructure: Destroy environment-specific infrastructure.
  - Test enable app_offline
- OctoPetShop
  - Create Azure Web Apps
  - Destroy Azure Database
  - Destroy Azure Web Apps
- Vet Clinic
  - Create Azure Web Apps
  - Destroy Azure Web Apps

**Pattern-AutoScaling**
- Azure VMSS Orchestration
  - Reconcile VMSS Provisioning: Runbook that will ensure Octopus Deploy and the VMSS VMs are in sync.
  - Scale Down VMSS: Step template to scale down the VMSS to be used on a schedule
  - Scale VMSS: Runbook to scale a VMSS
  - Spin Up VMSS: Runbook to create a new VMSS
  - Tear Down VMSS: Runbook to delete the VMSS

**Secrets Management**
- Azure Key Vault
  - Create Azure Key Vault: This runbook creates the infrastructure required for retrieving secrets from Azure Key Vault.
  - Retrieve Secrets: This runbook retrieves secrets from Azure Key Vault and creates sensitive output variables to use in deployments and runbooks.
- HashiCorp Vault
  - Create HashiCorp Vault Server: This runbook creates the infrastructure required to retrieve secrets from a HashiCorp Vault server.

**Target - Kubernetes**
- Multi-Cloud PetClinic
  - Create AKS Cluster: Create an Azure Kubernetes Service cluster.
  - Destroy AKS Cluster: Destroy the Azure Kubernetes Service cluster and resource group.
- Octo Pet Shop - Raw YAML
  - Create Octo Pet Shop Azure K8s Cluster: Runbook to Spin Up a K8s Cluster
  - Destroy Octo Pet Shop Azure K8s Cluster: Runbook to Destroy the K8s Cluster
- PetClinic Helm Chart
  - Create Infrastructure: Runbook to Spin Up a K8s Cluster
  - Destroy Infrastructure: Runbook to Destroy the K8s Cluster

**Target - Payara**
- PetClinic
  - Create Infrastructure: Create project specific infrastructure.
  - Destroy Infrastructure: Destroys project specific infrastructure.

**Target - Serverless**
- Azure OctoSubscriber
  - Create Infrastructure: Creates infrastructure specific to this project.
  - Test

**Target - SQL Server**
- DACPAC - Azure SQL
  - Spin up OctoFX-DACPAC Azure database
  - Teardown OctoFX-DACPAC Azure database
- DBUp - Azure SQL
  - Delete Azure SQL Server Database: Runbook to tear down the database in Azure
  - Export Azure SQL Server Database
  - Spin Up Azure SQL Server Database: Runbook to spin up a database in Azure
- DBUp - GCP
  - Delete Azure SQL Server Database: Runbook to tear down the database in Azure
  - Export Azure SQL Server Database
  - Spin Up Azure SQL Server Database: Runbook to spin up a database in Azure
- Flyway - Azure SQL
  - Destroy Infrastructure: Drops database for the environment.
- Flyway - Azure SQL Execution Containers
  - Destroy Infrastructure: Drops database for the environment.
- Grate - Azure SQL
  - Destroy Infrastructure: Destroys the RoundhousE database for the environment.
- Liquibase - Azure SQL
  - Destroy Infrastructure: Destroys liquibase database for environment.
- Redgate - Real World Example
  - Delete Redgate SQL Server Database: Runbook to tear down the database in Azure for Redgate sample
  - Delete Resource Group deployment: Deletes deployments to the resource group so we don't go over quota.
  - Spin Up Redgate SQL Server Database: Runbook to spin up a database in Azure for the Redgate Sample
- RoundhousE - Azure SQL
  - Destroy Infrastructure: Destroys the RoundhousE database for the environment.
**Target - Tomcat**
- Pet Clinic - Azure Web App
  - Create Infrastructure: Creates project-specific infrastructure.
  - Destroy Infrastructure: Destroys project-specific infrastructure.
- PetClinic - Demo
  - Create Infrastructure: Creates project-specific infrastructure.
  - Destroy Infrastructure: Destroys project-specific infrastructure.

**Target - WebSphere**
- PetClinic
  - Create Infrastructure: Creates project specific infrastructure.
  - Destroy Infrastructure: Destroys environment specific infrastructure.

**Target - Windows**
- OctoFX
  - Manual Failover to Primary: This is a Runbook designed to fail back from Disaster Recovery in UK South to Western Europe. Please check all resources are healthy before running this Runbook:
    - Must be run in the context of Production for URL testing to work successfully.
  - Monitor Primary Website & Failover to DR if offline: This is to be run in the context of Disaster Recovery so it can spin up the correct machines and then failover:
    - Checks Production URL
    - Starts DR SQL & Web
    - Switches DNS over. 60s TTL
  - Start Environment: Starts the Web and SQL Server for specified environments. This is the template for the scheduled triggers turning on infrastructure.
  - Stop Environment: Stops the Web and SQL Server for specified environments. This is the template for the scheduled triggers turning off infrastructure.

### Google Cloud \{#runbooks-google-cloud}

Find out more about Octopus Deploy's dedicated [Google Cloud](/docs/deployments/google-cloud) support, including gcloud CLI, Google Container Registry (GCR), Terraform, and Kubernetes deployments.

**Pattern - Rolling**
- PetClinic Infrastructure
  - 1-Create GCP Ubuntu 20.04 Worker: Runbook that will spin up the Rolling Deploy - Conversion projects GCP worker.
  - 2-Configure GCP NLB Target Pools: Runbook that will configure a Network Load Balancer with target pools for the PetClinic project
  - 3-Create GCP PetClinic Project Infrastructure: Runbook that will spin up the Rolling Deploy - Conversion projects infrastructure
  - 4-Destroy the GCP Kraken: Runbook that will tear down ALL the Rolling Deploy - Conversion projects GCP infrastructure, using execution containers for workers
  - 5-Destroy GCP Ubuntu 20.04 Worker: Unregister the worker from Octopus and delete the machine.
  - Create GCP Cloud MySQL instance: A Runbook that will spin up a MySQL instance for use with PetClinic projects. This can be run as a one-off.

**Secrets Management**
- GCP Secret Manager
  - Retrieve Secrets: This runbook retrieves secrets from Google Cloud Secret Manager and creates sensitive output variables to use in deployments and runbooks.

**Target - Kubernetes**
- Multi-Cloud PetClinic
  - Create GKE Cluster: Create a Google Kubernetes Engine cluster.
  - Destroy GKE Cluster: Destroy the Google Kubernetes Engine cluster.
- Rancher
  - Create Infrastructure: Creates VM in GCP that runs Rancher.
  - Destroy Infrastructure: Tears down VM hosting Rancher and removes clusters from Deployment Targets.

**Target - SQL Server**
- Flyway - GCP
  - Destroy Infrastructure: Drops database for the environment.
- Grate - GCP
  - Destroy Infrastructure: Destroys the RoundhousE database for the environment.
- Liquibase - GCP
  - Destroy Infrastructure: Destroys liquibase database for environment.
- RoundhousE - GCP
  - Destroy Infrastructure: Destroys the RoundhousE database for the environment.

### IIS \{#runbooks-iis}

Learn more about the [IIS](/docs/deployments/windows/iis-websites-and-application-pools) support that Octopus has to offer, including IIS deployments, and community step templates that allow fine-grained control of your IIS websites and applications.

**Pattern - Blue-Green**
- Random Quotes .NET
  - Create Infrastructure: Creates environment-specific infrastructure.
**Target - Windows**
- Computer Provisioning
  - Install Developer Machine Dependencies
- OctoFX
  - Backup & Restore SQL Backup from Production to Environment: Back up the Production database and restore it to the environment picked when running:
    - If you select Production, it will overwrite Production (only to be used for restoring backups to Production)
    - If you select Test, it will restore to Test
    - If you select Development, it will restore to Development
    - If you select Disaster Recovery, it will restore to Disaster Recovery
  - Install Dependencies: Installs dependencies for the application to run. It installs the following:
    - Chocolatey
    - Chocolatey VSCode install
    - Chocolatey .NET 4.7.2
    - Chocolatey .NET 4.8
    - IIS & dependencies
    - Checks for pending restarts and restarts if required
  - Restart IIS AppPool: Restarts an IIS AppPool in the required environment
  - Restart IIS Website: Runbook that restarts a named IIS Website
  - Start IIS AppPool: Starts IIS App Pool on Web Servers
  - Stop IIS AppPool: Stops an IIS AppPool in the required environment

### Java \{#runbooks-java}

Octopus has a range of [Java application](/docs/deployments/java) deployment examples, from deploying to Tomcat via Manager and deployments to WildFly EAP, to community step templates that offer first-class database deployment options like Flyway.

**Pattern - Rollbacks**
- 03 PetClinic - ComplexRollback
  - Undeploy Application from Tomcat: Undeploy an application from a Tomcat server.

### Kubernetes \{#runbooks-kubernetes}

View practical [Kubernetes](/docs/deployments/kubernetes) examples, including deployment, service, and ingress resources, and Helm chart upgrades.

**Pattern - Rollbacks**
- 03 PetClinic - ComplexRollback
  - Undeploy Application from Tomcat: Undeploy an application from a Tomcat server.

### Terraform \{#runbooks-terraform}

See how to use Octopus built-in [Terraform](/docs/deployments/terraform) steps to manage your infrastructure and resources in a convention-based, templated way.
Our samples instance includes Terraform `plan`, `apply`, and `destroy` steps.

**Pattern - IaC**
- Dynamic worker army
  - Create Infrastructure: Spins up the worker army.
  - Destroy Infrastructure: Tears down the worker army.
- Octopus Deploy Terraform Provider
  - Configure Existing Octopus Deploy Instance Using the Octopus Terraform Provider: This Runbook will configure basic settings within an existing Octopus Deploy instance using data resources for the Octopus Deploy Terraform provider. These include `octopusdeploy_project_groups`, `octopusdeploy_environments`, `octopusdeploy_lifecycles` and `octopusdeploy_teams`. This will also create the following resources:
    - Project With A Deployment Process
    - Tenant
    - Tenant Tag Set
    - Tenant Project Variable
  - Configure New (Blank) Octopus Deploy Instance Using The Octopus Terraform Provider: This Runbook will configure basic settings within a new (blank) Octopus Deploy instance. The Octopus Terraform Provider will set up the following:
    - Octopus Project Group
    - Project examples with variables and deployment processes for Tenanted and Untenanted deployments
    - Deployment Environments
    - Worker Pools
    - Tenants
    - Deployment Lifecycle
    - Tenant Tag Set
    - Octopus Teams
- Random Quotes - Azure
  - Create and Configure Terraform Infrastructure: Creates necessary infrastructure in Azure [using Terraform](https://dev.azure.com/octopussamples/_git/Azure-Terraform-RandomQuotes) and configures it for application deployment.
  - Destroy Terraform Resources and Delete Deployment Targets: Destroys created Terraform resources and removes registered deployment targets.
- Random Quotes AWS
  - Create Infrastructure: Creates EC2 instances using Terraform, registers them as deployment targets with Octopus, and then provisions them with the necessary tooling for application deployment.
  - Destroy Infrastructure: Destroys created EC2 instances and all supporting resources created through Terraform, along with deregistering them as targets within Octopus.

**Target - Serverless**
- AWS OctoSubscriber
  - Spin Up Subscriber Infrastructure
  - Tear Down AWS Subscriber Infrastructure
- AWS Subscriber S3
  - Spin Up Subscriber Infrastructure
  - Tear Down AWS Subscriber Infrastructure

**Target - Windows**
- eShopOnWeb
  - Create Infrastructure: Stands up an Azure VM with IIS and SQL Server Express using Terraform for eShopOnWeb to be deployed onto.
  - Destroy Infrastructure: Destroys the Azure VM and infrastructure created for this project.

# Token account

Source: https://octopus.com/docs/infrastructure/accounts/tokens.md

Tokens can be added to Octopus as accounts. This is useful, for instance, if you are deploying to [Kubernetes Targets](/docs/kubernetes/targets/kubernetes-api).

## Add a token to Octopus

1. Navigate to **Deploy ➜ Manage ➜ Accounts** and click **ADD ACCOUNT**.
1. Select **Token** from the drop-down menu.
1. Give the account a meaningful name.
1. Add a description.
1. Enter the token into the **Token** field.
1. If you want to restrict which environments can use the account, select the environments that are allowed to use the account. If you select no environments, all environments will be allowed to use the account.

# New Octopus Target Command

Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/new-octopustarget.md

:::div{.warning}
**Deprecated**

Creating deployment targets using the `New-OctopusTarget` function has been deprecated in favor of using [Cloud Target Discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery).
:::

In **Octopus 2021.3**, a new architecture for deployment targets and steps was developed, known as **step packages**. Targets defined by step packages can be created by either PowerShell or Bash functions available in any Octopus script-running context.
Not all targets are defined by step packages. The complete list of targets defined by step packages is available below.

To create a target defined by a step package, you will need to know the `target identifier` and the `inputs` required by the target. These can currently be found in the following locations:

| Target | Identifier | Required Inputs |
| --- | --- | --- |
| AWS ECS Cluster | [Identifier](https://github.com/OctopusDeploy/step-package-ecs/blob/main/targets/ecs-target-v2/src/metadata.json#L5) | [Inputs](https://github.com/OctopusDeploy/step-package-ecs/blob/main/targets/ecs-target-v2/src/inputs.ts) |

## New octopus target

Command (pwsh): **New-OctopusTarget**

| Parameter | Value |
| --- | --- |
| `-name` | The name of the target to create |
| `-targetId` | The target identifier of the target to create |
| `-inputs` | The inputs required to define the target being created |
| `-roles` | Comma-separated list of [target tags](/docs/infrastructure/deployment-targets/target-tags) to assign |
| `-updateIfExisting` | Will update an existing target with the same name, or create it if it doesn't exist |
| `-workerPoolIdOrName` | Name or ID of the Worker Pool for the deployment target to use (Optional) |

Command (bash): **new_octopustarget**

| Parameter | Value |
| --- | --- |
| `-n` \| `--name` | The name of the target to create |
| `-t` \| `--targetId` | The target identifier of the target to create |
| `--inputs` | The inputs required to define the target being created |
| `--roles` | Comma-separated list of [target tags](/docs/infrastructure/deployment-targets/target-tags) to assign |
| `--update-if-existing` | Will update an existing target with the same name, or create it if it doesn't exist |
| `--worker-pool` | Name or ID of the Worker Pool for the deployment target to use (Optional) |

### Examples

The examples below demonstrate creating a new AWS ECS Cluster target, identified by the `aws-ecs-target` target identifier. These scripts would typically be invoked after creating the cluster in a preceding step. The required information can be passed to these scripts via [passing parameters](/docs/deployments/custom-scripts/passing-parameters-to-scripts/), via [output variables](/docs/deployments/custom-scripts/output-variables) published in preceding steps, or can simply be hard-coded.

#### Account credentials

Below is an example of creating an AWS ECS Cluster target with [account credentials](/docs/infrastructure/deployment-targets/amazon-ecs-cluster-target/#aws-account).
PowerShell

```powershell
$inputs = @"
{
    "clusterName": "$($OctopusParameters["clusterName"])",
    "region": "$($OctopusParameters["region"])",
    "authentication": {
        "credentials": {
            "type": "account",
            "account": "$($OctopusParameters["awsAccount"])"
        },
        "role": {
            "type": "noAssumedRole"
        }
    }
}
"@

New-OctopusTarget -Name "$($OctopusParameters["target_name"])" -TargetId "aws-ecs-target" -Inputs $inputs -Roles "$($OctopusParameters["role"])"
```
Bash

```bash
read -r -d '' INPUTS <<EOT
{
    "clusterName": "$(get_octopusvariable "clusterName")",
    "region": "$(get_octopusvariable "region")",
    "authentication": {
        "credentials": {
            "type": "account",
            "account": "$(get_octopusvariable "awsAccount")"
        },
        "role": {
            "type": "noAssumedRole"
        }
    }
}
EOT

new_octopustarget -n "$(get_octopusvariable "target_name")" -t "aws-ecs-target" --inputs "$INPUTS" --roles "$(get_octopusvariable "role")"
```

#### Worker credentials

Below is an example of creating an AWS ECS Cluster target with [worker credentials](/docs/infrastructure/deployment-targets/amazon-ecs-cluster-target/#worker-credentials).
PowerShell

```powershell
$inputs = @"
{
    "clusterName": "$($OctopusParameters["clusterName"])",
    "region": "$($OctopusParameters["region"])",
    "authentication": {
        "credentials": {
            "type": "worker"
        },
        "role": {
            "type": "noAssumedRole"
        }
    }
}
"@

New-OctopusTarget -Name "$($OctopusParameters["target_name"])" -TargetId "aws-ecs-target" -Inputs $inputs -Roles "$($OctopusParameters["role"])"
```
Bash

```bash
read -r -d '' INPUTS <<EOT
{
    "clusterName": "$(get_octopusvariable "clusterName")",
    "region": "$(get_octopusvariable "region")",
    "authentication": {
        "credentials": {
            "type": "worker"
        },
        "role": {
            "type": "noAssumedRole"
        }
    }
}
EOT

new_octopustarget -n "$(get_octopusvariable "target_name")" -t "aws-ecs-target" --inputs "$INPUTS" --roles "$(get_octopusvariable "role")"
```

#### Assuming an IAM role

Below is an example of creating an AWS ECS Cluster target using an [assumed role](/docs/infrastructure/deployment-targets/amazon-ecs-cluster-target/#assuming-an-iam-role). Assuming a role can be used with either worker or account credentials; the example below uses worker credentials.
PowerShell

```powershell
$inputs = @"
{
    "clusterName": "$($OctopusParameters["clusterName"])",
    "region": "$($OctopusParameters["region"])",
    "authentication": {
        "credentials": {
            "type": "worker"
        },
        "role": {
            "type": "assumeRole",
            "arn": "$($OctopusParameters["assumeRoleArn"])",
            "sessionName": "$($OctopusParameters["assumeRoleSessionName"])",
            "sessionDuration": $($OctopusParameters["assumeRoleSessionDuration"]),
            "externalId": "$($OctopusParameters["assumeRoleExternalId"])"
        }
    }
}
"@

New-OctopusTarget -Name "$($OctopusParameters["target_name"])" -TargetId "aws-ecs-target" -Inputs $inputs -Roles "$($OctopusParameters["role"])"
```

In the `role` object, only `arn` is required; `sessionName`, `sessionDuration`, and `externalId` are optional.
Bash

```bash
read -r -d '' INPUTS <<EOT
{
    "clusterName": "$(get_octopusvariable "clusterName")",
    "region": "$(get_octopusvariable "region")",
    "authentication": {
        "credentials": {
            "type": "worker"
        },
        "role": {
            "type": "assumeRole",
            "arn": "$(get_octopusvariable "assumeRoleArn")",
            "sessionName": "$(get_octopusvariable "assumeRoleSessionName")",
            "sessionDuration": $(get_octopusvariable "assumeRoleSessionDuration"),
            "externalId": "$(get_octopusvariable "assumeRoleExternalId")"
        }
    }
}
EOT

new_octopustarget -n "$(get_octopusvariable "target_name")" -t "aws-ecs-target" --inputs "$INPUTS" --roles "$(get_octopusvariable "role")"
```

:::div{.hint}
If your process creates dynamic deployment targets from a script, and then deploys to those targets in a subsequent step, make sure you add a full [health check](/docs/projects/built-in-step-templates/health-check) step for the role of the newly created targets after the step that creates and registers the targets. This allows Octopus to ensure the new targets are ready for deployment by staging packages required by subsequent steps that perform the deployment.
:::

# Polling Tentacles over Standard HTTPS Port

Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443.md

[Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) usually communicate with Octopus Server over TCP port 10943. If your network configuration prevents outbound connections from your Tentacles on non-standard ports, you can configure Tentacle to use port 443 (HTTPS).

:::div{.hint}
**Please note:**

- You must be running **Tentacle 6.3.417** (or higher) to configure Polling Tentacles over port 443.
- Configuring Polling Tentacles over port 443 does **not use WebSockets**. For more information, see [Polling Tentacles over WebSockets](/docs/infrastructure/deployment-targets/tentacle/windows/polling-tentacles-web-sockets).
:::

The procedure for configuring Polling Tentacles to use port 443 varies based upon your chosen method of hosting Octopus Server.

:::div{.hint}
It may be helpful to know that the Tentacle agent supports command-line operations. If you're using a Polling Tentacle and need to configure a polling proxy, refer to our documentation on [Polling Proxy](/docs/octopus-rest-api/tentacle.exe-command-line/polling-proxy) for more details.
:::

## Octopus Cloud

The setup of a Polling Tentacle for an [Octopus Cloud](/docs/octopus-cloud) instance over port 443 is the same as a [Polling Tentacle over port 10943](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles), except when registering the Tentacle. Change the `register-with` and `register-worker` commands:

- Omit the `--server-comms-port` parameter.
- Specify the `--server-comms-address` parameter. The address to use is your [Octopus Cloud](/docs/octopus-cloud) instance URL prefixed with `polling.` (e.g. `https://polling.<your-instance>.octopus.app`).

### Registering a new Tentacle

```powershell
.\Tentacle register-with --instance MyInstance --server "https://<your-instance>.octopus.app" --server-comms-address "https://polling.<your-instance>.octopus.app" --comms-style TentacleActive --apiKey "API-YOUR-KEY" --environment "Test" --role "Web"
```

### Changing an existing Tentacle

```powershell
.\Tentacle service --instance MyInstance --stop
.\Tentacle configure --reset-trust
.\Tentacle register-with --instance MyInstance --server "https://<your-instance>.octopus.app" --server-comms-address "https://polling.<your-instance>.octopus.app" --comms-style TentacleActive --apiKey "API-YOUR-KEY" --environment "Test" --role "Web"
.\Tentacle service --instance MyInstance --start
```

### Registering a new Worker

```powershell
.\Tentacle register-worker --instance MyInstance --server "https://<your-instance>.octopus.app" --server-comms-address "https://polling.<your-instance>.octopus.app" --comms-style TentacleActive --apiKey "API-YOUR-KEY" --workerpool MyWorkerPool
```

### Changing an existing Worker

```powershell
.\Tentacle service --instance MyInstance --stop
.\Tentacle configure --reset-trust
.\Tentacle register-worker --instance MyInstance --server "https://<your-instance>.octopus.app" --server-comms-address "https://polling.<your-instance>.octopus.app" --comms-style TentacleActive --apiKey "API-YOUR-KEY" --workerpool MyWorkerPool
.\Tentacle service --instance MyInstance --start
```

## Self-hosted

For self-hosted installations of Octopus Server, you will require specific network configuration and/or services to support the use of Polling Tentacles over port 443. This may be in addition to any existing configuration to support making the Octopus Web Portal and REST API available over port 443.

A reverse proxy specific to Polling Tentacles (e.g. NGINX) could be set up on the Octopus Server or on a machine/appliance that fronts it.
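Such a stream-based reverse proxy can be sketched in NGINX configuration. This is a hedged, minimal example (not taken from the Octopus docs): the upstream host `octopus.internal.example.com` is a placeholder for your Octopus Server, and the `stream` block must sit at the top level of `nginx.conf`, alongside (not inside) any `http` block.

```nginx
# Minimal sketch: forward Tentacle polling traffic arriving on 443
# to the default Octopus polling port 10943.
# "octopus.internal.example.com" is a placeholder hostname.
stream {
    server {
        listen 443;
        proxy_pass octopus.internal.example.com:10943;
    }
}
```

Because Tentacle and Octopus Server establish their own secure connection over this TCP channel, the proxy simply forwards bytes and does not terminate TLS.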
The reverse proxy would inspect connections coming in on the desired port (in this case 443) and forward them to the port configured in Octopus Server for Polling Tentacles (defaults to 10943).

:::div{.hint}
During registration with Octopus Server as a [Worker](/docs/infrastructure/workers) or [Deployment Target](/docs/infrastructure/deployment-targets), Polling Tentacles use HTTPS to communicate with the Octopus REST API. Once a Tentacle is registered with Octopus Server, it uses a [secure TCP connection](/docs/security/octopus-tentacle-communication) to communicate with Octopus Server, and doesn't make HTTP calls. This means you will need a TCP reverse proxy for Polling Tentacles, as opposed to the HTTP reverse proxy used for the Octopus Web Portal and REST API, and this may require an additional machine. For example, when using NGINX you should use a [stream](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) reverse proxy, not an [http](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/) reverse proxy, for the polling connection.
:::

For example:

- Configure a new DNS record dedicated to Polling Tentacle traffic. This will be used when registering your Workers and Tentacles (i.e. `--server-comms-address https://<polling-dns-record>`).
- Configure a reverse proxy rule to redirect inbound traffic on port 443 on the new DNS record to port 10943 on your Octopus Server.

The setup of a Polling Tentacle for your self-hosted instance over port 443 is the same as a [Polling Tentacle over port 10943](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles), except when registering the Tentacle. Change the `register-with` and `register-worker` commands:

- Omit the `--server-comms-port` parameter.
- Specify the `--server-comms-address` parameter. The address to use is your new DNS record (e.g. `https://<polling-dns-record>`).

### Registering a new Tentacle

```powershell
.\Tentacle register-with --instance MyInstance --server "https://<your-octopus-server>" --server-comms-address "https://<polling-dns-record>" --comms-style TentacleActive --apiKey "API-YOUR-KEY" --environment "Test" --role "Web"
```

### Changing an existing Tentacle

```powershell
.\Tentacle service --instance MyInstance --stop
.\Tentacle configure --reset-trust
.\Tentacle register-with --instance MyInstance --server "https://<your-octopus-server>" --server-comms-address "https://<polling-dns-record>" --comms-style TentacleActive --apiKey "API-YOUR-KEY" --environment "Test" --role "Web"
.\Tentacle service --instance MyInstance --start
```

### Registering a new Worker

```powershell
.\Tentacle register-worker --instance MyInstance --server "https://<your-octopus-server>" --server-comms-address "https://<polling-dns-record>" --comms-style TentacleActive --apiKey "API-YOUR-KEY" --workerpool MyWorkerPool
```

### Changing an existing Worker

```powershell
.\Tentacle service --instance MyInstance --stop
.\Tentacle configure --reset-trust
.\Tentacle register-worker --instance MyInstance --server "https://<your-octopus-server>" --server-comms-address "https://<polling-dns-record>" --comms-style TentacleActive --apiKey "API-YOUR-KEY" --workerpool MyWorkerPool
.\Tentacle service --instance MyInstance --start
```

## Learn more

For further reading on the installation and configuration of Tentacle:

- [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles)
- [Windows Tentacles](/docs/infrastructure/deployment-targets/tentacle/windows)
- [Linux Tentacles](/docs/infrastructure/deployment-targets/tentacle/linux)
- [Tentacle command line](/docs/octopus-rest-api/tentacle.exe-command-line)
- [register-with](/docs/octopus-rest-api/tentacle.exe-command-line/register-with)
- [register-worker](/docs/octopus-rest-api/tentacle.exe-command-line/register-worker)

# Running Tentacle under a specific user account

Source:
https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/running-tentacle-under-a-specific-user-account.md

Every process within a Tentacle is executed by the user account configured on the **OctopusDeploy Tentacle** service. By default, this is the **Local System** account. There are times when you might need to run the Tentacle under a specific user account, for instance:

- Run a script that needs to be executed by a user with higher permissions.
- Run a process that talks to a SQL database, and you want to use integrated authentication.

To change this setting, go to **Services ➜ OctopusDeploy Tentacle ➜ Properties ➜ Log On**.

:::figure
![](/docs/img/infrastructure/deployment-targets/tentacle/windows/images/3277918.jpg)
:::

Making the user a local administrator will be the easiest path to full functionality. If this is not possible, the following table acts as a guide to the minimal permission set that Tentacle must have for successful operation.

| Permission | Object | Reason | Applied with |
| --- | --- | --- | --- |
| Full control | The Octopus "Home" folder, e.g. `C:\Octopus` | Tentacle stores logs, temporary data, and dynamic configuration in this folder. | Windows Explorer |
| Read | The `HKLM\Software\Octopus\Tentacle` registry key | Tentacle determines the location of its configuration files from this key. | Regedit |
| Full control | The `Octopus Tentacle` Windows Service | Tentacle must be able to upgrade and restart itself for remote administration. | SC.EXE |
| Listen | Port **10933** | Tentacle accepts commands from Octopus on this port. | NETSH.EXE |

Additional permissions will be necessary depending on the kind of deployments Tentacle will perform (e.g. IIS configuration and so on).
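The minimal permissions in the table above could be applied from an elevated PowerShell prompt roughly as follows. This is a hedged sketch, not an official script: the service account `CORP\tentacle-svc`, the home folder `C:\Octopus`, and the firewall rule name are placeholder assumptions, and granting full control of the service requires editing its SDDL security descriptor with `sc.exe sdset`, which is only hinted at here.

```powershell
# Hedged sketch: apply the minimal Tentacle permissions from an elevated prompt.
# "CORP\tentacle-svc" and C:\Octopus are placeholders for your environment.

# Full control on the Octopus "Home" folder (inherited by files and subfolders)
icacls "C:\Octopus" /grant "CORP\tentacle-svc:(OI)(CI)F"

# Read access on the Tentacle registry key
$acl  = Get-Acl "HKLM:\SOFTWARE\Octopus\Tentacle"
$rule = New-Object System.Security.AccessControl.RegistryAccessRule(
    "CORP\tentacle-svc", "ReadKey", "Allow")
$acl.AddAccessRule($rule)
Set-Acl "HKLM:\SOFTWARE\Octopus\Tentacle" $acl

# Service control: inspect the current security descriptor first; granting
# full control means crafting a modified SDDL string for sc.exe sdset
sc.exe sdshow "OctopusDeploy Tentacle"

# Allow inbound connections on the Tentacle listen port
netsh advfirewall firewall add rule name="Octopus Tentacle" dir=in action=allow protocol=TCP localport=10933
```

Test the Tentacle health check from Octopus after applying these changes to confirm the account can actually operate the service.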
# Dynamic Worker pools

Source: https://octopus.com/docs/infrastructure/workers/dynamic-worker-pools.md

Dynamic Worker pools provide a quick and easy way to use [workers](/docs/infrastructure/workers) for your deployments. They are a special type of [worker pool](/docs/infrastructure/workers/worker-pools) available on [Octopus Cloud](/docs/octopus-cloud). Dynamic workers are isolated virtual machines, created on demand to run your deployment and runbook steps. They run on Azure and are created and managed by Octopus Cloud, which means you don't need to configure or maintain additional infrastructure.

## On demand

A dynamic worker is created on demand and leased to an Octopus Cloud instance for a limited time before being destroyed.

:::div{.info}
Octopus Cloud will automatically destroy dynamic workers as soon as one of these conditions is met:

- The worker has been idle for 60 minutes.
- The worker has existed for 72 hours (3 days).

Please reach out to our [support team](https://octopus.com/support) if you need these values adjusted for your instance.
:::

Worker VMs are provisioned with at least 20GB of available disk space, which persists until the worker is destroyed.

## Isolated

Each worker VM is provisioned exclusively for a specific customer, and is completely isolated from other customers.

## Dynamic Worker Images

Each dynamic worker pool uses a specific worker image. This is a VM image which determines the operating system (and OS version) running on the worker. You can edit a dynamic worker pool to change which image is used.

When you sign up to [Octopus Cloud](/docs/octopus-cloud) (or create a new [space](/docs/administration/spaces)), you automatically receive a worker pool for the `Ubuntu (default)` image, and a worker pool for the `Windows (default)` image.

The full list of available worker images includes both specific operating system versions (e.g., `Ubuntu Linux 22.04`) and generic "default" options such as `Ubuntu (default)`.
Choosing the default option means that your worker will use the latest stable worker image when it is released. The current default images are: - `Ubuntu (default)` ➜ `Ubuntu Linux 22.04` - `Windows (default)` ➜ `Windows Server Core 2022` ### Choosing an Image The default image is a good option to choose if you are: - Running a simple script that doesn't require specific tools or operating system versions - Running a step [inside a container](/docs/projects/steps/execution-containers-for-workers) If you're writing a script that relies on a specific version of tooling (e.g., Helm), then we recommend using [execution containers for workers](/docs/projects/steps/execution-containers-for-workers) to run the script in a Docker container with the tool versions you need. Alternatively, you can choose a specific worker image, instead of the "default" options, to prevent worker image upgrades from impacting your deployments. | Type | Pros | Cons | |--------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Default (eg `Ubuntu (default)`) | Automatically uses the latest image. Deployments will continue to work even when a worker image is marked as deprecated or decommissioned. | The versions of dependencies (e.g. Helm) are not fixed; deployments that rely on specific versions of dependencies or operating system‑specific features may break during upgrades. | | Specific (e.g., `Ubuntu Linux 22.04`)| The version of the operating system and dependencies are fixed and can be relied upon. 
| When a worker image is marked as deprecated, warnings will start to appear in your deployment logs; when it is decommissioned, you will need to update your worker pool or deployments will fail. | ### Deprecation When an image is marked as deprecated, you will see warnings in the Octopus UI, and in the deployment log. After a suitable deprecation period, deployments will start to fail if they target an image that has hit end-of-life. When you start getting warnings in your deployments and/or see deprecation warnings in the Octopus portal, please plan to modify your worker pool to use a different image and test your scripts on the new image. If your Worker Pool is set to use the Operating System default, for example, `Ubuntu (default)`, the default will be swapped over to a new Operating System version by Octopus Deploy. Your deployments and runbooks will automatically use the new version. You should validate that your deployments and runbooks work with the new version prior to the cutover date. The new image will be made available prior to the cutover date, and we will notify you of the cutover date to give you time to undertake any required testing. ### Modifying the worker pool If the worker pool has been configured to specifically use a deprecated worker type, you will need to update the worker image on the worker pool. Navigate to your space, then go to **Infrastructure ➜ Worker Pools**. Worker pools with a deprecated worker image will show a `Deprecated` label next to the worker pool. Click the overflow menu (`...`) and click **Edit**. You can then select a new worker image from the dropdown list, such as `Ubuntu (default)` or a specific operating system version. ## Available Dynamic Worker Images Worker images are rebuilt on a regular basis, so that the operating system is up to date with the latest security patches. ### Ubuntu 22.04 This is the default for the Ubuntu operating system, referenced as `Ubuntu (default)`. 
Each `Ubuntu Server 22.04` worker is provisioned with a baseline of tools including (but not limited to): - .NET (8.0, 6.0) - Docker (latest) - PowerShell Core (latest) - Python 3 (latest) - GCloud CLI (550.0.0) :::div{.hint} Ubuntu workers are designed to use [execution worker containers](https://octopus.com/blog/execution-containers) for tooling like `kubectl` and `helm`. This makes it much easier to choose the appropriate runtime environment with the tools you need for your use case. ::: ### Windows Server Core 2022 This is the default for the Windows operating system, referenced as `Windows (default)`. Each `Windows Server Core 2022` worker is provisioned with a baseline of tools including (but not limited to): - .NET (8.0, 6.0) - .NET Framework 3.5 - .NET Framework 4.8 - AWS IAM Authenticator (0.7.10) - Chocolatey (latest) - Docker (latest) - Helm (3.19.4) - Kubectl (multiple versions) - Microsoft Service Fabric (10.1.2338.9590) - Microsoft Service Fabric SDK (7.1.2338) - Nuget CLI (latest) - Octopus Client (latest) - Pip (latest) - PowerShell Core (latest) - Python (3.14) - GCloud CLI (550.0.0) Windows 2022 workers are capable of running [execution worker containers](/docs/projects/steps/execution-containers-for-workers). :::div{.hint} We recommend execution containers as the preferred option for steps requiring external tools. This allows you to control which version of the tools is used, so your scripts can rely on a specific, compatible version and function correctly. ::: ## kubectl on Windows Images Windows dynamic worker images come with many versions of `kubectl` available. 
A specific version can be used by [specifying a custom kubectl location](/docs/deployments/kubernetes/kubectl) of `c:\tools\kubectl\{{version}}\kubectl.exe`, where `{{version}}` is one of the following: - `1.32.12` - `1.33.8` - `1.34.4` - `1.35.1` ## Installing Software On Dynamic Workers Octopus does not recommend installing additional software on Dynamic Workers. By default, every dynamic worker is destroyed after it has been idle for 60 minutes or allocated for over 72 hours. Additionally, Octopus cannot guarantee that the dynamic worker leased to run one step will be the same worker leased to other executing steps in a deployment or runbook run. For deployments and runbook runs that require additional software dependencies on a dynamic worker, our recommendation is to leverage [execution containers for workers](/docs/projects/steps/execution-containers-for-workers). Octopus provides execution containers with a baseline of tools (`octopusdeploy/worker-tools`) pre-installed. These tools won't include every possible software combination you might need. If you require a specific set of software and tooling, we recommend [building your own custom Docker images for use with execution containers](/docs/projects/steps/execution-containers-for-workers/#custom-docker-images). :::div{.hint} **Octopus worker-tools are cached on Dynamic Workers** The `octopusdeploy/worker-tools` images provided for the execution containers feature cache the three latest Ubuntu and two latest Windows [Worker Tools](/docs/infrastructure/workers/worker-tools-versioning-and-caching) images on a dynamic worker when it's created. This makes them an excellent choice over installing additional software on a dynamic worker. ::: If you choose to install additional software on a dynamic worker, you are responsible for: - Ensuring that software is installed at the start of each deployment or runbook run. - Writing the necessary scripts to download and install that software. 
- Verifying the latest version of the software works with the latest security patches of the host OS. - Handling any issues that arise if a different dynamic worker is leased to different steps in your deployment or runbook run. ## Learn more - [Worker blog posts](https://octopus.com/blog/tag/workers/1) - [Worker Tools, versioning and caching](/docs/infrastructure/workers/worker-tools-versioning-and-caching) # Ubuntu 18.04 End-of-life Source: https://octopus.com/docs/infrastructure/workers/dynamic-worker-pools/ubuntu-1804-end-of-life.md :::div{.warning} Ubuntu 18.04 images are no longer available as of 3 April 2023. The details below are provided for historical reference. ::: Our Ubuntu dynamic workers are being upgraded to use Ubuntu 22.04. This upgrade will result in breaking changes for users of the gcloud CLI, users of .NET Core 2.1/3.1, and users of Ubuntu 18.04 capabilities that are not offered by the updated replacements. ## What is changing? Due to the deprecation of Ubuntu 18.04, we are upgrading our dynamic workers to use Ubuntu 22.04. This change has also required upgrades of: * gcloud CLI from 339.0.0 to 367.0.0; and * .NET Core 2.1/3.1 to .NET 6. ## Who will be impacted? Users of Octopus Cloud using Ubuntu workers and running custom scripts or community steps may be impacted, as there are breaking changes between Ubuntu 18.04 and Ubuntu 22.04, and breaking changes between .NET Core 2.1/3.1 and .NET 6. Cloud customers impacted by the GCloud CLI update will be those with a deployment process which: * Has a `Run gcloud in a Script` step, which runs on the `Hosted Ubuntu` Worker Pool, which does not use an `execution container`, and the script contains calls to `gcloud`; **OR** * Has a `Run a Script` step, which runs on the `Hosted Ubuntu` Worker Pool, and the script contains calls to `gcloud`. ### What do I need to do? 
Any impacted custom scripts will need to be updated to use Ubuntu 22.04 and tested to ensure your deployment process has not been impacted by the breaking changes. To mitigate the risk in this process we will be releasing the updated dynamic worker before the deprecation date so users can test against the new workers prior to migration. Please see the timeline below for the details. **Note:** All Octopus Deploy steps will work under Ubuntu 22.04 but some community steps may be impacted. ## Timeline **Update 7 February 2023** The `Ubuntu 22.04` image can be found within the configuration of a worker pool: :::figure ![Ubuntu 22.04 in worker image list](/docs/img/infrastructure/workers/dynamic-worker-pools/images/ubuntu-2204-worker-image-list.png) ::: **Octopus preparation** | Date | Details | |---------------|:--------------------------------------------------------------| | Q4 2022 | Octopus will produce and test an Ubuntu 22.04 worker image | | Jan 2023 | Internal testing of existing tooling to confirm compatibility | **Customer action required** | Date | Details | |-----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 Feb 2023 | Ubuntu 22.04 dynamic worker will be made available for customers.
  • Customers should test their impacted deployments and runbooks on an Ubuntu 22.04 worker with the aim of completing testing by the 15th of March 2023
| | 15 Mar 2023 | Octopus will switch over the default "Hosted Ubuntu" worker pool to use the Ubuntu 22.04 worker.
  • If customers experience failed deployments or runbooks, they will be able to select the Ubuntu 18.04 worker until 3 April 2023 while they resolve any issues with running on an Ubuntu 22.04 worker | | 3 Apr 2023 | Ubuntu 18.04 dynamic workers will no longer be available on Octopus Cloud. | ## FAQ ### Why the deadline of 3 April 2023? Ubuntu 18.04 exits LTS support on that date and will no longer receive patches, including fixes for security vulnerabilities. Consequently, Octopus will not provide an unsupported Dynamic worker image. ### What are the breaking changes between Ubuntu releases? It is not possible to give a complete and definitive answer as this depends on your use cases. Therefore, please refer to the following release notes: * [18.04 to 20.04 release notes](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) * [20.04 to 22.04 release notes](https://discourse.ubuntu.com/t/jammy-jellyfish-release-notes/24668) ### What are the breaking changes between the .NET releases? It is not possible to give a complete and definitive answer as this depends on your use cases. Therefore, please refer to the following release notes: * [.NET Core 3.1 release notes](https://github.com/dotnet/core/tree/main/release-notes/3.1) * [.NET 5 release notes](https://github.com/dotnet/core/tree/main/release-notes/5.0) * [.NET 6 release notes](https://github.com/dotnet/core/tree/main/release-notes/6.0) * [.NET release types](https://learn.microsoft.com/en-us/dotnet/core/releases-and-support) ### What if I experience a breaking change but I can't remediate it in time? You can provision your own worker running Ubuntu 18.04 and select its worker pool for the deployment processes that experience the breaking change. ### Why is GCloud CLI part of this notification? Ubuntu 22.04 requires a later version of GCloud CLI. We have selected the earliest version of the GCloud CLI that is compatible with Ubuntu 22.04 to minimize the number of breaking changes we expose our customers to. 
Customers can use the [GCloud 367.0.0 Release Notes](https://cloud.google.com/sdk/docs/release-notes#36700_2021-12-14) to assess whether their GCloud script steps are impacted by the breaking changes between GCloud versions 339.0.0 and 367.0.0. ### Are the Windows dynamic workers affected in any way? This change does not impact the Windows dynamic workers. ### How does this affect Execution Containers? Although Ubuntu 18.04 Docker images, along with [Worker Tools](/docs/infrastructure/workers/worker-tools-versioning-and-caching), can still operate on Ubuntu 22.04 dynamic workers, we will no longer provide support for the ubuntu.18.04 Worker Tools. Instead, we have introduced a new [ubuntu.22.04](https://hub.docker.com/r/octopusdeploy/worker-tools/tags?page=1&name=22.04) image, which is recommended moving forward. # Windows 2019 End-of-life Source: https://octopus.com/docs/infrastructure/workers/dynamic-worker-pools/windows-2019-end-of-life.md :::div{.warning} Windows Server 2019 images are no longer available as of 9 January 2024. The details below are provided for historical reference. ::: Our Windows Server 2019 Dynamic Workers are being upgraded to use Windows Server 2022. This may result in breaking changes for users of community steps or custom scripts. ## What is changing? Due to LTSC support for Windows Server 2019 ending on 9 January 2024, we are upgrading our dynamic workers to use Windows Server 2022. ## Who will be impacted? Users of Octopus Cloud using Windows Dynamic Workers (`Windows (default)` and `Windows Server Core 2019` images) and running custom scripts or community steps may be impacted, as there are **breaking changes between Windows 2019 and Windows 2022**. Should any additional components be identified as having breaking changes, we will endeavour to inform you via email and Octopus community Slack. 
Steps running execution containers on Windows Dynamic Workers may also be impacted, as Windows containers can generally only run when the container base image OS version matches the host OS version. This means the Windows 2019 container image you are currently using will likely fail to run on a Windows 2022 Dynamic Worker. **Note:** All Octopus Deploy steps will work under Windows 2022, but some community and custom steps may be impacted. ## What do I need to do? To mitigate the risk in this process, we will be releasing Windows 2022 Dynamic Workers before the deprecation date so users can test against the new workers prior to deprecation. Please see the timeline below for the details. If you are running custom scripts, using community steps, or using execution containers on Windows workers, we recommend following the [migration guide](#migration-guide) below to test your deployments on Windows 2022 Dynamic Workers. ## Alternate (recommended) course of action Unless you have a specific need for a Windows Dynamic Worker, we recommend considering a change to an Ubuntu 22.04-based Dynamic Worker, as Ubuntu 22.04 Dynamic Workers are more performant. Built-in steps work on both Ubuntu and Windows Dynamic Workers, with the exception of Windows-specific steps. Community steps and custom step templates would also need testing. ## Timeline **Octopus preparation** | Date | Details | |---------------|:--------------------------------------------------------------------| | Oct 2023 | Octopus will produce and test a Windows 2022 Dynamic Worker image. 
| **Customer action required** | Date | Details | |-----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 30 Oct 2023 | Windows 2022 Dynamic Worker will be made available for customers.
    • Customers should test their impacted deployments and runbooks on a Windows 2022 Dynamic Worker with the aim of completing testing by the 4th of December 2023
    | | 4 Dec 2023 | Octopus will update the `Windows (default)` image to resolve to Windows Server Core 2022.
    • If customers experience failed deployments or runbooks, they will be able to select the `Windows Server Core 2019` worker image until the 9th of January 2024 while they resolve any issues with running on a Windows 2022 Dynamic Worker | | 9 Jan 2024 | Windows 2019 Dynamic Workers will no longer be available on Octopus Cloud. | ## Migration Guide 1. For each Space on your Cloud instance, find the Windows Dynamic Worker Pool. The Worker Pool name is usually either `Hosted Windows` or `Default Worker Pool`. Make a note of the Worker Pool name. :::figure ![Windows Worker Pool](/docs/img/infrastructure/workers/dynamic-worker-pools/images/windows-2019-eol-windows-pool.png) ::: 1. For each deployment step, check whether it runs on the Windows Dynamic Worker Pool you noted in Step 1. :::figure ![Worker Pool Selection](/docs/img/infrastructure/workers/dynamic-worker-pools/images/windows-2019-eol-step-worker-pool.png) ::: 1. Create a temporary Dynamic Worker Pool targeting the `Windows Server Core 2022` image. :::figure ![Worker Pool Selection](/docs/img/infrastructure/workers/dynamic-worker-pools/images/windows-2019-eol-windows-2022-pool.png) ::: 1. Open the deployment process for your project as well as any Runbooks. Make note of any steps using execution containers (these will display a `Runs in a container` chip) as these will need additional updates. :::figure ![Deployment Process](/docs/img/infrastructure/workers/dynamic-worker-pools/images/windows-2019-eol-deployment-process.png) ::: 1. For each step that runs on a Windows Dynamic Worker Pool - Change its Worker Pool to the new Windows 2022 Worker Pool you created in Step 3. - If the step runs in an execution container, change the container image to the Windows 2022 image that corresponds to your current Windows 2019 image. If your image is [multi-platform](https://docs.docker.com/build/building/multi-platform/), it's still prudent to check that the image still works as expected under Windows 2022. 
- Your step should look something like this: :::figure ![Worker Pool Selection](/docs/img/infrastructure/workers/dynamic-worker-pools/images/windows-2019-eol-step-container-image.png) ::: 1. Test your deployment by deploying a new Release of your project (or a Snapshot for a Runbook). ### Optional cleanup after 9 January 2024 To avoid having two Worker Pools that both yield the same Workers, you can restore the steps back to using the original Windows Dynamic Worker Pool: 1. For each step that you migrated, change the Worker Pool back to the original Windows Dynamic Worker Pool, which should now be running Windows 2022 Dynamic Workers. 1. Once no steps are using the temporary Windows 2022 Worker Pool, you can delete the temporary Worker Pool. ## FAQ ### Why the deadline of 9 January 2024? Windows 2019 exits LTSC support on that date and will no longer receive patches, including fixes for security vulnerabilities. Consequently, Octopus will not provide an unsupported Dynamic Worker image. ### What are the breaking changes between Windows 2019 and Windows 2022 releases? It is not possible to give a complete and definitive answer as this depends on your use cases. Therefore, please refer to the following documents: - [Comparison of Standard, Datacenter, and Datacenter: Azure Edition editions of Windows Server 2022](https://learn.microsoft.com/en-us/windows-server/get-started/editions-comparison-windows-server-2022?tabs=full-comparison) - [What's new in Windows Server 2019](https://learn.microsoft.com/en-us/windows-server/get-started/whats-new-in-windows-server-2019) - [What's new in Windows Server 2022](https://learn.microsoft.com/en-us/windows-server/get-started/whats-new-in-windows-server-2022) - [Features removed or no longer developed starting with Windows Server 2022](https://learn.microsoft.com/en-us/windows-server/get-started/removed-deprecated-features-windows-server-2022) ### What if I experience a breaking change but I can’t remediate it in time? 
You can provision your own worker running Windows Server 2019 and select its worker pool for the deployment processes that experience the breaking change. ### How does this affect Execution Containers? Windows containers can generally only run when the container base image OS version matches the host OS version. Please follow the [migration guide](#migration-guide) to make the transition as smooth as possible. ### Are the Ubuntu 22.04 Dynamic Workers affected in any way? This change does not impact the Ubuntu 22.04 Dynamic Workers. # Automated Installation Source: https://octopus.com/docs/infrastructure/workers/kubernetes-worker/automated-installation.md ## Automated installation via Terraform The Kubernetes Worker can be installed and managed using a combination of the [Helm chart >= v2.2.1](https://hub.docker.com/r/octopusdeploy/kubernetes-agent), [Octopus Deploy >= v0.30.0 Terraform provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest) and/or [Helm Terraform provider](https://registry.terraform.io/providers/hashicorp/helm). ### Octopus Deploy & Helm Using a combination of the Octopus Deploy and Helm providers, you can completely manage the Kubernetes Worker via Terraform. 
:::div{.info} To ensure that the Kubernetes Worker is correctly installed in Octopus, certain criteria must hold for the following Terraform resource properties: | **Kubernetes Worker resource** | | **Helm resource (chart value)** | | -------------------------------------------------- | --------------------------------------------------------- | ------------------------------- | | `octopusdeploy_kubernetes_agent_worker.name` | must be the same value as | `agent.name` | | `octopusdeploy_kubernetes_agent_worker.uri` | must be the same value as | `agent.serverSubscriptionId` | | `octopusdeploy_kubernetes_agent_worker.thumbprint` | is the thumbprint calculated from the certificate used in | `agent.certificate` | ::: :::div{.warning} Always specify the major version in the **version** property on the **helm_release** resource (e.g. `version = "2.*.*"`) to prevent Terraform from defaulting to the latest Helm chart version. This is important, as a newer major version of the Kubernetes Worker Helm chart could introduce breaking changes. When upgrading to a new major version of the Kubernetes Worker, create a separate resource to ensure the Helm values match the updated schema. [Automatic upgrade support](/docs/kubernetes/targets/kubernetes-agent/upgrading#automatic-updates-coming-in-20234) is expected in version 2023.4. 
::: ```ruby terraform { required_providers { octopusdeploy = { source = "OctopusDeployLabs/octopusdeploy" version = "0.30.0" } helm = { source = "hashicorp/helm" version = "2.13.2" } } } locals { octopus_api_key = "API-XXXXXXXXXXXXXXXX" octopus_address = "https://myinstance.octopus.app" octopus_polling_address = "https://polling.myinstance.octopus.app" } provider "helm" { kubernetes { # Configure authentication for me } } provider "octopusdeploy" { address = local.octopus_address api_key = local.octopus_api_key } resource "octopusdeploy_space" "worker_space" { name = "worker space" space_managers_teams = ["teams-everyone"] } resource "octopusdeploy_polling_subscription_id" "agent_subscription_id" {} resource "octopusdeploy_tentacle_certificate" "agent_cert" {} resource "octopusdeploy_static_worker_pool" "workerpool_example" { name = "Example" space_id = octopusdeploy_space.worker_space.id description = "An example worker pool" is_default = false sort_order = 3 } resource "octopusdeploy_kubernetes_agent_worker" "worker" { name = "worker-one" space_id = octopusdeploy_space.worker_space.id worker_pool_ids = [octopusdeploy_static_worker_pool.workerpool_example.id] thumbprint = octopusdeploy_tentacle_certificate.agent_cert.thumbprint uri = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri } resource "helm_release" "kubernetes_worker" { name = "octopus-kubernetes-worker-release" repository = "oci://registry-1.docker.io" chart = "octopusdeploy/kubernetes-agent" version = "2.*.*" atomic = true create_namespace = true namespace = "octopus-agent-worker" set { name = "agent.acceptEula" value = "Y" } set { name = "agent.name" value = octopusdeploy_kubernetes_agent_worker.worker.name } set_sensitive { name = "agent.serverApiKey" value = local.octopus_api_key } set { name = "agent.serverUrl" value = local.octopus_address } set { name = "agent.serverCommsAddress" value = local.octopus_polling_address } set { name = "agent.serverSubscriptionId" value = 
octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri } set_sensitive { name = "agent.certificate" value = octopusdeploy_tentacle_certificate.agent_cert.base64 } set { name = "agent.space" value = octopusdeploy_space.worker_space.name } set { name = "agent.worker.enabled" value = "true" } set_list { name = "agent.worker.initial.workerPools" value = octopusdeploy_kubernetes_agent_worker.worker.worker_pool_ids } } ``` ### Helm The Kubernetes Worker can be installed using just the Helm provider alone. However, the associated worker that is created in Octopus cannot be managed solely using the Helm provider. This is because the Helm chart values relating to the worker are only used on initial installation. Any further modifications to them will not trigger an update to the worker unless you perform a complete reinstall of the worker. If you don't intend to manage the Kubernetes Worker configuration through Terraform (choosing to handle it via the Octopus Portal or API instead), this option will be beneficial to you as it is simpler to set up. 
```ruby terraform { required_providers { helm = { source = "hashicorp/helm" version = "2.13.2" } } } provider "helm" { kubernetes { # Configure authentication for me } } locals { octopus_api_key = "API-XXXXXXXXXXXXXXXX" octopus_address = "https://myinstance.octopus.app" octopus_polling_address = "https://polling.myinstance.octopus.app" } resource "helm_release" "kubernetes_worker" { name = "octopus-kubernetes-worker-release" repository = "oci://registry-1.docker.io" chart = "octopusdeploy/kubernetes-agent" version = "2.*.*" atomic = true create_namespace = true namespace = "octopus-agent-worker" set { name = "agent.acceptEula" value = "Y" } set { name = "agent.name" value = "octopus-worker" } set_sensitive { name = "agent.serverApiKey" value = local.octopus_api_key } set { name = "agent.serverUrl" value = local.octopus_address } set { name = "agent.serverCommsAddress" value = local.octopus_polling_address } set { name = "agent.space" value = "Default" } set { name = "agent.worker.enabled" value = "true" } set_list { name = "agent.worker.initial.workerPools" value = ["WorkerPools-1"] } } ``` # Worker Tools, Versioning and Caching Source: https://octopus.com/docs/infrastructure/workers/worker-tools-versioning-and-caching.md Worker Tools are a set of Docker images used as [execution containers for workers](https://octopus.com/docs/projects/steps/execution-containers-for-workers) to run deployment processes. Worker Tools include a wide range of software tools to support most deployment scenarios out of the box. This page focuses on how we create these Worker Tool images, version, cache on workers, and release them. ## Versioning Worker Tools Worker Tool images follow a semantic versioning (SemVer) approach of `Major.Minor.Patch-Distro` for their tag format. 
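As an illustration of this tag grammar (including the floating `Major-Distro`, `Major.Minor-Distro`, and distro-only variants described below), a tag can be decomposed with a small parser. This is a sketch only: the tag values in the examples are hypothetical and not guaranteed to exist on Docker Hub.

```python
import re

# Worker Tools tags follow Major.Minor.Patch-Distro; floating variants drop
# trailing version components, down to the bare distribution tag. The two
# distributions match the current images named on this page.
TAG_PATTERN = re.compile(
    r"^(?:(?P<major>\d+)(?:\.(?P<minor>\d+)(?:\.(?P<patch>\d+))?)?-)?"
    r"(?P<distro>ubuntu\.22\.04|windows\.ltsc2022)$"
)

def parse_worker_tools_tag(tag):
    """Split a worker-tools tag into (major, minor, patch, distro).

    Floating (unpinned) components come back as None; raises ValueError
    for anything that doesn't fit the tag grammar.
    """
    match = TAG_PATTERN.match(tag)
    if match is None:
        raise ValueError(f"not a recognized worker-tools tag: {tag!r}")
    part = lambda name: int(match[name]) if match[name] is not None else None
    return part("major"), part("minor"), part("patch"), match["distro"]

# Hypothetical tag values, for illustration only:
print(parse_worker_tools_tag("3.3.2-ubuntu.22.04"))   # fully qualified
print(parse_worker_tools_tag("3-windows.ltsc2022"))   # floats on minor/patch
print(parse_worker_tools_tag("ubuntu.22.04"))         # latest release overall
```

The fully qualified form is the only one in which every component is pinned, which is why the recommendation below favors it when your steps depend on specific tool versions.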
When we release a new version of Worker Tools to the [Worker Tools Docker Hub repository](https://hub.docker.com/r/octopusdeploy/worker-tools/tags), we also add the following image tags: the distribution alone (`ubuntu.22.04` or `windows.ltsc2022`), `Major-Distro` (e.g. `3-Distro`), and `Major.Minor-Distro` (e.g. `3.3-Distro`). We recommend using the fully qualified SemVer, as patch updates of Worker Tools could result in an updated tool dependency introducing a breaking change. The Worker Tools Dockerfiles use a combination of tools pinned to specific versions, such as CLI tools and Frameworks, while other tools pull their latest available release. For Ubuntu, these are pulled with `apt-get`, and for Windows, with Chocolatey. You can find the full details of these tools in the Dockerfiles for [Windows](https://github.com/OctopusDeploy/WorkerTools/blob/master/windows.ltsc2022/Dockerfile) and [Ubuntu](https://github.com/OctopusDeploy/WorkerTools/blob/master/ubuntu.22.04/Dockerfile) Worker Tools. The tools pulling their latest releases for Ubuntu include `wget`, `python3-pip`, `groff`, `unzip`, `apt-utils`, `curl`, `software-properties-common`, `jq`, `yq`, `openssh-client`, `rsync`, `git`, `augeas-tools`, `maven`, `gradle`, `Node 14`, `istioctl`, `linkerd`, `umoci`. For Windows, these include `chocolatey` and `dotnet 6.0.*`. If your steps depend on any of these packages, we recommend you target the fully qualified SemVer of Worker Tools. Otherwise, our additional Worker Tools tags may be suitable for your use case. We version our releases as follows: Major update - Update of a pinned tool's major version - Any update of a pinned tool with a 0 major version, i.e. `0.*.*` - Removal of a tool Minor update - Update of a pinned tool's minor version - Addition of a new tool Patch update - Update of a pinned tool's patch version - With any new release, the latest-release tools are updated automatically In short, we recommend using the full `octopusdeploy/worker-tools:Major.Minor.Patch-Distro` tag format. 
Depending on your use case, the latest-release tags (`octopusdeploy/worker-tools:ubuntu.22.04` and `octopusdeploy/worker-tools:windows.ltsc2022`), or the partially pinned `octopusdeploy/worker-tools:Major-Distro` and `octopusdeploy/worker-tools:Major.Minor-Distro` tags, may be suitable for you. ## Caching Worker Tools Worker Tools are cached on dynamic workers to help improve the performance of deployments. Windows workers cache the latest two sets of Worker Tools, while Ubuntu workers cache the latest three. To understand this cache, it's important to understand a worker's life cycle. Workers are acquired from a dynamic worker pool and leased to a single cloud instance. They are allocated in a round-robin fashion to individual deployment steps, storing packages, Docker images, and other data on disk. Workers are destroyed once they have either been idle for 60 minutes or existed for 72 hours (3 days). When a new worker is acquired after a new set of Worker Tools has been released, the worker will no longer have the oldest version of Worker Tools, nor any other images that were pulled on the old worker. This is important for the performance of deployments, as pull times for uncached Worker Tools are ~1.5 minutes for Ubuntu and ~20 minutes for Windows. We recommend updating to the latest set of Worker Tools available to avoid these pull times. Because multiple versions of Worker Tools are cached, using the latest version means a new Worker Tools release will not degrade deployment performance. To update to the latest set of Worker Tools, select the "Use latest Distro-based image" option: :::figure ![The container image settings](/docs/img/infrastructure/workers/images/container-selector.png) ::: ## Currently Cached Worker Tools **Octopus worker-tools cached on Dynamic Workers** The `octopusdeploy/worker-tools` images provided for the execution containers feature cache the three latest Ubuntu and two latest Windows images on a Dynamic Worker when it's created. 
This makes them an excellent choice over installing additional software on a Dynamic Worker. [View the latest versions of worker tools on DockerHub](https://hub.docker.com/r/octopusdeploy/worker-tools). Using non-cached versions of these worker-tools can result in long downloads. ## Learn more - [Worker blog posts](https://octopus.com/blog/tag/workers/1) - [Custom Docker images](/docs/projects/steps/execution-containers-for-workers/#custom-docker-images) # Configure and apply Kubernetes resources Source: https://octopus.com/docs/kubernetes/steps/kubernetes-resources.md Octopus supports the deployment of Kubernetes resources through the `Configure and apply Kubernetes resources` step. This step exposes a UI that builds up a [Kubernetes Deployment resource](https://oc.to/KubernetesDeploymentResource), a [Service resource](https://oc.to/KubernetesServiceResource), and an [Ingress resource](https://oc.to/KubernetesIngressResource). The combination of these resources represents an opinionated view about what makes up a typical Kubernetes deployment. ## Configure and apply Kubernetes resources step To begin, add the `Configure and apply Kubernetes resources` step to a project. This step has three important sections that make up the combined objects that are deployed to Kubernetes. The first section is the `Deployment`. This section is used to build up the [Deployment resource](https://oc.to/KubernetesDeploymentResource). The second section is the `Service`. This section is used to build a [Service resource](https://oc.to/KubernetesServiceResource). The third section is the `Ingress`. This section is used to build an [Ingress resource](https://oc.to/KubernetesIngressResource). :::figure ![Deploy Container Resources](/docs/deployments/kubernetes/deploy-container/deploy-container.svg) ::: :::div{.hint} Kubernetes terminology overlaps with a number of general concepts in Octopus. 
For example, Kubernetes has the notion of a Deployment, which is distinct from the act of performing a deployment in Octopus. To distinguish between Kubernetes and Octopus terminology, we will reference Kubernetes "resources" e.g. a Deployment resource or Pod resource.
:::

### Deployment

A Deployment resource provides a declarative interface for a [Pod resource](https://oc.to/KubernetesPodResource) and a [ReplicaSet resource](https://oc.to/KubernetesReplicaSetResource). A Pod resource in turn configures one or more [Container resources](https://oc.to/KubernetesContainer). Container resources reference a Docker container image and provide all the additional configuration required for Kubernetes to deploy, run, expose, monitor, and secure the Docker container. A ReplicaSet resource monitors the Pod resources to ensure that the required number of instances are running.

### Deployment name

Each Deployment resource requires a unique `Deployment Name`. Kubernetes resources are identified by their names, so the name must be unique in the target [namespace](https://oc.to/KubernetesNamespace).

When using the blue/green deployment strategy, the name entered in this field will be used as the base for the Deployment resource name. The Octopus deployment ID will then be appended to the name to ensure the blue and green Deployment resources have unique names.

### Replicas

The desired number of Pod resources is set in the `Replicas` field. This is the number of replicas maintained by the ReplicaSet resource. This field is optional, and will default to a value of `1`.

### Revision history limit

The number of revisions of the resource that Kubernetes will maintain is set in the `Revision history limit` field. This field was added in Octopus 2020.6.

### Progression deadline

An optional value that defines the maximum time in seconds for a deployment to make progress before it is considered to have failed.
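Taken together, the fields above map onto a Deployment manifest along these lines (a minimal sketch; the name and values are illustrative, not generated by Octopus):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application           # Deployment Name
spec:
  replicas: 3                    # Replicas (defaults to 1)
  revisionHistoryLimit: 5        # Revision history limit
  progressDeadlineSeconds: 600   # Progression deadline
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: web
          image: nginx:1.25      # illustrative image
```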
If this value is not specified, it will default to `600` seconds (or 10 minutes). This value affects [Blue/Green deployments](#blue-green-deployment-strategy), which will point the service to the new deployment only after the new deployment has succeeded.

### Pod termination grace period

An optional value that defines how long Kubernetes will wait for the Pod resource to shut down before it is killed. See the [Kubernetes documentation](https://oc.to/KubernetesPodTermination) for more details.

### Add label

Labels are custom key/value pairs that are assigned to Kubernetes resources. The labels defined in the `Deployment` section are applied to the Deployment, Pod, Service, Ingress, ConfigMap and Secret resources. The labels are optional, as Octopus will automatically add the labels required to manage the Kubernetes resources created as part of this step.

### Completions

:::div{.hint}
This field is used when creating Kubernetes `Job` resources only.
:::

`completions` is an optional value that specifies how many pods must run to completion, one after the other, for the job to be considered complete.

### Parallelism

:::div{.hint}
This field is used when creating Kubernetes `Job` resources only.
:::

`parallelism` is an optional value that specifies how many pods should run in parallel when the job is started.

### Backoff limit

:::div{.hint}
This field is used when creating Kubernetes `Job` resources only.
:::

`backoffLimit` is an optional value that limits the number of times pods are recreated when the job fails.

### Active deadline seconds

:::div{.hint}
This field is used when creating Kubernetes `Job` resources only.
:::

`activeDeadlineSeconds` is an optional value that determines how many seconds the job may run. The job will be terminated if it runs longer than the time provided in this field.

### TTL Seconds After Finished

:::div{.hint}
This field is used when creating Kubernetes `Job` resources only.
:::

`ttlSecondsAfterFinished` is an optional value that specifies when the job should be cleaned up after it has executed. This is handled by the `TTL Controller`. When the TTL controller cleans up a resource, it will cascade-delete, which means it deletes its dependent objects together with it.

### Deployment strategy

Kubernetes exposes two native deployment strategies: [Recreate](https://oc.to/KubernetesRecreateStrategy) and [Rolling Update](https://oc.to/KubernetesRollingStrategy). When deploying containers with this step, Octopus supports a third deployment strategy called blue/green.

:::div{.hint}
Deployment strategies are not applicable to Kubernetes `Job` resources.
:::

### Recreate deployment strategy

The first native deployment strategy is the [Recreate](https://oc.to/KubernetesRecreateStrategy) deployment. This strategy will kill all existing Pod resources before new Pod resources are created. This means that only one Pod resource version is exposed at any time. This can result in downtime before the new Pod resources are fully deployed.

### Rolling update deployment strategy

The second native deployment strategy is the [Rolling Update](https://oc.to/KubernetesRollingStrategy) deployment. This strategy will incrementally replace old Pod resources with new ones. This means that two Pod resource versions can be deployed and accessible at the same time, but the replacement can be performed in a way that results in no downtime.

### Blue/Green deployment strategy {#blue-green-deployment-strategy}

The third deployment strategy, Blue/Green, is not a native concept in Kubernetes. It is a deployment strategy that can be achieved by the `Configure and apply Kubernetes resources` step because the step creates and coordinates both the Deployment resource and the Service resources.

The Blue/Green deployment strategy involves four phases.

#### Phase 1

The first phase is the state of the existing Deployment and Service resources.
If a previous Octopus deployment was performed, there will be both a Deployment and a Service resource created in Kubernetes. These resources have labels like `Octopus.Step.Id` and `Octopus.Deployment.Id` that identify the Octopus step and specific deployment that created the resources (these labels are added automatically by Octopus). This existing Deployment resource is considered to be the green half of the blue/green deployment.

:::figure
![Phase 1](/docs/deployments/kubernetes/deploy-container/phase1.svg)
:::

#### Phase 2

The second phase involves creating the new Deployment resource. This new resource is considered to be the blue half of the blue/green deployment.

It is important to note that the new Deployment resource is a completely new resource in Kubernetes. The existing green Deployment resource is not updated. Because the names of distinct resources must be unique in Kubernetes, Octopus will append the Octopus deployment ID to the Deployment resource name. So if the Deployment resource name was defined as `my-application` in the step, the resulting Deployment resource name would look something like `my-application-deployment-1232`.

At the end of Phase 2 there are three resources in Kubernetes: the green Deployment resource, the blue Deployment resource, and the Service resource, which is still pointing at the green Deployment resource.

:::figure
![Phase 2](/docs/deployments/kubernetes/deploy-container/phase2.svg)
:::

#### Phase 3

The third phase involves waiting for the blue Deployment resource to be ready. Octopus executes the command `kubectl rollout status "deployment/blue-deployment-name"`, which will wait until the newly created blue Deployment resource is ready. For a Deployment resource to be considered ready, it must have been successfully created, and any Container resource [readiness probes](https://oc.to/KubernetesProbes) must have successfully completed.
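For illustration, the metadata Octopus gives the blue Deployment resource might look along these lines (a sketch only; the label values shown are hypothetical placeholders, not real Octopus ID formats):

```yaml
metadata:
  name: my-application-deployment-1232   # base name plus Octopus deployment ID
  labels:
    Octopus.Step.Id: step-example-id           # hypothetical value
    Octopus.Deployment.Id: deployment-1232     # hypothetical value
```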
:::div{.hint}
The [progression deadline](#progression-deadline) field can be used to limit how long Kubernetes will wait for a deployment to be successful.
:::

If the Deployment resource was successfully created, we move to phase 4. If the Deployment resource was not successfully created, the deployment process stops with an error and leaves the service pointing to the green Deployment resource.

:::figure
![Phase 3](/docs/deployments/kubernetes/deploy-container/phase3.svg)
:::

#### Phase 4

If the Deployment resource was successfully created, Octopus will execute the final phase, which involves pointing the Service resource to the blue Deployment resource and deleting any old Deployment resources.

At the beginning of Phase 4 there are three resources in Kubernetes: the green Deployment resource, the blue Deployment resource (now completely deployed and ready to accept traffic), and the Service resource, which is still pointing at the green Deployment resource.

Octopus now updates the Service resource to direct traffic to the blue Deployment resource. Once the Service resource is updated, any old Deployment, ConfigMap and Secret resources are deleted. Old resources are defined as any Deployment resource with `Octopus.Step.Id`, `Octopus.Environment.Id` and `Octopus.Deployment.Tenant.Id` labels that match the Octopus step that was just deployed, and an `Octopus.Deployment.Id` label that does not match the ID of the deployment that was just completed.

:::div{.hint}
If the deployment fails at phase 3, the Kubernetes cluster can be left with multiple Deployment resources in a failed state. Because Deployment resources with an `Octopus.Deployment.Id` label that does not match the current deployment are deleted in phase 4, a successful deployment will remove all previously created Deployment resource objects. This means failed deployments can be retried, and once successful, all previous Deployment resources will be cleaned up.
:::

:::figure
![Phase 4](/docs/deployments/kubernetes/deploy-container/phase4.svg)
:::

#### Deployment strategy summary

The choice of which deployment strategy to use is influenced by a number of factors:

1. Does the deployment require no downtime?
2. Can multiple versions of the Deployment resource coexist, even if different versions can not receive external traffic? This may not be possible if the act of deploying a new Deployment resource results in incompatible database upgrades.
3. Can multiple versions of the Deployment resource accept traffic at the same time? This may not be possible if APIs have changed in ways that are not backward compatible.
4. Do the Container resources reference other resources that can not be shared? Container resources may reference resources like volume claims that can not be mounted in multiple containers.

| Strategy | No Downtime | Multiple Deployed Versions | Multiple Accessible Versions | Require Shared Resources |
|-|-|-|-|-|
| Recreate | | | | |
| Rolling Update | * | * | * | * |
| Blue/Green | * | * | | * |

#### Wait for deployment to succeed

When using the Recreate or Rolling update deployment strategy, you have the option to wait for the deployment to succeed or not before the step completes. A completed deployment means all liveness checks passed, the rollout succeeded and all Pod resources have been updated.

:::div{.success}
The Blue/Green deployment strategy always waits for the rollout to succeed, as this is the point at which the Service resource is modified to point to the new Deployment resource.
:::

### Volumes

[Volume resources](https://oc.to/KubernetesVolumes) allow external data to be accessed by a Container resource via its file system. Volume resources are defined in the `Volumes` section, and later referenced by the container configuration.
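At a high level, the `Volumes` section populates the Pod's `volumes` list, which containers then reference via `volumeMounts` (a minimal sketch with illustrative names):

```yaml
spec:
  template:
    spec:
      volumes:
        - name: config-volume          # defined in the Volumes section
          configMap:
            name: special-config       # illustrative ConfigMap name
      containers:
        - name: web
          image: nginx:1.25            # illustrative image
          volumeMounts:
            - name: config-volume      # referenced by the container configuration
              mountPath: /data
```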
The volumes can reference externally managed storage, such as disks hosted by a cloud provider, network shares, or ConfigMap and Secret resources that are created outside of the step. The volumes can also reference ConfigMap and Secret resources created by the step.

When created by the step, new ConfigMap and Secret resources are always created as new resources in Kubernetes with each deployment, and their unique names are automatically referenced by the Deployment resource. This ensures that deployments see the data in their associated ConfigMap or Secret resource, and new deployments don't leave old deployments in an undefined state by overwriting their data. Once a deployment has successfully completed, old Secret and ConfigMap resources created by the step will be removed.

When configuring ConfigMap and Secret volume types, an optional Default Mode can be specified to tell Kubernetes what file permissions to apply to the mounted volume. These are specified in a standard Unix-style octal format, e.g. `0644`.

**Note:** Kubernetes converts octal permission values to decimal when they are applied. Other areas of the Octopus UI will reflect this conversion, but editing remains in the more broadly adopted octal format.

Kubernetes provides a wide range of Volume resource types. The common, cloud-agnostic Volume resource types can be configured directly by Octopus. Other Volume resource types are configured as raw YAML.

#### Common values

All Volume resources must have a unique name defined in the `Name` field.

#### ConfigMap

The [ConfigMap Volume resource](https://oc.to/KubernetesConfigMapVolume) exposes the data saved in a [ConfigMap resource](https://oc.to/KubernetesConfigMap) as files in a container.

The `ConfigMap name` field defines the name of the ConfigMap resource that is to be exposed.

Individual ConfigMap resource values can be optionally mapped to custom files by adding them as items. The item `Key` is the name of the ConfigMap resource key.
The item `Path` is the name of the file that the ConfigMap value will be placed in.

For example, consider a ConfigMap resource created with the following YAML.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.level: very
  special.type: charm
```

To mount this ConfigMap as a volume, the `ConfigMap name` would be set to `special-config`. To expose the `special.level` key as a file called `my-special-level.txt`, an item is added with the `Key` of `special.level` and a `Path` of `my-special-level.txt`. If this Volume resource is mounted by a container under the directory `/data`, the file `/data/my-special-level.txt` would have the contents of `very`.

#### Secret

The [Secret Volume resource](https://oc.to/KubernetesSecretVolume) exposes the data saved in a [Secret resource](https://oc.to/KubernetesSecretResource) as files in a container.

The `Secret name` field defines the name of the Secret resource that is to be exposed.

Individual Secret resource values can be optionally mapped to custom files by adding them as items. The item `Key` is the name of the Secret resource key. The item `Path` is the name of the file that the Secret value will be placed in.

For example, consider a Secret resource created with the following YAML.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
```

To mount this Secret as a volume, the `Secret name` would be set to `my-secret`. To expose the `username` key as a file called `username.txt`, an item is added with the `Key` of `username` and a `Path` of `username.txt`. If this Volume resource is mounted by a container under the directory `/data`, the file `/data/username.txt` would have the contents of `admin`.

#### Empty dir

The [Empty Dir Volume resource](https://oc.to/KubernetesEmptyDirVolume) is used to create a volume that is initially empty. The volume can be shared between containers.
Some uses for an Empty Dir Volume resource are:

* Scratch space, such as for a disk-based merge sort.
* Check-pointing a long computation for recovery from crashes.
* Holding files that a content-manager Container fetches while a web server Container serves the data.

By default, Empty Dir Volume resources are stored on whatever medium is backing the node. Setting the `Medium` field to `Memory` will create the volume in a tmpfs, or RAM-backed, filesystem.

#### Host path

The [Host path volume resource](https://oc.to/KubernetesHostPathVolume) mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.

For example, some uses for a Host Path Volume resource are:

* Running a Container that needs access to Docker internals; use a hostPath of `/var/lib/docker`.
* Running cAdvisor in a Container; use a hostPath of `/sys`.
* Allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as.

The `Path` field is required and is set to the file or directory on the node's host filesystem that is to be exposed to the container.
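As a sketch, a hostPath volume exposing the node's Docker internals would look like this in a Pod spec (the volume name is illustrative):

```yaml
volumes:
  - name: docker-internals    # unique volume Name
    hostPath:
      path: /var/lib/docker   # the required Path field
      type: Directory         # the optional Type field
```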
The `Type` field is optional and has the supported values:

| Value | Behavior |
|-|-|
| | Empty string (default) is for backward compatibility, which means that no checks will be performed before mounting the hostPath volume. |
| DirectoryOrCreate | If nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, having the same group and ownership with Kubelet. |
| Directory | A directory must exist at the given path. |
| FileOrCreate | If nothing exists at the given path, an empty file will be created there as needed with permission set to 0644, having the same group and ownership with Kubelet. |
| File | A file must exist at the given path. |
| Socket | A UNIX socket must exist at the given path. |
| CharDevice | A character device must exist at the given path. |
| BlockDevice | A block device must exist at the given path. |

#### Persistent volume claim

The [Persistent Volume Claim volume resource](https://oc.to/KubernetesPersistentVolumeClaimVolume) is used to mount a PersistentVolume into a Pod. [PersistentVolume resources](https://oc.to/KubernetesPersistentVolumes) are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.

The `Persistent Volume Claim Name` field must be set to the name of the PersistentVolumeClaim resource to be used. For example, consider a PersistentVolumeClaim resource created with the following YAML:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

The `Persistent Volume Claim Name` field would be set to `mysql-pv-claim`.

#### Raw YAML

Kubernetes supports a huge range of volume resources, and only a small number are exposed directly by the step user interface. Other volume resources can be defined as raw YAML.
The YAML entered must only include the details of the specific volume resource, and not include fields like `name`. For example, consider this example YAML provided by the Kubernetes documentation for the [AWS EBS volume resource](https://oc.to/KubernetesAwsEbsVolume) type:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
    - image: registry.k8s.io/test-web-server
      name: test-container
      volumeMounts:
        - mountPath: /test-ebs
          name: test-volume
  volumes:
    - name: test-volume
      awsElasticBlockStore:
        volumeID:
        fsType: ext4
```

The YAML from this example that can be included in the `Raw YAML` field is the `awsElasticBlockStore` key, meaning the YAML entered into the field is this:

```yaml
awsElasticBlockStore:
  volumeID:
  fsType: ext4
```

### Containers

The `Containers` section is where the Container resources are defined. This is where the bulk of the configuration for the Deployment resource is found. The configuration options for a Container resource are broken down into a number of sections.

#### Image details

Each Container resource must reference a container image from a [Docker feed](/docs/packaging-applications/package-repositories/docker-registries). The container image must have a name that consists of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character. The image is then selected from one of those available from the Docker feed.

If the Docker feed requires authentication, Octopus will automatically generate the [required Secret resource](https://oc.to/KubernetesPrivateRegistry) as part of the deployment.

#### Ports

Each Container resource can expose multiple ports. The port `Name` is optional. If it is specified, Service resources can refer to the port by its name. The `Port` number is required and must be a number between 1 and 65535. The `Protocol` is optional and will default to `TCP`.
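These port settings correspond to the container's `ports` list (a minimal sketch; the names are illustrative):

```yaml
containers:
  - name: web
    image: nginx:1.25        # illustrative image
    ports:
      - name: http           # optional port Name
        containerPort: 80    # required Port number (1-65535)
        protocol: TCP        # optional Protocol, defaults to TCP
```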
#### Image pull policy

The image pull policy and the tag of the image affect when the kubelet attempts to pull the specified image.

* `If Not Present`: the image is pulled only if it is not already present locally.
* `Always`: the image is pulled every time the pod is started.
* `Default`, and either the image tag is `latest` or it is omitted: `Always` is applied.
* `Default`, and the image tag is present but not `latest`: `If Not Present` is applied.
* `Never`: the image is assumed to exist locally. No attempt is made to pull the image.

#### Container type

To support configuring and initializing Pod resources, Kubernetes has the concept of an [Init Container resource](https://oc.to/KubernetesInitContainer). Init Container resources are run before App Container resources and are often used to run setup scripts. For example, an Init Container resource may be used to set the permissions on a directory exposed by a PersistentVolumeClaim volume resource before the App Container resource is launched. This is especially useful when you do not manage the App Container resource image, and therefore can't include such initialization directly into the image.

Selecting the `Init container` check-box configures the Container resource as an Init Container resource.

#### Resources

Each Container resource can request a minimum allocation of CPU and memory resources and set a maximum resource limit. The requested resources must be available in the Kubernetes cluster, or else the Deployment resource will not succeed. The resource limits allow a Container resource to burst up to the defined limits.

The `CPU Request` field defines the minimum CPU resources that the Container resource requires. The value is measured in [CPU units](https://oc.to/KubernetesCpuUnits). One CPU, in Kubernetes, is equivalent to:

* 1 AWS vCPU
* 1 GCP Core
* 1 Azure vCore
* 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

Fractional values are allowed.
A Container that requests `0.5` cpu is guaranteed half as much CPU as a Container that requests `1` cpu. You can use the suffix `m` to mean milli. For example, `100m` cpu and `0.1` cpu are the same. Precision finer than `1m` is not allowed.

The `CPU Limit` field defines the maximum amount of CPU resources that the Container resource can use.

The `Memory Request` field defines the minimum amount of memory that the Container resource requires. The memory resource is [measured in bytes](https://oc.to/KubernetesMemoryResourceUnits). You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent approximately the same value:

* 128974848
* 129e6
* 129M
* 123Mi

The `Memory Limit` field defines the maximum amount of memory that can be consumed by the Container resource.

#### Environment variables

Environment variables can be set in three ways:

1. Plain name/value pairs. These are defined by clicking the `Add Environment Variable` button. The `Name` is the environment variable name, and the `Value` is the environment variable value.
2. Expose a ConfigMap resource value as an environment variable. These are defined by clicking the `Add ConfigMap Environment Variable` button. The `Name` is the environment variable name. The `ConfigMap Name` is the name of the ConfigMap resource. The `Key` is the ConfigMap resource key whose value is to be set as the environment variable value.
3. Expose a Secret resource value as an environment variable. These are defined by clicking the `Add Secret Environment Variable` button. The `Name` is the environment variable name. The `Secret Name` is the name of the Secret resource. The `Key` is the Secret resource key whose value is to be set as the environment variable value.

#### Volume mounts

In the [Volumes](#volumes) section we defined the Volume resources that were exposed to the Container resource.
It is here in the `Volume Mounts` container section that we map those Volume resources to the Container resource.

Each Volume Mount requires a unique `Name`. The `Mount Path` is the path in the Container resource file system where the Volume resource will be mounted, e.g. `/data` or `/etc/my-app/config`.

The `Sub Path` field is optional, and can be used to mount a subdirectory exposed by the Volume resource. This is useful when a single Volume resource is shared between multiple Container resources, because it allows each Container resource to mount only the subdirectory it requires. For example, a Volume resource may expose a directory structure like:

```
- webserver
  - content
- database
```

A Container resource hosting a web server would specify the `Sub Path` to be `webserver/content`, while a Container resource hosting a database would specify the `Sub Path` of `database`.

The `Read Only` field defines if the Volume resource is mounted in read only mode.

:::div{.hint}
Some Volume resources like ConfigMap and Secret are always mounted in read only mode, regardless of the setting in the `Read Only` field. See https://github.com/kubernetes/kubernetes/issues/62099 for more details.
:::

#### Liveness probe

The [Liveness probe resource](https://oc.to/KubernetesProbes) configures a health check that is executed against the Container resource to verify that it is currently operational.

The `Failure threshold` defines how many times the probe can fail after the pod has been started. After this many failures, the pod is restarted. The default value is 3.

The `Timeout` defines the number of seconds after which the probe times out. The default value is 1 second.

The `Initial delay` defines the number of seconds to wait after the container has started before the probe is initiated.

The `Period` defines how frequently in seconds the probe is executed. The default value is 10.

The `Liveness probe type` defines the type of probe that is used to conduct the health check.
Kubernetes supports three types of probes:

* `Command`, which will execute newline-separated commands inside the container. If the return value is `0`, it is considered to be healthy.
* `Http`, which will execute a HTTP GET operation against a URL. If the request returns a status code between 200 and 399 inclusive, it is considered healthy.
* `TCP Socket`, which will attempt to establish a connection against a TCP socket. If the connection can be established, it is considered healthy.

#### Command

The command probe type has one field, `Health check commands`, that accepts a line break separated list of arguments. For example, if you want to run the command `/opt/healthcheck my_service "an argument with a space"`, you would enter the following text into the `Health check commands` field:

```
/opt/healthcheck
my_service
an argument with a space
```

#### Http

The Http probe type has five fields.

The `Host` field defines the host to connect to. If not defined, this value will default to the IP address of the Pod resource.

The `Path` field defines the URL path that the HTTP GET request will be sent to.

The `Scheme` field defines the scheme of the URL that is requested. If not defined, this value defaults to `http`.

The `Port` field defines the port that is requested. This value can be a number, like `80`, or a [IANA](https://oc.to/IANA) port name.

Additional HTTP headers can be defined by clicking the `Add HTTP Header` button. The `Name` is the HTTP header name, and the `Value` is the header value.

#### TCP socket

The TCP Socket probe type has two fields.

The `Host` field defines the host to connect to. If not defined, this value will default to the IP address of the Pod resource.

The `Port` field defines the port that is requested. This value can be a number, like `80`, or a [IANA](https://oc.to/IANA) port name.
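These settings map onto the container's `livenessProbe` field along these lines (a minimal sketch with illustrative values):

```yaml
containers:
  - name: web
    image: nginx:1.25          # illustrative image
    livenessProbe:
      httpGet:                 # the Http probe type
        path: /healthz         # Path
        port: 80               # Port (number or IANA port name)
        scheme: HTTP           # Scheme
      initialDelaySeconds: 5   # Initial delay
      periodSeconds: 10        # Period (default 10)
      timeoutSeconds: 1        # Timeout (default 1)
      failureThreshold: 3      # Failure threshold (default 3)
```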
#### Readiness probe

The [Readiness probe resource](https://oc.to/KubernetesProbes) configures a health check that is executed against the Container resource to verify that it has started correctly. Readiness probes are not supported by Init Container resources.

:::div{.hint}
If defined, the readiness probe must succeed for a [Blue/Green](#blue-green-deployment-strategy) deployment to complete successfully. If the readiness probe fails, the Blue/Green deployment will halt at [phase 3](#phase-3).
:::

The `Success threshold` defines how many consecutive times the probe must succeed for the container to be considered successful after a failure. The default value is 1.

The `Failure threshold` defines how many times the probe can fail after the pod has been started. After this many failures, the pod is marked Unready. The default value is 3.

The `Timeout` defines the number of seconds to wait for a probe response. The default value is 1 second.

The `Initial delay` defines the number of seconds to wait after the container has started before the probe is initiated.

The `Period` defines how frequently in seconds the probe is executed. The default value is 10.

The `Readiness probe type` defines the type of probe that is used to conduct the health check.

Kubernetes supports three types of probes:

* `Command`, which will execute a command inside the container. If the command returns `0`, it is considered to be healthy.
* `Http`, which will execute a HTTP GET operation against a URL. If the request returns a status code between 200 and 399 inclusive, it is considered healthy.
* `TCP Socket`, which will attempt to establish a connection against a TCP socket. If the connection can be established, it is considered healthy.

#### Command

The command probe type has one field, `Health check commands`, that accepts a line break separated list of arguments.
For example, if you want to run the command `/opt/healthcheck my_service "an argument with a space"`, you would enter the following text into the `Health check commands` field:

```
/opt/healthcheck
my_service
an argument with a space
```

#### Http

The Http probe type has five fields.

The `Host` field defines the host to connect to. If not defined, this value will default to the IP address of the Pod resource.

The `Path` field defines the URL path that the HTTP GET request will be sent to.

The `Scheme` field defines the scheme of the URL that is requested. If not defined, this value defaults to `http`.

The `Port` field defines the port that is requested. This value can be a number, like `80`, or a [IANA](https://oc.to/IANA) port name.

Additional HTTP headers can be defined by clicking the `Add HTTP Header` button. The `Name` is the HTTP header name, and the `Value` is the header value.

#### TCP socket

The TCP socket probe type has two fields.

The `Host` field defines the host to connect to. If not defined, this value will default to the IP address of the Pod resource.

The `Port` field defines the port that is requested. This value can be a number, like `80`, or a [IANA](https://oc.to/IANA) port name.

#### Command

The [command and arguments](https://oc.to/KubernetesCommand) that are executed when a Container resource is launched can be defined or overridden in the `Command` section. This section has two fields: `Command` and `Command arguments`. Each plays a slightly different role relating to how Docker images define the command that is used to launch the container.

Docker images can define an [ENTRYPOINT](https://oc.to/DockerEntrypoint), a [CMD](https://docs.docker.com/reference/dockerfile/#cmd), or both. When both are defined, the CMD value is passed to the ENTRYPOINT. So if CMD is set to `["hello", "world"]` and ENTRYPOINT is set to `["print"]`, the resulting command would be `print hello world`.
If the `Command` field is specified, it will override the value of the Docker image `ENTRYPOINT`. So if the `Command` was set to `echo`, the resulting command would be `echo hello world`.

If the `Command arguments` field is specified, it will override the Docker image `CMD`. So if the `Command arguments` was set to `hello Octopus` then the resulting command would be `print hello Octopus`.

Each of these fields accepts multiple arguments separated by line breaks.

For example, if you want to run the command `/opt/my_app my_service "an argument with a space"`, you would enter the following text into the `Command` field:

```
/opt/my_app
```

And the following into the `Command arguments` field:

```
my_service
an argument with a space
```

#### Startup probe

The [Startup probe resource](https://oc.to/KubernetesProbes) configures a health check that must complete before the Liveness probe begins. This is useful to accommodate any initial delay in booting a container.

:::div{.hint}
If defined, the startup probe must succeed for a [Blue/Green](#blue-green-deployment-strategy) deployment to complete successfully. If the startup probe fails, the Blue/Green deployment will halt at [phase 3](#phase-3).
:::

The `Success threshold` defines how many consecutive times the probe must succeed for the container to be considered successful after a failure. The default value is 1.

The `Failure threshold` defines how many times the probe can fail after the pod has been started. After this many failures, the pod is marked Unready. The default value is 3.

The `Timeout` defines the number of seconds to wait for a probe response. The default value is 1 second.

The `Initial delay` defines the number of seconds to wait after the container has started before the probe is initiated.

The `Period` defines how frequently in seconds the probe is executed. The default value is 10.

The `Startup probe type` defines the type of probe that is used to conduct the health check.
Kubernetes supports three types of probes:

* `Command`, which will execute a command inside the container. If the command returns `0`, it is considered to be healthy.
* `Http`, which will execute an HTTP GET operation against a URL. If the request returns a status code between 200 and 399 inclusive, it is considered healthy.
* `TCP Socket`, which will attempt to establish a connection against a TCP socket. If the connection can be established, it is considered healthy.

#### Command

The command probe type has one field, `Health check commands`, that accepts a line-break-separated list of arguments.

For example, if you want to run the command `/opt/healthcheck my_service "an argument with a space"`, you would enter the following text into the `Health check commands` field:

```
/opt/healthcheck
my_service
an argument with a space
```

#### Http

The Http probe type has five fields.

The `Host` field defines the host to connect to. If not defined, this value will default to the IP address of the Pod resource.

The `Path` field defines the URL path that the HTTP GET request will be sent to.

The `Scheme` field defines the scheme of the URL that is requested. If not defined, this value defaults to `http`.

The `Port` field defines the port that is requested. This value can be a number, like `80`, or an [IANA](https://oc.to/IANA) port name.

Additional HTTP headers can be defined by clicking the `Add HTTP Header` button. The `Name` is the HTTP header name, and the `Value` is the header value.

#### TCP socket

The TCP socket probe type has two fields.

The `Host` field defines the host to connect to. If not defined, this value will default to the IP address of the Pod resource.

The `Port` field defines the port that is requested. This value can be a number, like `80`, or an [IANA](https://oc.to/IANA) port name.
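Taken together, the probe settings described above map directly onto the standard probe fields of the container in the generated Deployment resource. The following fragment is an illustrative sketch only (the container name, image, and values are examples, not output captured from the step):

```yaml
containers:
  - name: my-app                 # hypothetical container name
    image: my-app:1.0.0          # hypothetical image
    readinessProbe:
      httpGet:                   # the "Http" probe type
        path: /healthz
        port: 80
        scheme: HTTP
      successThreshold: 1        # Success threshold (default 1)
      failureThreshold: 3        # Failure threshold (default 3)
      timeoutSeconds: 1          # Timeout (default 1 second)
      initialDelaySeconds: 5     # Initial delay
      periodSeconds: 10          # Period (default 10)
    startupProbe:
      exec:                      # the "Command" probe type
        command:
          - /opt/healthcheck
          - my_service
          - an argument with a space
      failureThreshold: 3
      periodSeconds: 10
```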
#### Pod security context

The `Pod Security context` section defines the [container resource security context options](https://oc.to/KubernetesContainerSecurityContext).

The `Allow privilege escalation` section controls whether a process can gain more privileges than its parent process. Note that this field is implied when the `Privileged` option is enabled.

The `Privileged` section runs the container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host.
The `Read only root file system` section determines whether this container has a read-only root filesystem.

The `Run as non-root` section indicates that the container must run as a non-root user.

The `Run as user` section defines the UID used to run the entrypoint of the container process.

The `Run as group` section defines the GID used to run the entrypoint of the container process.

#### Pod annotations

The `Pod Annotations` section defines the annotations that are added to the Deployment resource `spec.template.metadata` field. These annotations are in turn applied to the Pod resource created by the Deployment resource.

For example, consider the `Pod Annotations` defined in the screenshot below.

:::figure
![](/docs/img/deployments/kubernetes/deploy-container/pod-annotations.png)
:::

This will result in a Deployment resource YAML file something like the following.

```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: httpd
  labels:
    Octopus.Deployment.Id: deployments-10341
    Octopus.Step.Id: fead0da8-fd8a-4a03-9b70-5160cd378a8a
    Octopus.Environment.Id: environments-1
    Octopus.Deployment.Tenant.Id: untenanted
    Octopus.Kubernetes.DeploymentName: httpd
spec:
  replicas: 1
  selector:
    matchLabels:
      Octopus.Kubernetes.DeploymentName: httpd
  template:
    metadata:
      labels:
        Octopus.Deployment.Id: deployments-10341
        Octopus.Step.Id: fead0da8-fd8a-4a03-9b70-5160cd378a8a
        Octopus.Environment.Id: environments-1
        Octopus.Deployment.Tenant.Id: untenanted
        Octopus.Kubernetes.DeploymentName: httpd
      annotations:
        podannotation: "annotation_value"
    spec:
      containers:
        - name: httpd
          image: index.docker.io/httpd:2.4.35
          ports:
            - name: web
              containerPort: 80
          securityContext: {}
      imagePullSecrets:
        - name: octopus-feedcred-feeds-dockerhub-with-creds
  strategy:
    type: Recreate
```

In particular, the `spec.template.metadata.annotations` field has been populated with the pod annotations.
```yaml
spec:
  template:
    metadata:
      annotations:
        podannotation: "annotation_value"
```

When this Deployment resource is deployed to a Kubernetes cluster, it will create a Pod resource with that annotation defined. In the screenshot below you can see that the YAML representation of the Pod resource created by the Deployment resource has the same annotations.

:::figure
![](/docs/img/deployments/kubernetes/deploy-container/pod-annotation-deployed.png)
:::

#### Deployment annotations

The `Deployment Annotations` section defines the annotations that are added to the Deployment resource.

For example, consider the `Deployment Annotations` defined in the screenshot below.

:::figure
![](/docs/img/deployments/kubernetes/deploy-container/deployment-annotation.png)
:::

This will result in a Deployment resource YAML file something like the following.

```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: httpd
  labels:
    Octopus.Deployment.Id: deployments-10342
    Octopus.Step.Id: fead0da8-fd8a-4a03-9b70-5160cd378a8a
    Octopus.Environment.Id: environments-1
    Octopus.Deployment.Tenant.Id: untenanted
    Octopus.Kubernetes.DeploymentName: httpd
  annotations:
    deploymentannotation: "annotation_value"
spec:
  replicas: 1
  selector:
    matchLabels:
      Octopus.Kubernetes.DeploymentName: httpd
  template:
    metadata:
      labels:
        Octopus.Deployment.Id: deployments-10342
        Octopus.Step.Id: fead0da8-fd8a-4a03-9b70-5160cd378a8a
        Octopus.Environment.Id: environments-1
        Octopus.Deployment.Tenant.Id: untenanted
        Octopus.Kubernetes.DeploymentName: httpd
    spec:
      containers:
        - name: httpd
          image: index.docker.io/httpd:2.4.35
          ports:
            - name: web
              containerPort: 80
          securityContext: {}
      imagePullSecrets:
        - name: octopus-feedcred-feeds-dockerhub-with-creds
  strategy:
    type: Recreate
```

In particular, the `metadata.annotations` field has been populated with the deployment annotations.
```yaml
metadata:
  annotations:
    deploymentannotation: "annotation_value"
```

### Custom resources YAML

When deploying a Kubernetes Deployment resource, it can be useful to have other Kubernetes resources tied to the Deployment resource lifecycle. The `Configure and apply Kubernetes resources` step already deploys ConfigMap and Secret resources in a tightly coupled fashion with their associated Deployment resource. Doing so means the containers in a Deployment resource can reliably reference a ConfigMap or Secret resource during an update, and will not be left in an inconsistent state where a new ConfigMap or Secret resource is referenced by an old Container resource. Once a Deployment resource is fully deployed and healthy, these old ConfigMap and Secret resources are cleaned up automatically.

There are other resources that benefit from being part of this lifecycle. For example, a NetworkPolicy resource may be created with each deployment, selecting the Pod resources that were part of the deployment. Or you may have custom resource definitions that are specific to your own local Kubernetes cluster.

The `Custom resource YAML` section allows additional Kubernetes resources to participate in the lifecycle of the Deployment resource. It works like this:

1. You define the YAML of one or more Kubernetes resources in the code editor. The editor accepts multiple YAML documents separated by a triple dash e.g.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      Octopus.Kubernetes.DeploymentName: "#{Octopus.Action.KubernetesContainers.ComputedDeploymentName}"
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: my-project
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
---
apiVersion: v1
data:
  allowed: '"true"'
  enemies: aliens
  lives: "3"
kind: ConfigMap
metadata:
  name: game-config-env-file
```

2. During the deployment, each resource will be modified to ensure that it has a unique name, and includes the common labels that are applied to all other resources created as part of the step. For example, the name of the NetworkPolicy resource will be changed from the value entered into the YAML of `test-network-policy` to something like `test-network-policy-deployment-1234`. The NetworkPolicy resource will also have labels like `Octopus.Deployment.Id`, `Octopus.Deployment.Tenant.Id`, `Octopus.Environment.Id`, `Octopus.Kubernetes.DeploymentName` and `Octopus.Step.Id` applied. These labels allow Octopus to track the resource across deployments.

3. Once the deployment has succeeded, any old resources of the kinds that were defined in the `Custom resource YAML` field will be found and deleted. For example, any `NetworkPolicy` or `ConfigMap` resources in the target namespace created by a previous deployment will be deleted.

By creating each custom resource with a unique name and common labels, Octopus will ensure that a new resource is created with each deployment, and old resources are cleaned up. This means that the custom resources are tightly coupled to a Deployment resource, and can be treated as a single deployment.
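To make the renaming and labeling described above concrete, the NetworkPolicy resource from the example would be applied to the cluster with metadata along these lines. This is an illustrative sketch: the deployment ID, step ID, and deployment name are placeholders, not real values.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  # The name is made unique by appending the Octopus deployment ID
  name: test-network-policy-deployment-1234
  labels:
    # Common labels that let Octopus track the resource across deployments
    Octopus.Deployment.Id: deployments-1234
    Octopus.Deployment.Tenant.Id: untenanted
    Octopus.Environment.Id: environments-1
    Octopus.Kubernetes.DeploymentName: my-deployment
    Octopus.Step.Id: 00000000-0000-0000-0000-000000000000
```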
:::div{.success} To deploy resources that are not bound to the lifecycle of the Deployment resource, use an additional step such as the `Run a kubectl script` or `Deploy Kubernetes YAML` step. ::: ### Service The `Service` feature creates a Service resource that directs traffic to the Pod resources configured by the `Deployment` section. Although the Deployment and Service resources are separate objects in Kubernetes, they are treated as a single deployment by the `Deploy Kubernetes Container` step, resulting in the Service resource always directing traffic to the Pod resources created by the associated Deployment resource. #### Service name Each Service resource requires a unique name, defined in the `Name` field. The Service resource name is not affected by the deployment strategy. #### Service type A Service resource can be one of three different types: * Cluster IP * Node Port * Load Balancer #### Cluster IP A Cluster IP Service resource provides a private IP address that applications deployed within the Kubernetes cluster can use to access other Pod resources. :::figure ![Cluster IP](/docs/deployments/kubernetes/cluster-ip.svg) ::: #### Node port A Node Port Service resource provides the same internal IP address that a Cluster IP Service resource does. In addition, it creates a port on each Kubernetes node that directs traffic to the Service resource. This makes the service accessible from any node, and if the nodes have public IP addresses then the Node Port Service resource is also publicly accessible. :::figure ![Node Port](/docs/deployments/kubernetes/node-port.svg) ::: #### Load balancer A Load Balancer Service resource provides the same Cluster IP and Node Ports that the other two service resources provide. In addition, it will create a cloud load balancer that directs traffic to the node ports. The particular load balancer that is created depends on the environment in which the LoadBalancer Service resource is created. 
In AWS, an ELB or ALB can be created. Azure or Google Cloud will create their respective load balancers. :::figure ![Load balancer](/docs/deployments/kubernetes/loadbalancer.svg) ::: #### Cluster IP address The `Cluster IP Address` field can be used to optionally assign a fixed internal IP address to the Service resource. #### Ports Each port exposed by the Service resource has four common fields: Name, Port, Target Port and Protocol. The `Name` field assigns an optional name to the port. This name can be used by Ingress resource objects. The `Port` field defines the internal port on the Service resource that internal applications can use. The `Target Port` field defines the name or number of the port exposed by a container. The `Protocol` field defines the protocol exposed by the port. It can be `TCP` or `UDP`. If the Service resource is a NodePort or LoadBalancer, then there is an additional optional `Node Port` field that defines the port exposed on the nodes that direct traffic to the Service resource. If not defined, a port number will be automatically assigned. :::figure ![Service ports](/docs/deployments/kubernetes/ports.svg) ::: ### Ingress The `Ingress` feature is used to create an Ingress resource. Ingress resources provide a way to direct HTTP traffic to Service resources based on the requested host and path. #### Ingress name Each Ingress resource must have a unique name, defined in the `Ingress name` field. The name of the ingress resource is not affected by the deployment strategy. #### Ingress class name [Starting with Kubernetes 1.18](https://oc.to/K8SIngressClassAnnouncement), the ingress controller that implements ingress rules is defined in the `Ingress Class Name` field. See the [Kubernetes documentation](https://oc.to/K8SIngressClassDocs) for more information. #### Ingress host rules Ingress resources configure routes based on the host that the request was sent to. New hosts can be added by clicking the `Add Host Rule` button. 
The `Host` field defines the host that the request was sent to. This field is optional, and if left blank will match all hosts.

The `Add Path` button adds a new mapping between a request path and the Service resource port.

The `Path` field is the path of the request to match. It must start with a `/`.

The `Service Port` field is the port from the associated Service resource that the traffic will be sent to.

#### Ingress annotations

Ingress resources only provide configuration. An Ingress Controller resource uses the Ingress configuration to direct network traffic within the Kubernetes cluster.

There are many Ingress Controller resources available. [NGINX](https://oc.to/NginxIngressController) is a popular option that is used by the [Azure AKS service](https://oc.to/AzureIngressController). Google Cloud provides its [own Ingress Controller resource](https://oc.to/GoogleCloudIngressController). A [third party Ingress Controller resource](https://oc.to/AwsIngressController) is available for AWS making use of the ALB service.

The diagram below shows a typical configuration with Ingress and Ingress Controller resources.

:::figure
![Ingress](/docs/deployments/kubernetes/ingress.svg)
:::

:::div{.hint}
There is no standard behavior for the creation of load balancers when configuring Ingress Controller resources. For example, the Google Cloud Ingress Controller will create a new load balancer for every Ingress resource. The [documentation](https://oc.to/GoogleCloudIngressFanOut) suggests creating a single Ingress resource to achieve a fan-out pattern that shares a single load balancer. This can be achieved using the [Configure and apply a Kubernetes Ingress](/docs/deployments/kubernetes/deploy-ingress) step. On the other hand, the [NGINX Ingress Controller resource installation procedure](https://oc.to/NginxIngressControllerDocs) creates a single LoadBalancer Service resource that is shared by default.
:::

Each of these different implementations is configured through the Ingress resource annotations. Annotations are key-value pairs, and the values assigned to them depend on the Ingress resource that is being configured. The list below links to the documentation that describes the supported annotations.

* [NGINX](https://oc.to/NginxIngressControllerAnnotations)
* [Google Cloud](https://oc.to/GoogleCloudIngressControllerGithub)
* [AWS](https://oc.to/AwsAlbAnnotations)

A new annotation is defined by clicking the `Add Annotation` button. The `Name` field will provide suggested annotation names, but this list of suggestions is not exhaustive, and any name can be added. The `Value` field defines the annotation value.

:::div{.hint}
Annotation values are always considered to be strings. See this [GitHub issue](https://oc.to/KubernetesAnnotationStringsIssue) for more information.
:::

### ConfigMap and secret

It is often convenient to have settings saved in ConfigMap and Secret resources that are tightly coupled to the Deployment resource. Ensuring each version of a Deployment resource has its own ConfigMap or Secret resource means that deployments are not left in an inconsistent state as new Deployment resources are rolled out alongside existing Deployment resources, which is the case for both the Rolling Update and Blue/Green deployment strategies.

The ConfigMap and Secret features are used to create ConfigMap and Secret resources that are created with the associated Deployment resource, and cleaned up once a Deployment resource has been replaced. Like the Custom Resource feature, the ConfigMap and Secret features achieve this by creating resources with unique names for each deployment. The resources have a set of labels applied that allows Octopus to manage them during a deployment.

#### Custom ConfigMap and secret names

By default, the ConfigMap and Secret resources created by this step have unique names generated by appending the ID of the deployment.
For example, a ConfigMap may be defined in the step with the name of `my-app-settings`, and it will be created in the Kubernetes cluster with the name of `my-app-settings-deployment-1234`, where `deployment-1234` is the ID of the Octopus deployment as a lower case string.

The templates used to generate these names can be defined with the following variables:

* `Octopus.Action.KubernetesContainers.ConfigMapNameTemplate`
* `Octopus.Action.KubernetesContainers.SecretNameTemplate`

The values assigned to these variables will then be used to generate the names of the ConfigMap and Secret resources created by the step. By default, these are the templates that are used to generate the unique names:

* `#{Octopus.Action.KubernetesContainers.ConfigMapName}-#{Octopus.Deployment.Id | ToLower}`
* `#{Octopus.Action.KubernetesContainers.SecretName}-#{Octopus.Deployment.Id | ToLower}`

For example, to change the name assigned to the ConfigMap resource to include the time of deployment instead of the deployment ID, you can set the `Octopus.Action.KubernetesContainers.ConfigMapNameTemplate` variable to `#{Octopus.Action.KubernetesContainers.ConfigMapName}-#{ | NowDate "HH-mm-ss-dd-MMM-yyyy" | ToLower}`

## Learn more

- [Kubernetes blog posts](https://octopus.com/blog/tag/kubernetes/1)

:::div{.hint}
**Step updates**

**2024.1:**

- `Deploy Kubernetes containers` was renamed to `Configure and apply Kubernetes resources`.
:::

# HA Cluster Support

Source: https://octopus.com/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md

## Octopus Deploy HA Cluster

Similarly to Polling Tentacles, the Kubernetes agent must have a URL for each individual node in the HA Cluster so that it can receive commands from all nodes. These URLs must be provided when registering the agent, or some deployments may fail depending on which node the tasks are executing on.
To read more about selecting the right URL for your nodes, see [Polling Tentacles and Kubernetes agents with HA](/docs/administration/high-availability/polling-tentacles-with-ha).

## Agent Installation on an HA Cluster

### Octopus Deploy 2024.3+

To make things easier, Octopus will detect when it's running in HA mode and show an extra configuration page in the Kubernetes agent creation wizard, which asks you to provide a unique URL for each cluster node.

:::figure
![Kubernetes Agent HA Cluster Configuration Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-ha-cluster-configuration-page.png)
:::

Once these values are provided, the generated helm upgrade command will configure your new agent to receive commands from all nodes.

### Octopus Deploy 2024.2

To install the agent with Octopus Deploy 2024.2, you need to adjust the Helm command produced by the wizard before running it.

1. Use the wizard to produce the Helm command to install the agent. You may need to provide a ServerCommsAddress: you can provide any valid URL to progress the wizard.
2. Replace the `--set agent.serverCommsAddress="..."` property with

   ```bash
   --set agent.serverCommsAddresses="{https://<node1-address>:<port>/,https://<node2-address>:<port>/,https://<node3-address>:<port>/}"
   ```

   where each `<node-address>:<port>` is a unique address for an individual node.
3. Execute the Helm command in a terminal connected to the target cluster.

:::div{.warning}
The new property name is `agent.serverCommsAddresses`. Note that "Addresses" is plural.
:::

## Upgrading the Agent after Adding/Removing Cluster nodes

If you add or remove cluster nodes, you need to update your agent's configuration so that it continues to connect to all nodes in the cluster. To do this, you can simply run a helm upgrade command with the URLs of all current cluster nodes. The agent will remove any old URLs and replace them with the provided ones.
```bash
helm upgrade --atomic \
  --reuse-values \
  --set agent.serverCommsAddresses="{https://<node1-address>:<port>/,https://<node2-address>:<port>/,https://<node3-address>:<port>/}" \
  --namespace <agent-namespace> \
  <release-name> \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

## Kubernetes Monitor

:::div{.info}
Support for running the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) with high availability Octopus clusters was added in v2025.4.
:::

The Kubernetes monitor does not need to be configured with each individual Octopus Server node. Instead, set up a single load balancer endpoint for gRPC and use that URL. Refer to the [load balancer documentation](/docs/installation/load-balancers#grpc-services) for further information.

# Custom prompts

Source: https://octopus.com/docs/octopus-ai/assistant/custom-prompts.md

Custom prompts allow you to tailor the Octopus AI Assistant to your organization's specific needs and business processes. Instead of relying solely on the AI's general knowledge, you can embed your internal documentation, troubleshooting procedures, and domain-specific guidance directly into the assistant's responses.

## Why use custom prompts?

Custom prompts are particularly valuable for:

- Platform teams providing self-service support to development teams with organization-specific guidance
- Embedding internal documentation and troubleshooting procedures into AI responses
- Standardizing responses across teams with consistent, approved solutions
- Reducing support burden by providing context-aware, automated first-line support

For example, instead of getting generic advice about deployment failures, a custom prompt can direct users to your internal runbooks, specific team contacts, or approved remediation procedures.

## How custom prompts work

Custom prompts are defined using Library Variable Sets in Octopus Deploy and work alongside the AI's built-in knowledge.
When a user interacts with the Octopus AI Assistant on a specific page, any custom prompts configured for that page will appear as suggested options. There are two types of custom prompt variables: - **Prompt variables** (`PageName[#].Prompt`) - The text displayed to users and sent to the AI - **System prompt variables** (`PageName[#].SystemPrompt`) - Optional additional instructions that guide the AI's response but aren't shown to users ## Adding custom prompts to Octopus To add custom prompts to your Octopus AI Assistant: 1. Open the Octopus Deploy web portal 2. On the main page for the space, click **Variable Sets** 3. Click **Add Variable Set** 4. Enter `OctoAI Prompts` for the variable set name 5. Add variables in the new variable set using the naming convention below ### Variable naming convention Variables must follow this format: - `PageName[#].Prompt` - The prompt displayed in the UI and passed to the LLM - `PageName[#].SystemPrompt` - Optional additional prompt instructions passed to the LLM but not shown in the UI Where: - `PageName` is one of the supported Octopus Deploy page names (see [Supported pages table](#supported-pages) below) - `#` is a number from 0 to 4 inclusive for up to 5 prompts per page For example: - `Project.Deployment[0].Prompt` - A prompt displayed in the Octopus AI Assistant when viewing a project deployment - `Project.Deployment[0].SystemPrompt` - The system prompt for that deployment prompt ## Writing custom prompts ### Basic prompt structure A basic prompt variable defines what users see and what gets sent to the AI. For example: | Variable name | Variable value | |----------|-------| | `Project.Deployment[0].Prompt` | Why did the deployment fail? If the deployment didn't fail, say so. Provide suggestions for resolving the issue. | This prompt relies on the AI's built-in knowledge and the deployment context (logs, process configuration, etc.) to provide an answer. 
### Adding system prompts for business context System prompts allow you to embed your organization's specific knowledge and procedures. The system prompt guides the AI's response without being visible to users. For example: | Variable name | Variable value | |----------|-------| | `Project.Deployment[0].SystemPrompt` | If the logs indicate that a Docker image is missing, You must only provide the suggestion that the user must visit to get additional instructions to resolve missing Docker containers. You will be penalized for offering generic suggestions to resolve a missing Docker image. You will be penalized for offering script suggestions to resolve a missing Docker image. You will be penalized for suggesting step retries to resolve a missing Docker image. | This system prompt is sent to the LLM to provide specific instructions on how to respond to the request, and: - Detects a specific condition (missing Docker image) - Provides organization-specific guidance (internal documentation link) - Prevents generic responses that don't align with your procedures ## Supported pages The following table shows all the pages where custom prompts can be configured. Each page corresponds to a specific area of the Octopus web interface, allowing you to provide targeted assistance based on what users are currently viewing. 
| Page Name | Description |
|-----------|-------------|
| Dashboard | The main dashboard |
| Tasks | The tasks overview |
| Project | The project dashboard |
| Project.Settings | The project settings |
| Project.VersionControl | The project version control settings |
| Project.ITSMProviders | The project ITSM settings |
| Project.Channels | The project channels |
| Project.Triggers | The project triggers |
| Project.Process | The project deployment process editor |
| Project.Step | An individual step in the deployment process editor |
| Project.Variables | The project variables editor |
| Project.AllVariables | The overview of all the project variables |
| Project.PreviewVariables | The preview of all the project variables |
| Project.VariableSets | The project library variable sets |
| Project.TenantVariables | The project tenant variables |
| Project.Operations | The project runbooks dashboard |
| Project.Operations.Triggers | The runbook triggers |
| Project.Deployment | The project deployments |
| Project.Release | The project releases |
| Project.Runbooks | The project runbooks |
| Project.Runbook.Runbook | An individual runbook |
| Project.Runbook.Run | A runbook run |
| LibraryVariableSets | The library variable sets |
| LibraryVariableSet.LibraryVariableSet | An individual library variable set |
| Machines | The targets dashboard |
| Machine.Machine | An individual target |
| Accounts | The accounts dashboard |
| Account.Account | An individual account |
| Workers | The workers dashboard |
| WorkerPools | The worker pools dashboard |
| MachinePolicies | The machine policies dashboard |
| MachineProxies | The machine proxies dashboard |
| Feeds | The feeds dashboard |
| GitCredentials | The git credentials dashboard |
| GitConnections | The GitHub App dashboard |
| Lifecycles | The lifecycles dashboard |
| Packages | The built-in feed dashboard |
| ScriptModules | The script modules dashboard |
| StepTemplates | The step templates dashboard |
| TagSets | The tag sets dashboard |
| TagSets.TagSet | An individual tag set |
| Tenants | The tenants dashboard |
| Tenant.Tenant | An individual tenant |
| Certificates | The certificates dashboard |
| Environments | The environments dashboard |
| Environment.Environment | An individual environment |
| Infrastructure | The infrastructure dashboard |
| BuildInformation | The build information dashboard |

# Dynamic workers

Source: https://octopus.com/docs/octopus-cloud/dynamic-worker.md

[Workers](/docs/infrastructure/workers) are machines that can execute tasks that don’t need to be run on the Octopus Server or individual deployment targets. There are two types of worker you can use in Octopus Cloud: external workers and dynamic workers.

The most flexible type of worker is the [external worker](/docs/infrastructure/workers#external-workers): a machine, provided by the customer, accessed from Octopus Cloud via Windows or Linux Tentacle, via SSH, or via [Kubernetes workers](/docs/infrastructure/workers/kubernetes-worker). External workers are recommended when the customer needs full control of:

- worker resourcing
- worker configuration
- worker life-cycle
- installed software
- the number of workers that are available

Our larger customers often prefer external workers for these reasons. We recommend customers consider [Kubernetes workers](https://octopus.com/blog/kubernetes-worker), particularly where workloads require flexible scalability of compute resources.

The other type of worker available on Octopus Cloud is the dynamic worker.

:::div{.hint}
Self-hosted Octopus Server customers have access to a third type of worker, known as built-in workers. Built-in workers are processes that run on the same machine as Octopus Server. Built-in workers are **not** available to Octopus Cloud customers for security and performance reasons.
:::

## What you get with dynamic workers

Dynamic workers are isolated virtual machines, hosted and created on-demand by Octopus to run your deployments and runbook steps. Dynamic workers are provided as part of your Octopus Cloud subscription.

Customers may choose between Windows and Ubuntu virtual machine images for their dynamic workers. Octopus provides a [dynamic worker pool](/docs/infrastructure/workers/dynamic-worker-pools) of these virtual machine image types from which, as required, your Octopus Cloud will lease a freshly provisioned dynamic worker VM. Leases are held for a maximum of 72 hours. Customers can lease one dynamic worker VM from each pool concurrently.

## Limitations of dynamic workers

### Resourcing

Your Octopus Cloud [task cap](/docs/octopus-cloud/task-cap) determines the resources available to your dynamic worker. As of January 2025, dynamic worker virtual machines are resourced as follows. These specifications may be adjusted over time.

| Task cap | vCPUs (Qty.) | Memory (GB) |
| -------: | -----------: | ----------: |
| 5 | 2 | 4 |
| 10 | 4 | 8 |
| 20 | 4 | 8 |
| 40 | 8 | 16 |
| 80 | 16 | 32 |
| 160 | 32 | 64 |

:::div{.hint}
We recommend customers who would benefit from scalable workers consider [Kubernetes workers](/docs/infrastructure/workers/kubernetes-worker) over dynamic workers. Kubernetes workers allow worker operations to be executed within a Kubernetes cluster in a scalable manner, allowing compute resources used during the execution of a deployment process (or runbook) to be released when the deployment completes.
:::

### Life-cycle

Dynamic workers are created on demand and leased to an Octopus Cloud instance for a limited time [before being destroyed](/docs/infrastructure/workers/dynamic-worker-pools#on-demand). Dynamic workers are destroyed when they have been idle for 60 minutes or when they reach 72 hours of existence. All data written to disk is lost upon worker destruction.
### Installed software

Dynamic workers come with a small number of [baseline tools](/docs/infrastructure/workers/dynamic-worker-pools#available-dynamic-worker-images) installed. The version of the baseline tools may be updated between worker leases.

We do not recommend installing additional software on dynamic workers. Instead, we suggest you leverage [execution containers for workers](/docs/projects/steps/execution-containers-for-workers). Octopus provides execution containers with a baseline of tools pre-installed. Customers with specific software needs may also use [custom Docker images](/docs/projects/steps/execution-containers-for-workers/#custom-docker-images) as execution containers.

### IP addresses

Dynamic workers are assigned IP addresses outside the static IP range of your Octopus Cloud instance. If a known/static IP is required for your worker, please consider provisioning your own external worker.

## Let us know what you want for the future of dynamic workers

We are interested in hearing from customers who would value a higher specification of dynamic worker. Perhaps more highly resourced dynamic workers would be helpful, or workers with additional security features. If this interests you, please vote on our [Higher resourced, more secure dynamic workers](https://roadmap.octopus.com/c/189-higher-resourced-more-secure-dynamic-workers-for-octopus-cloud?&utm_medium=social&utm_source=starter_share) roadmap item and share with us how dynamic workers can better assist your deployment success.
## Learn more

- [Dynamic worker pools](/docs/infrastructure/workers/dynamic-worker-pools)
- [Execution containers](/docs/projects/steps/execution-containers-for-workers)

# octopus deployment-target cloud-region view

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-cloud-region-view.md

View a Cloud Region deployment target in Octopus Deploy.

```text
Usage:
  octopus deployment-target cloud-region view { | } [flags]

Flags:
  -w, --web                    Open in web browser

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus deployment-target cloud-region view 'EU'
octopus deployment-target cloud-region view Machines-100
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Deployment process

Source: https://octopus.com/docs/octopus-rest-api/examples/deployment-process.md

You can use the REST API to manage a project's [deployment process](/docs/projects/deployment-process). Typical tasks might include:

# Using in an Octopus Step

Source: https://octopus.com/docs/octopus-rest-api/octopus.client/using-client-in-octopus.md

You can use Octopus.Client from inside Octopus (for example in a script step or a package install script) by referencing it as a package. You can configure [nuget.org](https://api.nuget.org/v3/index.json) as an [External Feed](/docs/packaging-applications/package-repositories/nuget-feeds) that provides this package.
Octopus will automatically extract this package for you, allowing your script to reference the .dll file it contains using a relative path. For example:
**PowerShell**

```powershell
Add-Type -Path 'Octopus.Client/lib/netstandard2.0/Octopus.Client.dll'
```
**C#**

```csharp
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
```
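As a sketch of what connecting might look like once the assembly is loaded — the server URL and API key below are placeholder values, not real ones, so substitute your own instance details:

```powershell
# Load the client assembly from the extracted package (relative path, as above)
Add-Type -Path 'Octopus.Client/lib/netstandard2.0/Octopus.Client.dll'

# Placeholder connection details - replace with your own server URL and API key
$endpoint   = New-Object Octopus.Client.OctopusServerEndpoint("https://your-octopus-url", "API-XXXXXXXXXXXXXXXXXXXXXXXXXX")
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)

# Example query: list the names of the projects visible to the supplied API key
$repository.Projects.FindAll() | ForEach-Object { $_.Name }
```

This snippet cannot run without a reachable Octopus Server and a valid API key, so treat it as a template rather than a copy-paste solution.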
Whichever approach you use, the server URL and credentials (such as an API key) still need to be supplied to establish the connection.

## Using Octopus.Client from installation folder {#using-octopus-client-from-install-folder}

Octopus Server and Tentacle both ship with a version of `Octopus.Client.dll` in the installation directory. Avoid using this in your scripts, as it is considered an implementation detail of those products. As such, it is subject to change at any time and is not guaranteed to work with your version of Octopus Server.

# Octopus.Server.exe command line

Source: https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line.md

**Octopus.Server.exe** is the executable that runs the Octopus Server instance. It includes many helpful commands that allow you to manage the instance, including authentication, configuration, diagnostics, and running the service.

## Commands {#octopus.server.exeCommandLine-Commands}

`octopus.server.exe` supports the following commands:

- **[admin](/docs/octopus-rest-api/octopus.server.exe-command-line/admin)**: Reset admin user passwords, re-enable them, and ensure they are in the admin group.
- **[builtin-worker](/docs/octopus-rest-api/octopus.server.exe-command-line/builtin-worker)**: Configure the built-in worker used to run deployment actions and scripts on the Octopus Server.
- **[checkservices](/docs/octopus-rest-api/octopus.server.exe-command-line/checkservices)**: Checks the Octopus instances are running.
- **[configure](/docs/octopus-rest-api/octopus.server.exe-command-line/configure)**: Configure this Octopus instance.
- **[create-instance](/docs/octopus-rest-api/octopus.server.exe-command-line/create-instance)**: Registers a new instance of the Octopus service.
- **[database](/docs/octopus-rest-api/octopus.server.exe-command-line/database)**: Create, drop or configure the Octopus database.
- **[delete-instance](/docs/octopus-rest-api/octopus.server.exe-command-line/delete-instance)**: Deletes an instance of the Octopus service.
- **[export-certificate](/docs/octopus-rest-api/octopus.server.exe-command-line/export-certificate)**: Exports the certificate that Octopus Server can use to authenticate itself with its Tentacles.
- **[import-certificate](/docs/octopus-rest-api/octopus.server.exe-command-line/import-certificate)**: Replace the certificate that Octopus Server uses to authenticate itself with its Tentacles.
- **[license](/docs/octopus-rest-api/octopus.server.exe-command-line/license)**: Import a license key.
- **[list-instances](/docs/octopus-rest-api/octopus.server.exe-command-line/list-instances)**: Lists all installed Octopus instances.
- **[lost-master-key](/docs/octopus-rest-api/octopus.server.exe-command-line/lost-master-key)**: Get your Octopus Server working again after losing your Master Key.
- **[new-certificate](/docs/octopus-rest-api/octopus.server.exe-command-line/new-certificate)**: Creates a new certificate that Octopus Server can use to authenticate itself with its Tentacles.
- **[node](/docs/octopus-rest-api/octopus.server.exe-command-line/node)**: Configure settings related to this Octopus Server node.
- **[path](/docs/octopus-rest-api/octopus.server.exe-command-line/path)**: Set the file paths that Octopus will use for storage.
- **[proxy](/docs/octopus-rest-api/octopus.server.exe-command-line/proxy)**: Configure the HTTP proxy used by Octopus.
- **[rotate-master-key](/docs/octopus-rest-api/octopus.server.exe-command-line/rotate-master-key)**: Rotate the Master Key on your Octopus Server and re-encrypt all sensitive data.
- **[run](/docs/octopus-rest-api/octopus.server.exe-command-line/run)**: Starts the Octopus Server in debug mode.
- **[service](/docs/octopus-rest-api/octopus.server.exe-command-line/service)**: Start, stop, install and configure the Octopus service.
- **[set-master-key](/docs/octopus-rest-api/octopus.server.exe-command-line/set-master-key)**: Set the Master Key on your Octopus Server after rotating the database.
- **[show-configuration](/docs/octopus-rest-api/octopus.server.exe-command-line/show-configuration)**: Outputs the server configuration.
- **[show-master-key](/docs/octopus-rest-api/octopus.server.exe-command-line/show-master-key)**: Print the server's Master Encryption Key, so that it can be backed up.
- **[show-thumbprint](/docs/octopus-rest-api/octopus.server.exe-command-line/show-thumbprint)**: Shows the squid and thumbprint of the server instance.
- **[ssl-certificate](/docs/octopus-rest-api/octopus.server.exe-command-line/ssl-certificate)**: Binds the SSL/TLS certificate used by the portal to the specified address/port.
- **[version](/docs/octopus-rest-api/octopus.server.exe-command-line/version)**: Show the Octopus Server version information.
- **[watchdog](/docs/octopus-rest-api/octopus.server.exe-command-line/watchdog)**: Configure a scheduled task to monitor the Octopus service(s).

## General usage {#Octopus.Server.exeCommandLine-Generalusage}

All commands take the form of:

```powershell
Octopus.Server []
```

To get help for a specific command use:

```powershell
Octopus.Server --help
```

# Export certificates

Source: https://octopus.com/docs/octopus-rest-api/octopus.server.exe-command-line/export-certificate.md

Use the export certificate command to back up the certificate that Octopus Server uses to authenticate itself with its Tentacles.

## Export certificate options

```bash
Usage: Octopus.Server export-certificate []

Where [] is any of:
      --instance=VALUE       Name of the instance to use
      --config=VALUE         Configuration file to use
      --export-pfx=VALUE     The filename to which to export the certificate
      --pfx-password=VALUE   The password to use for the exported pfx file
      --type=VALUE           Sets which certificate will be exported. Valid options are: 'tentacle' or 'grpc'.
                             Default: 'tentacle'

Or one of the common options:
      --help                 Show detailed help for this command
```

:::div{.hint}
The `--type` parameter is only available in versions `>= 2025.4`.
:::

## Basic examples

### Exporting Tentacle certificate

This example exports the certificate that the Octopus Server instance named `OctopusServer` uses to authenticate itself with its [Tentacles](/docs/infrastructure/deployment-targets/tentacle/windows):

```bash
octopus.server export-certificate --instance="OctopusServer" --export-pfx="C:\temp\OctopusServer-certificate.pfx" --pfx-password="your-secret-password"
```

### Exporting gRPC certificate

This example exports the certificate that the Octopus Server instance named `OctopusServer` uses to authenticate itself with its [Kubernetes Monitors](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) and [Argo CD Gateways](/docs/argo-cd/instances):

```bash
octopus.server export-certificate --instance="OctopusServer" --export-pfx="C:\temp\OctopusServer-certificate.pfx" --pfx-password="your-secret-password" --type="grpc"
```

# Codefresh Pipelines

Source: https://octopus.com/docs/packaging-applications/build-servers/codefresh-pipelines.md

Codefresh is a Docker-native CI/CD platform. [Codefresh Pipelines](https://codefresh.io/docs/docs/pipelines/introduction-to-codefresh-pipelines/) are workflows that form Codefresh's continuous integration (CI) platform.

# Integrating with Codefresh Pipelines

Codefresh pipelines allow you to customize steps to create, deploy and promote releases to your Octopus Deploy [environments](/docs/infrastructure/environments/). The steps do this by running the [Octopus CLI](/docs/octopus-rest-api/octopus-cli) inside a Docker container.
Octopus Deploy has several custom pipeline steps available:

- [Log into Octopus](https://codefresh.io/steps/step/octopusdeploy-login)
- [Create a package](https://codefresh.io/steps/step/octopusdeploy-create-package)
- [Push a package](https://codefresh.io/steps/step/octopusdeploy-push-package)
- [Create a release](https://codefresh.io/steps/step/octopusdeploy-create-release)
- [Deploy a release](https://codefresh.io/steps/step/octopusdeploy-deploy-release)
- [Deploy a tenanted release](https://codefresh.io/steps/step/octopusdeploy%2Fdeploy-release-tenanted)
- [Run a runbook](https://codefresh.io/steps/step/octopusdeploy-run-runbook)
- [Push build information](https://codefresh.io/steps/step/octopusdeploy-push-build-information)

## Codefresh Pipeline step configuration

When creating your first Codefresh Pipeline, the pipeline workflow can be defined in the Codefresh UI or within a git-based repository. The workflow YAML defines the steps to run and any arguments required to run each step.
The details of an Octopus instance are required to run all Octopus Codefresh steps:

| Variable name | Description |
|------------------------|---------------------------------------------------------------------------------------------------------------------|
| `OCTOPUS_URL` | The Octopus Server URL you wish to run your steps on |
| `OCTOPUS_API_KEY` | The Octopus Deploy API Key required for authentication |
| `OCTOPUS_ACCESS_TOKEN` | This value is set by the **octopusdeploy-login** step, and should be passed as an argument to all following steps |
| `AUDIENCE` | The Octopus Deploy audience or service account ID required for authentication |
| `OCTOPUS_SPACE` | The Space to run steps on |

### Authentication to Octopus Server

The following steps require Octopus Server authentication:

- [Push a package](https://codefresh.io/steps/step/octopusdeploy-push-package)
- [Create a release](https://codefresh.io/steps/step/octopusdeploy-create-release)
- [Deploy a release](https://codefresh.io/steps/step/octopusdeploy-deploy-release)
- [Deploy a tenanted release](https://codefresh.io/steps/step/octopusdeploy%2Fdeploy-release-tenanted)
- [Run a runbook](https://codefresh.io/steps/step/octopusdeploy-run-runbook)
- [Push build information](https://codefresh.io/steps/step/octopusdeploy-push-build-information)

There are two options for authentication. You can:

1. Use the [Log into Octopus step](https://codefresh.io/steps/step/octopusdeploy-login) and provide `OCTOPUS_ACCESS_TOKEN` as an argument for each step.
2. Skip the login step and provide an `OCTOPUS_API_KEY` as an argument for each step.

## Codefresh variables

It is recommended to use Codefresh variables to set the `OCTOPUS_URL` and an encrypted variable to set the `AUDIENCE`. This way, you can simply insert the variable for all Octopus Deploy steps in your workflow. These can be set by clicking **Add Variable** from the **Variable** menu of your Codefresh Pipeline. Enter your variable name and value.
To insert the variable in your workflow, use the Codefresh variable syntax `${{YOUR_VARIABLE_NAME}}`.

:::figure
![Use variables in your Codefresh workflow](/docs/img/packaging-applications/build-servers/codefresh-pipelines/codefresh-variables.png)
:::

For more details on Codefresh pipeline variables, see the Codefresh documentation on [Variables in pipelines](https://codefresh.io/docs/docs/pipelines/variables/).

## Codefresh encrypted variables

To store sensitive information such as Octopus Deploy API keys, you can use Codefresh's encrypted variables in your workflow. To encrypt the variable, click on the lock next to the variable value.

:::figure
![Encrypt variables in your Codefresh workflow](/docs/img/packaging-applications/build-servers/codefresh-pipelines/codefresh-variables-encrypt.png)
:::

## Triggering a build

A build can be triggered in a few different ways, such as:

- Push commits
- Pull requests
- On-demand

And others, depending on your git provider. For details on supported git trigger events and how to configure them, see the [Codefresh documentation on Git Triggers](https://codefresh.io/docs/docs/pipelines/triggers/git-triggers/).

# Codefresh Pipeline stages

Codefresh Pipelines are workflows that consist of ***steps***. By default, the Codefresh execution engine executes steps sequentially, starting from the first step defined in the `codefresh.yml` file. See the Codefresh documentation to [configure parallel steps in your pipeline](https://codefresh.io/docs/docs/pipelines/advanced-workflows/).

Before defining the steps in your workflow, you can organize them into ***stages***. You can then assign a stage to each of the steps in your pipeline. Stages are groups used to define how the steps will be visualized in the UI, and have no effect on the execution of the steps.
```yaml
version: "1.0"
stages:
  - "Deploy project"
  - "Run the runbook"
steps:
  create-release:
    type: octopusdeploy-create-release
    stage: "Deploy project"
    arguments:
      ...
  deploy:
    type: octopusdeploy-deploy-release
    stage: "Deploy project"
    arguments:
      ...
  run-runbook:
    type: octopusdeploy-run-runbook
    stage: "Run the runbook"
    arguments:
      ...
```

# Example Pipeline builds

The following examples demonstrate a Codefresh Pipeline build of an application sourced from GitHub.

## When using the Login step

To build and deploy this application, you'll need the following steps:

- Clone the source code
- Obtain OIDC token (available from the [Codefresh Marketplace](https://codefresh.io/steps/))
- Login
- Create a package
- Push package to Octopus Deploy instance
- Create a release for an existing project (get started with the basics of [setting up a project](/docs/projects/setting-up-projects))
- Deploy

Below is an example Codefresh Pipeline workflow which includes these steps:
```yaml
version: "1.0"
stages:
  - "login"
  - "build and push"
  - "deploy"
steps:
  clone:
    title: "Cloning repository"
    type: "git-clone"
    stage: "build and push"
    repo: <>
    revision: "main"
    working_directory: "/codefresh/volume"
    credentials:
      username: ${{GITHUB_USERNAME}}
      password: ${{GITHUB_PASSWORD}}
  obtain_id_token:
    title: Obtain ID Token
    type: obtain-oidc-id-token
    stage: "login"
  login:
    type: octopusdeploy-login
    title: Login
    stage: "login"
    arguments:
      # ID_TOKEN is set as an environment variable by the obtain_id_token step
      ID_TOKEN: '${{ID_TOKEN}}'
      OCTOPUS_URL: "https://example.octopustest.app/"
      OCTOPUS_SERVICE_ACCOUNT_ID: <>
  create-package:
    title: "Create package"
    type: octopusdeploy-create-package
    stage: "build and push"
    arguments:
      ID: "Hello"
      VERSION: "1.0.0-${{CF_BUILD_ID}}"
      BASE_PATH: "/codefresh/volume"
      OUT_FOLDER: "/codefresh/volume"
  push-package:
    title: "Push package"
    type: octopusdeploy-push-package
    stage: "build and push"
    arguments:
      # OCTOPUS_ACCESS_TOKEN is set as an environment variable by the octopusdeploy-login step
      OCTOPUS_ACCESS_TOKEN: ${{OCTOPUS_ACCESS_TOKEN}}
      OCTOPUS_URL: ${{OCTOPUS_URL}}
      OCTOPUS_SPACE: "Spaces-42"
      PACKAGES:
        - "/codefresh/volume/Hello.1.0.0-${{CF_BUILD_ID}}.zip"
      OVERWRITE_MODE: 'overwrite'
  create-release:
    type: octopusdeploy-create-release
    title: "Create release"
    stage: "deploy"
    arguments:
      OCTOPUS_ACCESS_TOKEN: ${{OCTOPUS_ACCESS_TOKEN}}
      OCTOPUS_URL: ${{OCTOPUS_URL}}
      OCTOPUS_SPACE: "Spaces-42"
      PROJECT: "Demo Project"
      RELEASE_NUMBER: "1.0.0-${{CF_BUILD_ID}}"
      PACKAGES:
        - "Hello:1.0.0-${{CF_BUILD_ID}}"
      RELEASE_NOTES: This is a release note
  deploy:
    type: octopusdeploy-deploy-release
    title: "Deploy release"
    stage: "deploy"
    arguments:
      OCTOPUS_ACCESS_TOKEN: ${{OCTOPUS_ACCESS_TOKEN}}
      OCTOPUS_URL: ${{OCTOPUS_URL}}
      OCTOPUS_SPACE: "Spaces-42"
      PROJECT: "Demo Project"
      RELEASE_NUMBER: "1.0.0-${{CF_BUILD_ID}}"
      ENVIRONMENTS:
        - "Development"
```
## When using an API key

To build and deploy this application, you'll need the following steps:

- Clone the source code
- Create a package
- Push package to Octopus Deploy instance
- Create a release for an existing project (get started with the basics of [setting up a project](/docs/projects/setting-up-projects))
- Deploy

Below is an example Codefresh Pipeline workflow which includes these steps:
```yaml
version: "1.0"
stages:
  - "build and push"
  - "deploy"
steps:
  clone:
    title: "Cloning repository"
    type: "git-clone"
    stage: "build and push"
    repo: <>
    revision: "main"
    working_directory: "/codefresh/volume"
    credentials:
      username: ${{GITHUB_USERNAME}}
      password: ${{GITHUB_PASSWORD}}
  create-package:
    title: "Create package"
    type: octopusdeploy-create-package
    stage: "build and push"
    arguments:
      ID: "Hello"
      VERSION: "1.0.0-${{CF_BUILD_ID}}"
      BASE_PATH: "/codefresh/volume"
      OUT_FOLDER: "/codefresh/volume"
  push-package:
    title: "Push package"
    type: octopusdeploy-push-package
    stage: "build and push"
    arguments:
      OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
      OCTOPUS_URL: ${{OCTOPUS_URL}}
      OCTOPUS_SPACE: "Spaces-42"
      PACKAGES:
        - "/codefresh/volume/Hello.1.0.0-${{CF_BUILD_ID}}.zip"
      OVERWRITE_MODE: 'overwrite'
  create-release:
    type: octopusdeploy-create-release
    title: "Create release"
    stage: "deploy"
    arguments:
      OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
      OCTOPUS_URL: ${{OCTOPUS_URL}}
      OCTOPUS_SPACE: "Spaces-42"
      PROJECT: "Demo Project"
      RELEASE_NUMBER: "1.0.0-${{CF_BUILD_ID}}"
      PACKAGES:
        - "Hello:1.0.0-${{CF_BUILD_ID}}"
      RELEASE_NOTES: This is a release note
  deploy:
    type: octopusdeploy-deploy-release
    title: "Deploy release"
    stage: "deploy"
    arguments:
      OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
      OCTOPUS_URL: ${{OCTOPUS_URL}}
      OCTOPUS_SPACE: "Spaces-42"
      PROJECT: "Demo Project"
      RELEASE_NUMBER: "1.0.0-${{CF_BUILD_ID}}"
      ENVIRONMENTS:
        - "Development"
```
# Octopus Deploy steps

Octopus Deploy steps and examples are available from the [Codefresh Marketplace](https://codefresh.io/steps/). Each step includes one or two examples to help with setting up a workflow. Basic examples include only required arguments, and complex examples include both required and optional arguments.

## Log into Octopus

The **octopusdeploy-login** step authenticates to Octopus via OIDC, so your Octopus server needs a [service account with OIDC enabled](/docs/octopus-rest-api/openid-connect/other-issuers). To allow connections from Codefresh, the service account's OIDC identity should have **Issuer** `https://oidc.codefresh.io` and a **Subject** matching the [Codefresh subject claim for your preferred pipeline trigger](https://codefresh.io/docs/docs/integrations/oidc-pipelines/#codefresh-trigger-types-for-subject-claims).

The **octopusdeploy-login** step requires an `ID_TOKEN`, which can be generated by running the Codefresh **obtain-oidc-id-token** Marketplace step. This step sets the token as an environment variable which can be passed into the Octopus login step as an argument. See the [Codefresh OIDC documentation](https://codefresh.io/docs/docs/integrations/oidc-pipelines/) for further details.

```yaml
login:
  type: octopusdeploy-login
  arguments:
    ID_TOKEN: '${{ID_TOKEN}}'
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SERVICE_ACCOUNT_ID: '${{OCTOPUS_SERVICE_ACCOUNT_ID}}'
```

This step returns `OCTOPUS_ACCESS_TOKEN` as a string, which should be passed into subsequent steps to authenticate.

## Package artifacts

Create zip packages of your deployment artifacts by using the **octopusdeploy-create-package** step. Specify the files to include in each package, the location of those files, and the details of the artifact to create.
The following step packages all `.txt` files in the `/codefresh/volume` directory into the zip file `/codefresh/volume/Fresh.1.0.0.zip`:

```yaml
create-package:
  title: "Create package"
  type: octopusdeploy-create-package
  arguments:
    ID: "Fresh"
    VERSION: "1.0.0"
    BASE_PATH: "/codefresh/volume"
    OUT_FOLDER: "/codefresh/volume"
    INCLUDE:
      - "*.txt"
```

This step returns a JSON object with the property `Path`.

## Push packages to Octopus Server

Once the artifacts are packaged, use the **octopusdeploy-push-package** step to push the packages to the Octopus Server built-in repository:

```yaml
push-package:
  type: octopusdeploy-push-package
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: "Default"
    PACKAGES:
      - "/codefresh/volume/Fresh.1.0.0.zip"
```

This step has no output.

## Create a release

To create a release, use the **octopusdeploy-create-release** step. Provide the details for your Octopus instance, and the project you would like to create a release for:

```yaml
create-release:
  type: octopusdeploy-create-release
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: "Default"
    PROJECT: "Project Name"
```

Optional arguments help to customize the creation of the release.
You can specify version control details, select packages and provide release notes:

```yaml
create-release:
  type: octopusdeploy-create-release
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: "Default"
    PROJECT: "Project Name"
    RELEASE_NUMBER: "1.0.0-hotfix1"
    CHANNEL: "Hotfix"
    GIT_REF: "refs/heads/main"
    PACKAGES:
      - "Sample:1.0.0-hotfix1"
    RELEASE_NOTES: This is a release note
```

This returns a JSON object with the properties `Channel` and `Version` for the release that was created.

## Deploy a release

To deploy a release, use the **octopusdeploy-deploy-release** step. Provide details for your Octopus instance, and the project and release you want to deploy:

```yaml
deploy-release:
  type: octopusdeploy-deploy-release
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: "Default"
    PROJECT: "Project Name"
    RELEASE_NUMBER: "0.0.1"
    ENVIRONMENTS:
      - "Development"
```

Additionally, you can provide optional arguments to specify guided failure mode and variables:

```yaml
deploy-release:
  type: octopusdeploy-deploy-release
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: "Default"
    PROJECT: "Project Name"
    RELEASE_NUMBER: "0.0.1"
    ENVIRONMENTS:
      - "Development"
    VARIABLES:
      - "Greeting:Hello"
    USE_GUIDED_FAILURE: "false"
```

This returns a JSON array of created deployments, with properties `DeploymentId` and `ServerTaskId`.

## Deploy a tenanted release

To deploy a tenanted release, use the **octopusdeploy-deploy-release-tenanted** step. Provide the details for your Octopus instance, and the tenants you want to deploy to. You will need to provide either tenants or tenant tags.
To deploy an untenanted release, use the **octopusdeploy-deploy-release** step.

```yaml
deploy-release-tenanted:
  type: octopusdeploy-deploy-release-tenanted
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: Spaces 1
    PROJECT: Project Name
    RELEASE_NUMBER: 5.0.0
    ENVIRONMENT: Development
    TENANTS:
      - Tenant 1
```

Optional arguments help to customize the deployment of the release. You can specify prompted variable values, tenants, tenant tags, and guided failure mode.

```yaml
deploy-release-tenanted:
  type: octopusdeploy-deploy-release-tenanted
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: Spaces 1
    PROJECT: Project Name
    RELEASE_NUMBER: 5.0.0
    ENVIRONMENT: Development
    VARIABLES:
      - 'LabelA:ValueA'
    TENANT_TAGS:
      - tagSetA/someTagB
      - tagSetC/someTagD
    USE_GUIDED_FAILURE: false
```

This returns a JSON array of created deployments, with properties `DeploymentId` and `ServerTaskId`.

## Run a runbook

To run a runbook, use the **octopusdeploy-run-runbook** step. Provide the name of the runbook that you want to run, as well as the project and environment name(s).

```yaml
run-runbook:
  type: octopusdeploy-run-runbook
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: Spaces 1
    PROJECT: Project Name
    NAME: Runbook Name
    ENVIRONMENTS:
      - Development
      - Production
```

Optional arguments include variables to use within the runbook, the option to run for specific tenants or tenant tags, as well as the option to use guided failure mode.
```yaml
run-runbook:
  type: octopusdeploy-run-runbook
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: Spaces 1
    PROJECT: Project Name
    NAME: Runbook Name
    ENVIRONMENTS:
      - Development
      - Production
    VARIABLES:
      - 'Label:Value'
    TENANTS:
      - Tenant 1
    TENANT_TAGS:
      - Tenant tag 1
    USE_GUIDED_FAILURE: 'false'
```

This returns a JSON array of created runbook runs, with properties `RunbookRunId` and `ServerTaskId`.

## Push build information

To push build information for a project, use the **octopusdeploy-push-build-information** step. Provide a list of packages that need build information, a build information JSON file and a version number. By default, the step will fail if build information already exists, but this can be configured using the `OVERWRITE_MODE` option (`fail`, `overwrite`, or `ignore`).

```yaml
push-build-information:
  type: octopusdeploy-push-build-information
  arguments:
    OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}' # Option to replace with OCTOPUS_API_KEY: ${{OCTOPUS_API_KEY}}
    OCTOPUS_URL: '${{OCTOPUS_URL}}'
    OCTOPUS_SPACE: Spaces 1
    PACKAGE_IDS:
      - SomePackage
      - SomeOtherPackage
    FILE: SomeFile.json
    VERSION: 1.0.0
    OVERWRITE_MODE: fail
```

Sample build information JSON file:

```json
{
  "BuildEnvironment": "BitBucket",
  "Branch": "main",
  "BuildNumber": "288",
  "BuildUrl": "https://bitbucket.org/octopussamples/petclinic/addon/pipelines/home#!/results/288",
  "VcsType": "Git",
  "VcsRoot": "http://bitbucket.org/octopussamples/petclinic",
  "VcsCommitNumber": "12345",
  "Commits": [
    {
      "Id": "12345",
      "Comment": "Sample commit message"
    }
  ]
}
```

This step has no output.

# Error handling

Codefresh provides inbuilt error handling for all steps. Retry of failed steps is enabled using the `retry` settings. See the [Codefresh documentation on retrying a step](https://codefresh.io/docs/docs/pipelines/what-is-the-codefresh-yaml/#retrying-a-step) for more details.
```yaml
version: "1.0"
stages:
  - "Login"
  - "Deploy project"
steps:
  obtain_id_token:
    title: Obtain ID Token
    type: obtain-oidc-id-token
    stage: "Login"
  login:
    type: octopusdeploy-login
    title: Login
    stage: "Login"
    arguments:
      # ID_TOKEN is set as an environment variable by the obtain_id_token step
      ID_TOKEN: '${{ID_TOKEN}}'
      OCTOPUS_URL: "https://example.octopustest.app/"
      OCTOPUS_SERVICE_ACCOUNT_ID: <>
  deploy:
    type: octopusdeploy-deploy-release
    stage: "Deploy project"
    retry:
      maxAttempts: 5
      delay: 5
      exponentialFactor: 2
    arguments:
      # OCTOPUS_ACCESS_TOKEN is set as an environment variable by the octopusdeploy/login step
      OCTOPUS_ACCESS_TOKEN: '${{OCTOPUS_ACCESS_TOKEN}}'
      OCTOPUS_URL: "https://example.octopustest.app/"
      OCTOPUS_SPACE: "Spaces-1"
      PROJECT: "Create Release Test"
      RELEASE_NUMBER: "1.0.2"
      ENVIRONMENTS:
        - "Development"
```

# Continua CI

Source: https://octopus.com/docs/packaging-applications/build-servers/continua-ci.md

[Continua CI](http://www.finalbuilder.com/continua-ci) is a continuous integration server from the makers of FinalBuilder. Version 1.5 adds special support for Octopus Deploy.

:::figure
![](/docs/img/packaging-applications/build-servers/images/3278149.png)
:::

Learn more about [integrating Continua CI with Octopus Deploy](http://www.finalbuilder.com/resources/blogs/postid/712/deployment-with-continua-ci-and-octopus-deploy).

# Docker Hub

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/container-registries/docker-hub.md

The default Docker Registry, which is maintained by the Docker organization, is the cloud-hosted [Docker Hub Registry](https://hub.docker.com/). This is the registry used by the Docker engine when it is first installed and you call `docker search`.

From September 5th 2022, the Docker Hub Registry is [deprecating v1 endpoints](https://www.docker.com/blog/docker-hub-v1-api-deprecation) to retrieve tags and images. The equivalent v2 endpoints require authentication.
Therefore, external feeds will require a username and password to access the Docker Hub API. Searching a non-official repository will also require you to provide your Docker Hub username and password. Searching for official public repositories does not require credentials.

:::div{.problem}
**DockerHub Private Repository Limitations**

By design, Docker Hub **does not support** [searching for private repositories](https://docs.docker.com/docker-hub/#/explore-repositories), even with valid credentials.

Additionally, while you will be able to search for a non-official repository, Docker Hub *will not return any tags for unofficial images*. If you are using an unofficial image, you will be able to select it when configuring your run step, but you will need to manually enter the version that you wish to deploy. So long as it exists in the registry, your Docker Engine will be able to pull it down.

The Docker Hub API endpoint `https://index.docker.io/v1` provides access to repositories with different levels of access:

| Repository | Shows In Search | Lists Tags |
| --- | --- | --- |
| Public + Official | Yes | Yes |
| Public + Unofficial | Yes | No |
| Private | No | No |

We suggest using an alternative registry when trying to manage your own private images. See here for more details on hosting your own [Private Registry](/docs/packaging-applications/package-repositories/docker-registries/#private-registry).
:::

## Adding Docker Hub as an Octopus External Feed

To use the Docker Hub registry in Octopus Deploy, create an external feed with the following settings:

- **Feed Type:** Docker Container Registry
- **Name:** DockerHub (or anything else that makes sense to you)
- **URL:** `https://index.docker.io`
- **Registry Path:** *leave blank*
- **Credentials:** Username and Password (the login for your Docker Hub account; this is required for accessing public repositories)

![Docker Hub Registry Feed](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/dockerhub-feed.png)

# Nexus Hosted NuGet repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/nuget-repositories/nexus-nuget-feed.md

Both Nexus OSS and Nexus Pro offer three types of NuGet repository: Hosted, Group, and Proxy. This guide will cover creating a Hosted NuGet repository and adding it as an External Feed in Octopus Deploy.

:::div{.info}
This guide was written using Nexus OSS version 3.37.0-01
:::

## Configuring a Hosted NuGet repository

From the Nexus web portal, click on the **gear icon** to get to the **Administration** screen.

:::figure
![Administration gear Icon](/docs/img/packaging-applications/package-repositories/guides/images/nexus-nuget-administration.png)
:::

Click on **Repositories**

:::figure
![Repositories](/docs/img/packaging-applications/package-repositories/guides/images/nexus-repositories.png)
:::

Click **Create repository**

:::figure
![Create repository](/docs/img/packaging-applications/package-repositories/guides/images/nexus-create-repository.png)
:::

Choose **nuget (hosted)** from the list of repositories to create

:::figure
![NuGet (hosted)](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/nexus-nuget-repository.png)
:::

Give the repository a name and change any applicable configuration options. Click **Create repository** when you are done.
:::figure
![Create repository](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/nexus-create-nuget-repository.png)
:::

When the repository has been created, click on the entry in the list to bring up the repository properties.

:::figure
![MyNexusNugetRepo](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/nexus-mynexusnugetrepo.png)
:::

Copy the URL property; this is the value you will use when adding the repository as an external feed.

:::figure
![Repository URL](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/nexus-nuget-url.png)
:::

Optionally, upload a NuGet package to the repository so you can verify search functionality once it has been added as an external feed.

## Adding a Nexus NuGet repository as an Octopus External Feed

Create a new Octopus feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and select the `NuGet Feed` feed type. Give the feed a name, and in the URL field, paste the URL you copied earlier. It should look similar to this format: `https://your.nexus.url/repository/[repository name]`

![Nexus NuGet feed](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/nexus-nuget-feed.png)

# NuGet feeds

Source: https://octopus.com/docs/packaging-applications/package-repositories/nuget-feeds.md

If you're using an external NuGet feed, you can register it with Octopus and use it as part of your deployments. Go to **Deploy ➜ Manage ➜ External Feeds**. You can add NuGet feeds by clicking the **Add feed** button. In the URL field, enter the HTTP/HTTPS URL to the feed, or the file share or local directory path. Then click **Save and test**.

:::figure
![](/docs/img/packaging-applications/package-repositories/images/add-external-feed.png)
:::

:::div{.info}
If you're using a file share or local directory path, your system administrator must enable this feature before you can proceed.

1. Navigate to **Configuration ➜ Features**.
2.
Expand the **Allow creation with Local or SMB paths** section by clicking on it.
3. Toggle the selection to either **Enabled** or **Disabled**, and click **SAVE**.
:::

On the test page, you can check whether the feed is working by searching for packages:

:::figure
![](/docs/img/packaging-applications/package-repositories/images/external-feed-search.png)
:::

Learn more about [hosting your own NuGet Feeds](https://docs.nuget.org/create/hosting-your-own-nuget-feeds).

:::div{.info}
Note: Local packages must be stored in a single folder. Octopus does not currently support hierarchical local NuGet feeds.
:::

## NuGet.Server performance

A popular external NuGet hosting option is **NuGet.Server**. However, be aware that it suffers from performance problems when dealing with large packages or large numbers of smaller packages. Users may report high CPU usage, timeouts when displaying package details, or memory issues. A great alternative that we recommend is [NuGet.Lucene](https://github.com/themotleyfool/NuGet.Lucene). The built-in NuGet server in Octopus stores metadata in SQL Server, and doesn't suffer from these performance issues.

## Troubleshooting NuGet feeds

- For network file shares, keep in mind that Octopus and Tentacle run under system accounts by default, which may not have access to the file share.
- NuGet.Server only allows 30MB packages [by default](https://help.octopus.com/t/30mb-default-maximum-nuget-package-size/3498).

A good first step for diagnosing NuGet feed issues is to ensure that the NuGet command line executable can access the same feed from the Octopus Server, or from the target machine if the `Each Tentacle will download the package directly from the remote server` option is selected. The following steps can be used to troubleshoot NuGet feeds.

Run the command:

```bash
nuget list -Source http://example.com/MyFeed/nuget/v3/index.json
```

replacing `http://example.com/MyFeed/nuget/v3/index.json` with your NuGet v3 feed URL.
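The same check can be reproduced without the NuGet CLI. The sketch below is illustrative only (the helper names and the sample index are assumptions, not part of Octopus or NuGet tooling); it fetches a NuGet v3 service index and locates its search service, which is roughly what `nuget list` does against a v3 feed:

```python
import json
import urllib.request

def find_resource(index, type_name):
    """Return the @id of the first resource in a NuGet v3 service index
    whose @type starts with type_name (e.g. 'SearchQueryService')."""
    for resource in index.get("resources", []):
        if resource.get("@type", "").startswith(type_name):
            return resource.get("@id")
    return None

def list_packages(index_url, take=5):
    """Fetch the service index, locate the search service, and return
    the first few package ids (hypothetical helper, requires network)."""
    with urllib.request.urlopen(index_url) as response:
        index = json.load(response)
    search_url = find_resource(index, "SearchQueryService")
    with urllib.request.urlopen(f"{search_url}?q=&take={take}") as response:
        results = json.load(response)
    return [package["id"] for package in results.get("data", [])]

# Offline demonstration against a minimal, hand-written service index:
sample_index = {
    "resources": [
        {"@id": "https://example.com/query", "@type": "SearchQueryService/3.0.0"}
    ]
}
print(find_resource(sample_index, "SearchQueryService"))
# prints https://example.com/query
```

If fetching the service index fails with a 404 or a DNS error, Octopus will hit the same problem when it tries to use the feed.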
The expected output of this command is a list of the packages in the repository. If this command prompts for credentials, then the feed is most likely private, and Octopus will need to be configured with the same credentials.

If the repository cannot be accessed, you will see an error like:

```text
Unable to load the service index for source http://example.com/MyFeed/nuget/v3/index.json.
```

along with additional details that can look like:

- Response status code does not indicate success: 404 (Not Found).
- An error occurred while sending the request. The remote name could not be resolved: 'hostname'.

These errors give you an indication as to why NuGet could not access the requested server.

# Community step templates

Source: https://octopus.com/docs/projects/community-step-templates.md

Community step templates are publicly available step templates contributed by the Octopus Community. They're third-party code licensed under [the Apache 2.0 license](https://github.com/OctopusDeploy/Library/blob/master/LICENSE.txt).

If you can't find a built-in step template that includes the actions you need, you should check the community step template library. There is a large and growing variety of step templates that can help you automate your deployments without writing any scripts yourself.

The Octopus Community step templates integration is enabled by default, but it can be disabled.

## Enable or disable community step templates integration

1. Navigate to **Configuration ➜ Features**.
2. Expand the **Octopus Community Step Template** section by clicking on it.
3. Toggle the selection to either **Enabled** or **Disabled**, and click **SAVE**.

## Community step template synchronization

The community step templates are synchronized with the Octopus Server. The synchronization process is executed as a standard Octopus task, and you can view its execution details from the **Tasks** area.
The Octopus Server synchronizes with the [Octopus Library](https://library.octopus.com/) on startup and then every 24 hours. The sync happens over the Internet, so it requires Internet access. If there are any updates or changes, the sync process retrieves all the step templates and stores the relevant community step templates in the Octopus database. Step templates are persisted locally, but they cannot be used in a deployment process until they are explicitly installed.

The Octopus Server uses a sync task to connect to [https://library.octopus.com/](https://library.octopus.com/) over HTTPS (port 443). If you don't see any Community Step Templates after enabling the feature, verify outbound traffic is enabled on port 443.

NOTE: The relevant permissions to install and manage step templates are `ActionTemplateCreate`, `ActionTemplateEdit`, `ActionTemplateView`, and `ActionTemplateDelete`.

## Adding community step templates

Unlike the built-in steps included in Octopus, you need to install Community Step Templates. There are three ways you can do this:

- As you define your deployment processes.
- From the **Library** area of the Octopus Web Portal.
- By importing them from the [Community Library](https://library.octopus.com/).

## Add a community step template as you define the deployment process

1. Navigate to your [project's](/docs/projects) overview page by selecting **Projects** and clicking on the project you are working with.
2. Click the **DEFINE YOUR DEPLOYMENT PROCESS** button, and click **ADD STEP**.
3. Scroll past the built-in step templates, and find the Community Step Template you want either by choosing from the available technologies or clicking **SHOW ALL**.
4. Before you install the template, you can click **VIEW DETAILS** to view the parameters of the step and the source code.
5. To install the step template, hover over the step template's card and click **INSTALL AND ADD** and **SAVE**.
After the step template has been installed, it will be available alongside the built-in step templates.

## Add a community step template from the Octopus library

1. In the Octopus Web Portal, navigate to **Deploy ➜ Manage ➜ Step Templates**.
2. Click **BROWSE LIBRARY**.
3. Find the Community Step Template you want either by choosing from the available technologies or clicking **SHOW ALL**.
4. Before you install the template, you can click **VIEW DETAILS** to view the parameters of the step and the source code.
5. To install the step template, hover over the step template's card and click **INSTALL** and **SAVE**.

After the step template has been installed, it will be available alongside the built-in step templates.

## Import a community step template from the community library

If the Community Step Template feature has been disabled, you can still use community step templates by manually importing the JSON file (which contains all information required by Octopus) from the [Community Library](https://library.octopus.com/) into the step template library in Octopus.

1. Navigate to the [Community Library](https://library.octopus.com/) website.
2. Find the template you want to use, review the details, and click the **Copy to clipboard** button.
3. Navigate to **Library ➜ Step Templates** in the Octopus Web Portal and select **Import** from the custom step templates section.
4. Paste the JSON document for the Step Template into the import window and click **SAVE**.

After the step template has been installed, it will be available alongside the built-in step templates.

## Adding an updated version of a community step template

Sometimes updates are available for step templates. In this case, you will notice the step template has an option to update the step. If you select update, you will be taken to the community step details with the option to update to the latest version of the step template. Community step templates can also be updated from the library as needed.
## Raising issues with a community step template

Issues can occur with community step templates, just as they can with built-in steps. That might be due to a deprecated technology or library used in a step, an untested scenario, or something as simple as a typo in a script.

If you run into any problems with a community step template, don't worry - [we are always here to help!](https://octopus.com/support)

Our community step templates live in our [Library repository](https://github.com/OctopusDeploy/Library) on GitHub. If you're familiar with GitHub, you can raise an [issue](https://github.com/OctopusDeploy/Library/issues), and a member of the Octopus team will triage it and work with you to get it resolved. In addition, as the code is open-source, you can also submit a [pull request](https://github.com/OctopusDeploy/Library/pulls) to fix an issue. We have [contributing guidelines](https://github.com/OctopusDeploy/Library/blob/master/.github/CONTRIBUTING.md) that we recommend reading before submitting a change.

## Security

Community step templates are created, updated, and fixed by the Octopus team and the Octopus community. The Octopus team reviews all contributions before they are added to the Octopus library, to ensure that each step template only does what it is designed to do and nothing malicious.

# Troubleshooting .NET configuration transforms

Source: https://octopus.com/docs/projects/steps/configuration-features/configuration-transforms/troubleshooting-configuration-transforms.md

If you're new to .NET configuration transformation, first check that the package(s) that are part of the deployment are structured and contain what you expect. Following on from that, review the deployment logs and the output of the package(s) on your deployment targets to investigate any unexpected behavior.
While you set this up for the first time, you can try using the `Octopus.Action.Package.TreatConfigTransformationWarningsAsErrors` variable defined in the [System Variables](/docs/projects/variables/system-variables) section of the documentation.

## Advanced .NET configuration transforms examples

.NET configuration transforms can sometimes be complicated to set up. As a general rule, it's best to have both the configuration file and the transform file in the same directory; however, this is not always achievable. This page lists the supported scenarios and the transform definitions required to apply the transform.

## Supported scenarios {#AdvancedConfigurationTransformsExamples-Supportedscenarios}




| Transform \ Target | Absolute Path | Relative Path | Filename | Wildcard Prefixed Absolute Path | Wildcard Prefixed Relative Path | Wildcard Prefixed Filename |
| --- | --- | --- | --- | --- | --- | --- |
| Absolute Path | not supported | not supported | Example | not supported | Example | Example |
| Relative Path | not supported | Example | Example | not supported | Example | Example |
| Filename | not supported | Example | Example | not supported | Example | Example |
| Wildcard Absolute Path | not supported | not supported | Example | not supported | Example | Example |
| Wildcard Relative Path | not supported | Example | Example | not supported | Example | Example |
| Wildcard Filename | not supported | Example | Example | not supported | Example | Example |
:::div{.hint}
**Wildcard support**
Please note that wildcards can be used anywhere in the transform filename (eg `*.mytransform.config` or `web.*.config`), but can only be used at the start of the target filename (eg `*.mytransform.config`, but **not** `web.*.config`).
:::

:::div{.hint}
**Enable detailed transform diagnostics logging**
To enable detailed logging of the process that searches for config transformations, add the variable `Octopus.Action.Package.EnableDiagnosticsConfigTransformationLogging` and set its value to `True`.
:::

## Transform and target are in the same directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Transformandtargetareinthesamedirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─web.config
└─web.mytransform.config
```

Then the transform **web.mytransform.config => web.config** will:

- Apply the transform **web.mytransform.config** to file **web.config**.

## Applying a transform against a target in a different folder {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingatransformagainstatargetinadifferentfolder}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─config
| └─web.config
└─web.mytransform.config
```

Then the transform **web.mytransform.config => config\web.config** will:

- Apply the transform **web.mytransform.config** to file **config\web.config**.

## Transform and multiple targets are in the same directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Transformandmultipletargetsareinthesamedirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─app.config
├─connstrings.mytransform.config
└─web.config
```

Then the transform **connstrings.mytransform.config => \*.config** will:

- Apply the transform **connstrings.mytransform.config** to file **web.config**.
- Apply the transform **connstrings.mytransform.config** to file **app.config**.

## Applying a transform against multiple targets in a different directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingatransformagainstmultipletargetsinadifferentdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─config
| ├─app.config
| └─web.config
└─connstrings.mytransform.config
```

Then the transform **connstrings.mytransform.config => config\\*.config** will:

- Apply the transform **connstrings.mytransform.config** to file **config\web.config**.
- Apply the transform **connstrings.mytransform.config** to file **config\app.config**.

## Using an absolute path to the transform {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Usinganabsolutepathtothetransform}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─subdir
| └─web.config
└─web.config
```

And the following files exist:

```powershell
c:\
└─transforms
  └─web.mytransform.config
```

Then the transform **c:\transforms\web.mytransform.config** => **web.config** will:

- Apply the transform **c:\transforms\web.mytransform.config** to file **web.config**.
- Apply the transform **c:\transforms\web.mytransform.config** to file **subdir\web.config**.
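The pairing rules illustrated in these examples can be approximated in a few lines. The following sketch is illustrative only (it is not Calamari's implementation, and `match_transform_definition` is a hypothetical helper); it pairs a `transform => target` definition against the files of an extracted package, excluding transform files themselves from the target list:

```python
from fnmatch import fnmatch

def match_transform_definition(definition, files):
    """Pair a 'transform => target' definition with package files.
    Hypothetical sketch: Calamari additionally restricts target
    wildcards to the start of the filename, which is not enforced here."""
    transform_pattern, target_pattern = (p.strip() for p in definition.split("=>"))
    transforms = [f for f in files if fnmatch(f, transform_pattern)]
    targets = [
        f for f in files
        if fnmatch(f, target_pattern) and not fnmatch(f, transform_pattern)
    ]
    # Every matching transform is applied to every matching target.
    return [(t, target) for t in transforms for target in targets]

files = ["app.config", "connstrings.mytransform.config", "web.config"]
print(match_transform_definition("connstrings.mytransform.config => *.config", files))
# [('connstrings.mytransform.config', 'app.config'), ('connstrings.mytransform.config', 'web.config')]
```

Running it against the "transform and multiple targets" layout above reproduces the documented pairings: the single transform applies to both `app.config` and `web.config`.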
## Applying a transform with an absolute path to a target in the extraction path root {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Usinganabsolutepathtothetransformxtractiondirectoryroot}

:::div{.hint}
This transform is available in **Octopus Server 3.8.8** (Calamari 3.6.43) or later
:::

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─subdir
| └─web.config
└─web.config
```

And the following files exist:

```powershell
c:\
└─transforms
  └─web.mytransform.config
```

Then the transform **c:\transforms\web.mytransform.config => .\web.config** will:

- Apply the transform **c:\transforms\web.mytransform.config** to file **web.config**.

## Applying a transform with an absolute path to a target relative to the extraction path {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-UsinganabsolutepathtothetransformRelativetoextractiondirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─subdir
| └─web.config
└─web.config
```

And the following files exist:

```powershell
c:\
└─transforms
  └─web.mytransform.config
```

Then the transform **c:\transforms\web.mytransform.config => .\subdir\web.config** will:

- Apply the transform **c:\transforms\web.mytransform.config** to file **subdir\web.config**.

## Applying a transform with an absolute path against multiple files in a different directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingatransformwithanabsolutepathagainstmultiplefilesinadifferentdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
└─config
  ├─app.config
  └─web.config
```

And the following files exist:

```powershell
c:\
└─transforms
  └─connstrings.mytransform.config
```

Then the transform **c:\transforms\connstrings.mytransform.config => config\\*.config** will:

- Apply the transform **c:\transforms\connstrings.mytransform.config** to file **config\web.config**.
- Apply the transform **c:\transforms\connstrings.mytransform.config** to file **config\app.config**.

## Using an absolute path to the transform, and applying it against multiple files {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Usinganabsolutepathtothetransformandapplyingitagainstmultiplefiles}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─app.config
└─web.config
```

And the following files exist:

```powershell
c:\
└─transforms
  └─connstrings.mytransform.config
```

Then the transform **c:\transforms\connstrings.mytransform.config => \*.config** will:

- Apply the transform **c:\transforms\connstrings.mytransform.config** to file **web.config**.
- Apply the transform **c:\transforms\connstrings.mytransform.config** to file **app.config**.

## Applying a transform from a different directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-ApplyingatransformfromadifferentdirectoryApplyingatransformfromadifferentdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─transforms
| └─web.mytransform.config
└─web.config
```

Then the transform **transforms\web.mytransform.config => web.config** will:

- Apply the transform **transforms\web.mytransform.config** to file **web.config**.

## Applying a transform to a target in a sibling directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingatransformtoatargetinasiblingdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─config
| └─web.config
└─transforms
  └─web.mytransform.config
```

Then the transform **transforms\web.mytransform.config => config\web.config** will:

- Apply the transform **transforms\web.mytransform.config** to file **config\web.config**.
## Applying a transform from a different directory against multiple files {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingatransformfromadifferentdirectoryagainstmultiplefiles}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─app.config
├─transforms
| └─connstrings.mytransform.config
└─web.config
```

Then the transform **transforms\connstrings.mytransform.config => \*.config** will:

- Apply the transform **transforms\connstrings.mytransform.config** to file **web.config**.
- Apply the transform **transforms\connstrings.mytransform.config** to file **app.config**.

## Applying a transform to multiple targets in a sibling directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingatransformtomultipletargetsinasiblingdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─config
| ├─app.config
| └─web.config
└─transforms
  └─connstrings.mytransform.config
```

Then the transform **transforms\connstrings.mytransform.config => config\\*.config** will:

- Apply the transform **transforms\connstrings.mytransform.config** to file **config\web.config**.
- Apply the transform **transforms\connstrings.mytransform.config** to file **config\app.config**.

## Applying multiple transforms to a single target where both are in the same directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingmultipletransformstoasingletargetwherebothareinthesamedirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─connstrings.mytransform.config
├─security.mytransform.config
└─web.config
```

Then the transform **\*.mytransform.config => web.config** will:

- Apply the transform **security.mytransform.config** to file **web.config**.
- Apply the transform **connstrings.mytransform.config** to file **web.config**.
## Wildcard transform with wildcard in the middle of the filename to a single target where both are in the same directory {#AdvancedConfigurationTransformsExamples-Wildcardtransformwithwildcardinthemiddleofthefilenametoasingletargetwherebothareinthesamedirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─MyApp.connstrings.octopus.config
├─MyApp.nlog_octopus.config
└─MyApp.WinSvc.exe.config
```

Then the transform **MyApp.\*.octopus.config => MyApp.WinSvc.exe.config** will:

- Apply the transform **MyApp.connstrings.octopus.config** to file **MyApp.WinSvc.exe.config**.

## Applying multiple transforms to a single target in a different directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingmultipletransformstoasingletargetinadifferentdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─config
| └─web.config
├─connstrings.mytransform.config
└─security.mytransform.config
```

Then the transform **\*.mytransform.config => config\web.config** will:

- Apply the transform **security.mytransform.config** to file **config\web.config**.
- Apply the transform **connstrings.mytransform.config** to file **config\web.config**.

## Applying multiple transforms against multiple targets {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingmultipletransformsagainstmultipletargets}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─app.config
├─app.mytransform.config
├─web.config
└─web.mytransform.config
```

Then the transform **\*.mytransform.config => \*.config** will:

- Apply the transform **web.mytransform.config** to file **web.config**.
- Apply the transform **app.mytransform.config** to file **app.config**.
## Applying multiple transforms against multiple targets in a different directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingmultipletransformsagainstmultipletargetsinadifferentdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─app.mytransform.config
├─config
| ├─App.config
| └─web.config
└─web.mytransform.config
```

Then the transform **\*.mytransform.config => config\\*.config** will:

- Apply the transform **web.mytransform.config** to file **config\web.config**.
- Apply the transform **app.mytransform.config** to file **config\app.config**.

## Applying multiple absolute path transforms to the same target file {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingmultipleabsolutepathtransformstothesametargetfile}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─subdir
| └─web.config
└─web.config
```

And the following files exist:

```powershell
c:\
└─transforms
  ├─connstrings.mytransform.config
  └─security.mytransform.config
```

Then the transform **c:\transforms\\*.mytransform.config** => **web.config** will:

- Apply the transform **c:\transforms\connstrings.mytransform.config** to file **web.config**.
- Apply the transform **c:\transforms\security.mytransform.config** to file **web.config**.
- Apply the transform **c:\transforms\connstrings.mytransform.config** to file **subdir\web.config**.
- Apply the transform **c:\transforms\security.mytransform.config** to file **subdir\web.config**.
## Using an absolute path wildcard transform and multiple targets {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Usinganabsolutepathwildcardtransformandmultipletargets}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─app.config
├─subdir
| ├─app.config
| └─web.config
└─web.config
```

And the following files exist:

```powershell
c:\
└─transforms
  ├─app.mytransform.config
  └─web.mytransform.config
```

Then the transform **c:\transforms\\*.mytransform.config => \*.config** will:

- Apply the transform **c:\transforms\web.mytransform.config** to file **web.config**.
- Apply the transform **c:\transforms\app.mytransform.config** to file **app.config**.
- Apply the transform **c:\transforms\web.mytransform.config** to file **subdir\web.config**.
- Apply the transform **c:\transforms\app.mytransform.config** to file **subdir\app.config**.

## Using an absolute path for multiple transforms against multiple relative files {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Usinganabsolutepathformultipletransformsagainstmultiplerelativefiles}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
└─config
  ├─app.config
  └─web.config
```

And the following files exist:

```powershell
c:\
└─transforms
  ├─app.mytransform.config
  └─web.mytransform.config
```

Then the transform **c:\transforms\\*.mytransform.config** => **config\\*.config** will:

- Apply the transform **c:\transforms\web.mytransform.config** to file **config\web.config**.
- Apply the transform **c:\transforms\app.mytransform.config** to file **config\app.config**.
## Applying multiple relative transforms against a specific target {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingamultiplerelativetransformsagainstaspecifictarget}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─transforms
| ├─connstrings.mytransform.config
| └─security.mytransform.config
└─web.config
```

Then the transform **transforms\\*.mytransform.config => web.config** will:

- Apply the transform **transforms\connstrings.mytransform.config** to file **web.config**.
- Apply the transform **transforms\security.mytransform.config** to file **web.config**.

## Applying multiple transforms in a different directory to a single target in a different directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingmultipletransformsinadifferentdirectorytoasingletargetinadifferentdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─config
| └─web.config
└─transforms
  ├─connstrings.mytransform.config
  └─security.mytransform.config
```

Then the transform **transforms\\*.mytransform.config => config\web.config** will:

- Apply the transform **transforms\connstrings.mytransform.config** to file **config\web.config**.
- Apply the transform **transforms\security.mytransform.config** to file **config\web.config**.

## Applying transforms from a different directory to multiple targets {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingtransformsfromadifferentdirectorytomultipletargets}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─app.config
├─transforms
| ├─app.mytransform.config
| └─web.mytransform.config
└─web.config
```

Then the transform **transforms\\*.mytransform.config => \*.config** will:

- Apply the transform **transforms\web.mytransform.config** to file **web.config**.
- Apply the transform **transforms\app.mytransform.config** to file **app.config**.
## Applying transforms from a different directory to targets in a different directory {#AdvancedConfigurationTransformsExamples-AdvancedConfigurationTransformsExamples-Applyingtransformsfromadifferentdirectorytotargetsinadifferentdirectory}

Given a package which has the structure:

```powershell
Acme.Core.1.0.0.nupkg
├─config
| ├─app.config
| └─web.config
└─transforms
  ├─app.mytransform.config
  └─web.mytransform.config
```

Then the transform **transforms\\*.mytransform.config => config\\*.config** will:

- Apply the transform **transforms\web.mytransform.config** to file **config\web.config**.
- Apply the transform **transforms\app.mytransform.config** to file **config\app.config**.

# Structured configuration variables

Source: https://octopus.com/docs/projects/steps/configuration-features/structured-configuration-variables-feature.md

:::div{.info} This Configuration Feature was previously called JSON Configuration Variables. In version **2020.4.0**, we added support for YAML, XML, and Properties configuration file replacements and renamed the feature Structured Configuration Variables. :::

With the **Structured Configuration Variables** feature you can define [variables](/docs/projects/variables) in Octopus for use in JSON, YAML, XML, and Properties configuration files of your applications. This lets you define different values based on the scope of the deployment. Settings are located using a structure-matching syntax, so you can update values nested inside structures such as JSON objects and arrays, YAML mappings and sequences, and XML elements and attributes. XPath is used for XML files, and similar expressions are used for the other formats.

## Configuring the structured configuration variables feature

1. To enable Structured Configuration Variables on a [step](/docs/projects/steps) that supports the feature, click the **CONFIGURE FEATURES** link, select **Structured Configuration Variables**, then click **OK**.
2. In the **Structured Configuration Variables** section of the step, specify the paths to your structured configuration files, relative to the working directory. For instance:

```
approot\packages\ASPNET.Core.Sample\1.0.0\root\appSettings.json
```

or

```
**/application.yaml
```

:::div{.info} If you are using a **Run a script** step, packages are extracted to a subdirectory with the name of the package reference. Please refer to [package files](/docs/deployments/custom-scripts/run-a-script-step/#referencing-packages-package-files) to learn more. :::

Octopus will find the target files, match structures described by the names of Octopus variables, and replace their contents with the values of the variables.

### Selecting target files {#StructuredConfigurationVariablesFeature-SelectingTargetFiles}

Specify the files that should have variable replacement applied to them. Multiple files can be supplied by separating them with a new line. You can supply full paths to files, use wildcards to find multiple files in a directory, or use wildcards for a directory to find all files at that level or deeper:

**Specific file path**

```
ExampleProject\appSettings.json
```

**Match any .yaml files in the root directory**

```
*.yaml
```

**Match any .json files in the specified directory**

```
Config\*.json
```

**Match any .xml files in the specified directory or deeper**

```
Application/**/*.xml
```

The **Target File** field also supports [Variable Substitution Syntax](/docs/projects/variables/variable-substitutions), to allow things like referencing environment-specific files, or conditionally including them based on scoped variables. [Extended template syntax](/docs/projects/variables/variable-substitutions/#extended-syntax) allows conditionals and loops to be used.

### How the file type for target files is determined

**Structured Configuration Variables** allows for replacement in JSON, YAML, XML, and Properties files.
To determine what file type is being used, Octopus will first try to parse the file as JSON; if it succeeds, it will treat the file as JSON. This ensures backwards compatibility, because this feature previously only supported JSON files. If the file doesn't parse as JSON, Octopus refers to its file extension. If it is `yaml` or `yml`, the file will be parsed as YAML; if the extension is `xml`, the file will be parsed as XML; and finally, if the extension is `properties`, the file will be parsed in the Java Properties format. If the file extension is not recognized (for example, a file with a `config` file extension), Octopus will try to parse the file using each of the supported formats until a matching format is found.

### Variable replacement {#variable-replacement}

Octopus uses variable names to identify the structures that should be replaced within the target files. If a structure within a target file has a hierarchical location that matches a variable name, its content will be replaced with the variable's value. The hierarchical location is identified differently depending on the type of target file:

- In JSON and YAML files, each location is identified by the sequence of keys leading to it from the root level, separated by `:`.
- In XML files, structures can be identified by setting Octopus variable names to XPath expressions.
- In Java Properties files, keys are matched against Octopus variable names.

An example for each supported file type can be found in the following table:

| Format | Input file | Octopus variable name | Octopus variable value | Output file |
| ------ | ---------- | --------------------- | ---------------------- | ----------- |
| JSON | `{"app": {"port": 80}}` | `app:port` | `4444` | `{"app": {"port": 4444}}` |
| YAML | `app:`<br/>`  port: 80` | `app:port` | `4444` | `app:`<br/>`  port: 4444` |
| XML | `<app><port>80</port></app>` | `/app/port` | `4444` | `<app><port>4444</port></app>` |
| Java Properties | `app_port: 80` | `app_port` | `4444` | `app_port: 4444` |

#### Variable names starting with the word Octopus

When targeting JSON and YAML files, take care when naming variables to be used with the Structured Configuration Variables feature. Specifically, avoid the use of the word `Octopus` at the start of the name where possible. This is because Octopus provides a number of [system variables](/docs/projects/variables/system-variables) that start with the word `Octopus` that aren't intended for use with this feature.

:::div{.warning} Any variables that start with `Octopus` that **aren't** followed with a `:` are ignored when performing variable replacement on JSON and YAML files. :::

Consider the following JSON input file:

```json
{
  "OctopusServer": {
    "WebPort": "80"
  }
}
```

If you had a variable named `OctopusServer:WebPort` with value `8080`, the value would *not be replaced*, as the variable name starts with the word `Octopus`. The easiest way to work around this is to change the name of your variable to start with something other than the word `Octopus`.

#### Variable casing

Octopus matches variable names to the structure in target files in a **case-insensitive way**. For example, given the following JSON input file:

```json
{
  "app": {
    "port": "80"
  }
}
```

If you had a variable named `APP:PORT` with value `8080`, the value would be replaced despite the name of the variable being in upper case. The output file would become:

```json
{
  "app": {
    "port": "8080"
  }
}
```

For more information, refer to our [variable casing](/docs/projects/variables/#variable-casing) documentation.
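The matching rules described above (colon-delimited paths, case-insensitive key comparison, and the reserved `Octopus` prefix) can be sketched in a few lines of Python. This is an illustrative sketch of the documented behavior, not Octopus's implementation; the function names are invented.

```python
# Illustrative sketch of variable replacement in a parsed JSON/YAML
# document. Not Octopus's actual code.
import json

def find_key(mapping, name):
    """Case-insensitive key lookup; returns the real key or None."""
    if not isinstance(mapping, dict):
        return None
    return next((k for k in mapping if k.lower() == name.lower()), None)

def replace_variable(doc, name, value):
    """Apply one Octopus-style variable to a parsed document in place.
    Returns True if a replacement was made."""
    # Names starting with "Octopus" (other than the bare "Octopus:" prefix)
    # are reserved for system variables and ignored, as described above.
    if name.startswith("Octopus") and not name.startswith("Octopus:"):
        return False
    *parents, leaf = name.split(":")
    node = doc
    for segment in parents:
        key = find_key(node, segment)
        if key is None:
            return False  # path doesn't match the document; leave it alone
        node = node[key]
    key = find_key(node, leaf)
    if key is None:
        return False
    node[key] = value
    return True

doc = json.loads('{"app": {"port": "80"}}')
print(replace_variable(doc, "APP:PORT", "8080"))            # case-insensitive match succeeds
print(replace_variable(doc, "OctopusServer:WebPort", "1"))  # reserved prefix: skipped
print(json.dumps(doc))
```

Keys that don't exist in the document are simply left alone, which mirrors the behavior shown in the examples: only structures whose hierarchical location matches a variable name are touched.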
## JSON and YAML

### Simple variables

Given this example of a target config file:

```json
{
  "weatherApiUrl": "dev.weather.com",
  "weatherApiKey": "DEV1234567",
  "tempImageFolder": "C:\\temp\\img",
  "port": 8080,
  "debug": true
}
```

If you define [variables](/docs/projects/variables) in your Octopus project called `weatherApiUrl`, `weatherApiKey`, `port`, and `debug` with the values `test.weather.com`, `TEST7654321`, `80`, and `false`, the target config file is updated to become:

```json
{
  "weatherApiUrl": "test.weather.com",
  "weatherApiKey": "TEST7654321",
  "tempImageFolder": "C:\\temp\\img",
  "port": 80,
  "debug": false
}
```

Note that the `tempImageFolder` setting remains untouched, and the types of `port` and `debug` have not been changed. Octopus will attempt to keep the original type if the new value matches the type of the old value.

### Hierarchical variables

It is common (and encouraged) to use hierarchical variables in structured configuration files. This is supported in Octopus variables by using a nested path syntax delimited by *colon* characters. For example, to update the value of `weatherApi.url` and `weatherApi.key` in the target config file, you would configure the Octopus variables `weatherApi:url` and `weatherApi:key`.

**Hierarchical JSON**

```json
{
  "weatherApi": {
    "url": "dev.weather.com",
    "key": "DEV1234567"
  }
}
```

**Hierarchical YAML**

```yaml
weatherApi:
  url: dev.weather.com
  key: DEV1234567
```

You can also replace an entire object.
For the example above, you could set Octopus variable `weatherApi` to a value of `{"url":"test.weather.com","key":"TEST7654321"}`, which will result in this:

**Replaced Hierarchical JSON**

```json
{
  "weatherApi": {
    "url": "test.weather.com",
    "key": "TEST7654321"
  }
}
```

**Replaced Hierarchical YAML**

```yaml
weatherApi:
  url: test.weather.com
  key: TEST7654321
```

### JSON Array or YAML Sequence variables

Octopus can replace a value in a JSON array or a YAML sequence by using the zero-based index of the array or sequence in the variable name. If we take the following examples:

**Example Hierarchical JSON**

```json
{
  "foo": {
    "bar": [
      "item1",
      "item2"
    ]
  }
}
```

**Example Hierarchical YAML**

```yaml
foo:
  bar:
    - item1
    - item2
```

Variables can be set for `foo:bar:1` with a value `qux`, which will update the value of the second element in the array or sequence to be `qux`, like so:

**Replaced Array Index Hierarchical JSON**

```json
{
  "foo": {
    "bar": [
      "item1",
      "qux"
    ]
  }
}
```

**Replaced Sequence Index Hierarchical YAML**

```yaml
foo:
  bar:
    - item1
    - qux
```

It's possible to replace an entire array or sequence too. In the previous example, if the Octopus variable `foo:bar` was set to `["baz","qux"]`, it would create outputs like:

**Replaced Array Hierarchical JSON**

```json
{
  "foo": {
    "bar": [
      "baz",
      "qux"
    ]
  }
}
```

**Replaced Sequence Hierarchical YAML**

```yaml
foo:
  bar:
    - baz
    - qux
```

The properties of objects in arrays can be replaced. In the example below, defining an Octopus variable `foo:bar:0:url` with the value of `test.weather.com` replaces the `url` property of the first object in the array:

**Replaced Object Property in Array Hierarchical JSON**

```json
{
  "foo": {
    "bar": [
      {
        "url": "test.weather.com",
        "key": "DEV1234567"
      }
    ]
  }
}
```

**Replaced Map Property in Sequence Hierarchical YAML**

```yaml
foo:
  bar:
    - url: test.weather.com
      key: DEV1234567
```

## XML

For XML files, the values to replace are located using the standard XPath syntax.
Octopus supports both XPath 1 and XPath 2. Octopus variables with names that are valid XPath expressions are matched against the target XML files. For example, if you have a variable called `//environment` with the value `production`, it will replace the contents of all `<environment>` elements with `production`.

### Replacing content

When replacing content, the replacement can only be as rich as what was originally there. If you select an element that contains only text, the replacement will be treated as text, and structure-defining characters will be encoded as entity references. However, if you select an element that contains further element structures, the replacement is treated as an XML fragment, and structure-defining characters will be added as-is. This means that if you replace a password or connection string, any characters like `&`, `<` and `>` will be safely encoded within the string. For example, assume the target file contains the following:

```xml
<connectionString>Server=.;Database=db;User Id=admin;Password=password;</connectionString>
```

If you define a variable called `//connectionString` with the value `Server=.;Database=db;User Id=admin&boss;Password=Pass<word>1;`, the structure will be updated as follows:

```xml
<connectionString>Server=.;Database=db;User Id=admin&amp;boss;Password=Pass&lt;word&gt;1;</connectionString>
```

:::div{.info} This behavior of escaping special characters is [a requirement of the XML specification](https://www.w3.org/TR/2008/REC-xml-20081126/#syntax) (see section 2.4 for specifics), but any library or framework (such as IIS) reading the resulting XML document will automatically handle decoding those special characters when the value is retrieved. :::

It's worth noting that an empty element, such as `<rules />`, contains no element structures and will only be filled with text.
For example, assume the target file contains the following:

**Empty XML Element**

```xml
<configuration>
  <logging>
    <rules />
  </logging>
</configuration>
```

If the Octopus variable `/configuration/logging/rules` is specified with the value `<rule level='trace' />`, the value will be encoded as text, becoming:

**Empty XML Element Filled**

```xml
<configuration>
  <logging>
    <rules>&lt;rule level='trace' /&gt;</rules>
  </logging>
</configuration>
```

However, if the variable is named `/configuration/logging` to match the parent element, with the value `<rules><rule level='trace' /></rules>`, the value will be treated as an XML fragment because it is replacing an element structure (the `<rules />` element). This becomes:

**Empty XML Element Parent Replaced**

```xml
<configuration>
  <logging>
    <rules><rule level='trace' /></rules>
  </logging>
</configuration>
```

### Replacing mixed content elements

Sometimes an element will contain a mixture of text and element structures. An example of this is:

```xml
<document>This is <text>mixed</text> content</document>
```

Because it contains an element structure, a replacement will be treated as an XML fragment. A variable named `/document` with the value of `a <text>fragment</text>` would result in:

```xml
<document>a <text>fragment</text></document>
```

Another option is to match and replace individual text nodes. A variable named `/document/child::text()[1]` with the value `just ` would result in:

```xml
<document>just <text>mixed</text> content</document>
```

### Replacing attributes

Matching and replacing attribute values is supported with XPath. For example, assume the target file contains the following:

```xml
<configuration>
  <email role="admin">admin@example.com</email>
  <email role="user">user@example.com</email>
</configuration>
```

With the Octopus variable `/configuration/email/@role` with the value `developer`, the output will look like:

```xml
<configuration>
  <email role="developer">admin@example.com</email>
  <email role="developer">user@example.com</email>
</configuration>
```

Alternatively, to replace an element *based on its attribute*, you can apply the condition as a predicate. With a variable named `/configuration/email[@role='admin']` with the value `chief@example.org`, the output will look like:

```xml
<configuration>
  <email role="admin">chief@example.org</email>
  <email role="user">user@example.com</email>
</configuration>
```

Similar to the examples above, you can also replace other attribute values.
With a variable named `/configuration/email[@role='admin']/@address` with the value `chief@example.org`, the `address` attribute on the matching element will be updated:

```xml
<email role="admin" address="chief@example.org" />
```

### XML CDATA sections

CDATA sections can be replaced just like any other node by selecting them with the XPath. When the content of the CDATA section is replaced, the CDATA presentation is maintained in the output. In the following example, `development` in the CDATA section can be replaced with `prod<1>` by having a variable `/document/environment/text()` with the value `prod<1>`:

**XML Structure with CDATA**

```xml
<document>
  <environment><![CDATA[development]]></environment>
</document>
```

**XML Structure with CDATA Replaced**

```xml
<document>
  <environment><![CDATA[prod<1>]]></environment>
</document>
```

### Processing instructions

Processing instructions can be replaced using the XPath processing instruction selector, like so: `/document/processing-instruction('xml-stylesheet')`. When replacing a processing instruction, it's not possible to replace the individual attributes; the whole processing instruction gets replaced with the supplied value. Take the following example:

**XML Structure Processing Instruction**

```xml
<document>
  <?xml-stylesheet type="text/xsl" href="style.xsl"?>
</document>
```

When the Octopus variable `/document/processing-instruction('xml-stylesheet')` is set to `new value`, the output will be the following:

**XML Structure Processing Instruction Replaced**

```xml
<document>
  <?xml-stylesheet new value?>
</document>
```

### Namespaces

When parsing the XML document, Octopus collects all namespace declarations for use in XPath expressions, so you can use any of the declared prefixes. One limitation is that if the same prefix is declared more than once in a document, only the first will be available in XPath expressions. Because this is a potentially surprising situation, a warning will be logged, similar to the following:

```
The namespace 'http://octopus.com' could not be mapped to the 'octopus' prefix, as another namespace 'http://octopus.com/xml' is already mapped to that prefix. XPath selectors using this prefix may not return the expected nodes. You can avoid this by ensuring all namespaces in your document have unique prefixes.
```

**Root elements with namespaces**

If you have XML files that have a namespace on the root element, you might find your XPath expression doesn't match the root node. XPath provides different ways to select an element. One option to try is using a wildcard namespace in your XPath expression, like `/*:rootelement/*:childelement`. Given the following XML:

```xml
<server xmlns="http://example.com/server">
  <properties>
    <property name="host.name" value="localhost" />
  </properties>
</server>
```

If you wanted to replace the value `localhost`, you could use the XPath expression of: `/*:server/*:properties/*:property[@name='host.name']/@value`

## Java properties

Given this example of a target properties file:

```
weatherApiUrl = dev.weather.com
weatherApiKey = DEV1234567
tempImageFolder = C:\\temp\\img
logsFolder = C:\\logs
port = 8080
debug = true
```

If you define [variables](/docs/projects/variables) in your Octopus project called `weatherApiUrl`, `weatherApiKey`, `tempImageFolder`, `port`, and `debug` with the values `test.weather.com`, `TEST7654321`, `D:\temp\img`, `80`, and `false`, the target properties file is updated to become:

```
weatherApiUrl = test.weather.com
weatherApiKey = TEST7654321
tempImageFolder = D:\\temp\\img
logsFolder = C:\\logs
port = 80
debug = false
```

Note that the `logsFolder` setting remains untouched, as there was no variable defined to override its value, and that `tempImageFolder` has been encoded with the double `\`. Octopus will encode the variable in the correct encoding for the properties file format. Unlike JSON, YAML, and XML, it's not possible to do hierarchical replacement in a properties file, as properties files are simple key-value files.

# Sensitive variables

Source: https://octopus.com/docs/projects/variables/sensitive-variables.md

As you work with [variables](/docs/projects/variables) in Octopus, there will be times when you work with applications that require configuration values that are considered sensitive information. These should be kept secret, but used as clear-text during deployment.
That could be a password or an API key to an external resource. Octopus provides support for this scenario with **Sensitive variables**. Sensitive variables can be sourced from either:

- A Secret Manager/Key Vault, using one of our Community step templates.
- Octopus itself, with values stored securely using **AES-256 encryption**.

## Values from a Secret Manager/Key Vault {#values-from-key-vaults}

Storing sensitive values in Octopus solves many problems, but it's [not a two-way key vault](#how-octopus-handles-sensitive-variables). For this, there are a number of Secret Manager and Key Vault tools available. They also offer additional functionality such as automatic secret rotation and versioning. Octopus supports the retrieval of sensitive values from a number of Secret Manager/Key Vaults through the use of [Community step templates](/docs/projects/community-step-templates) that extend the functionality of Octopus to integrate with them.

:::figure ![Azure Key Vault Retrieve Secrets step template](/docs/img/projects/variables/images/azure-keyvault-retrieve-secrets-step-in-process.png) :::

Each of the community step templates works by retrieving secrets from the Secret Manager/Key Vault and creating [sensitive output variables](/docs/projects/variables/output-variables/#sensitive-output-variables) for use in your executing deployments and runbooks.
Octopus has the following community step templates for integrating with Secret Manager/Key Vault tools:

- [AWS Secret Manager](https://octopus.com/blog/using-aws-secrets-manager-with-octopus)
- [Azure Key Vault](https://octopus.com/blog/using-azure-key-vault-with-octopus)
- [CyberArk Conjur](https://library.octopus.com/step-templates/522c7010-7189-4b2e-a3c8-36cb1759422a/actiontemplate-cyberark-conjur-retrieve-secrets)
- [GCP Secret Manager](https://octopus.com/blog/using-google-cloud-secret-manager-with-octopus)
- [HashiCorp Vault](https://octopus.com/blog/using-hashicorp-vault-with-octopus-deploy)

:::div{.success} View working examples of all of our Secrets Management community step templates in our [samples instance](https://samples.octopus.app/app#/Spaces-822) of Octopus. You can sign in as `Guest` to view them. :::

**Note:** If you choose to use one of the community step templates, it's important to consider who has permission to edit a project deployment or runbook process, and to manage step templates, to prevent unauthorized access to sensitive values stored in your Secret Manager/Key Vault.

## Sensitive variables stored in Octopus {#sensitive-variables-in-octopus}

Variables, such as passwords or API keys, can be marked as **sensitive**. Just like non-sensitive variables, they can [reference other variables](/docs/projects/variables/#use-variables-in-step-definitions), but be careful with any part of your sensitive variable that could unintentionally be interpreted as an attempted substitution. See also, other [common mistakes](#avoiding-common-mistakes).

### Configuring sensitive variables {#configure-sensitive-variables}

To make a variable a **sensitive variable**, either select **Change Type** when entering the value and select **Sensitive**, or enter the variable editor when you are creating or editing the variable.
If using the variable editor, on the variable value, click **Open editor**:

:::figure ![Open Variable Editor](/docs/img/projects/variables/images/open-variable-editor.png) :::

For variable type, select **Sensitive**.

:::figure ![Variable editor](/docs/img/projects/variables/images/variable-editor.png) :::

### How Octopus handles your sensitive variables {#how-octopus-handles-sensitive-variables}

:::div{.hint} Learn more about [security and encryption](/docs/security/data-encryption) in Octopus Deploy. :::

When dealing with sensitive variables, Octopus encrypts these values using:

- **AES-256** encryption when they are stored in the Octopus database in versions 2024.4 and newer.
- **AES-128** encryption when they are stored in the Octopus database in versions prior to 2024.4.
- **AES-128** encryption any time they are in transmission, or when they are stored on a deployment target as part of a deployment.

You can use these sensitive values in your deployment process just like normal [variables](/docs/projects/variables), with two notable exceptions:

- Once the variable is saved, Octopus will **never allow you to retrieve the value** via the [REST API](/docs/octopus-rest-api) or the Octopus Web Portal; and
- Whenever possible, Octopus will **mask these sensitive values in logs**.

### Choosing which variables should be sensitive

Any value you want can be treated as a secret by Octopus. It is up to you to choose the most appropriate balance of secrecy and usability. As a rule of thumb, any individual value which should be encrypted, or masked in logs, should be made sensitive in Octopus. The most straightforward example is a password or key. Make the password or key sensitive and it will be encrypted into the database and masked in the Octopus task logs. Another common example is building a *composite* value using the [variable substitution syntax](/docs/projects/variables/variable-substitutions), like a database connection string.
Imagine a variable called `DB.ConnectionString` with the value: `Server=#{DB.Server};Database=#{DB.Database};User=#{DB.Username};Password=#{DB.Password};`

In this case you should at least make the `DB.Password` variable sensitive so it will be encrypted in the database and masked from any Octopus task log messages like this: `Server=db01.my-company.com;Database=my-database;User=my-user;Password=*****`. You could also make `DB.Username` or any of the other components of this template sensitive.

### Avoiding common mistakes {#avoiding-common-mistakes}

Here are some common pitfalls to avoid:

- **Avoid logging your sensitive values**: you won't really get any benefit from logging your sensitive variables since they will be masked by Octopus. The masking is provided in case a downstream system logs the sensitive value, inadvertently logging it to the Octopus deployment logs.
- **Avoid short values**: only sensitive variables with length **greater than 3** characters will be masked. This is done to prevent false positives causing excessive obfuscation of the logs. Consider 8-30 characters depending on the requirements of your deployment.
- **Avoid common language**: see the example below of "broke"; use a password generator with high entropy [like this one](https://www.passwordsgenerators.net/).
- **Avoid sequences that are interpreted by your scripting language of choice**: for example, certain escape sequences like `$^` will be misinterpreted by PowerShell, potentially logging out your sensitive variable in clear-text.
- **Sensitivity is not transitive/infectious**: for example, imagine you have a sensitive variable called `DB.Password` and another variable called `DB.ConnectionString` with the value `Server=#{DB.Server};...;Password=#{DB.Password}`; the `DB.ConnectionString` does not become sensitive just because `DB.Password` is sensitive.
However, if you happen to write the database connection string to the task log, the password component will be masked like this: `Server=db01.my-company.com;...;Password=*****`, which is probably the desired outcome.
- **Avoid mixing binding expressions and sensitivity**: for example, if you have a variable called `Service.Credential` with the value `Password=#{Service.Password}` and make that variable sensitive, Octopus treats the **literal** value `Password=#{Service.Password}` as sensitive instead of treating the **evaluated** value as sensitive, which might be different to what you would expect. Instead, you should make the variable called `Service.Password` sensitive so the password itself will be encrypted in the database, and subsequently masked in any logs like this: `Password=*****`.
- **Avoid treating entire files as sensitive**: imagine you're consuming a YAML file from a variable, and that YAML file contains a secret. Rather than treating the whole YAML file as sensitive, you should create two variables: one sensitive variable containing just the secret, and one non-sensitive variable for the YAML file which uses [variable substitution](/docs/projects/variables/variable-substitutions) to substitute in the sensitive variable. This gives Octopus a much tighter scope when looking for sensitive variables to mask.
- **Avoid sequences that are part of the variable substitution syntax**: for example, the sequence `##{` will be replaced by `#{` by logic that's part of [referencing variables](/docs/projects/variables/#use-variables-in-step-definitions), so you would need to escape it by modifying it to be `###{`, which will result in `##{`; see also [variable substitution syntax](/docs/projects/variables/variable-substitutions).
- **Octopus is not a 2-way key vault**: use a [Secret Manager/Key Vault](#values-from-key-vaults) instead.

## Logging {#logging}

:::div{.warning}
Avoid logging sensitive values!
While Octopus will attempt to mask sensitive values, it is better if there is no value to mask in the first place!
:::

Octopus/Tentacle will do its best to prevent sensitive values from inadvertently appearing in any logs. For example, if a custom PowerShell script accidentally did this:

```powershell
Write-Output "Hello, the password is $Password"
```

Octopus would mask the value from the deployment log, leaving:

```
Hello, the password is *****
```

Note that this method isn't 100% foolproof. Here are a couple of scenarios that you should be extra careful about if logging sensitive variables:

### Common language in secrets

If your top secret password is "broke", and someone happened to deploy with a PowerShell script with:

```powershell
Write-Output "Or watch the things you gave your life to, broken"
```

Then the password might be given away when Octopus prints:

```powershell
Or watch the things you gave your life to, *******en
```

The obvious solution is: don't use passwords that are likely to occur in normal logging/language, and avoid writing the values of your secure variables to logs anyway.

### `echo` on Unix-based systems

It's very easy to [unintentionally modify a variable when using `echo`](https://stackoverflow.com/q/29378566/16866455), particularly if the variable contains new-lines or other escape characters. In particular, you should [always use double-quotes](https://stackoverflow.com/a/29378567/16866455) around what you `echo`, to prevent unintended processing of variable contents. For example, you should prefer this:

```bash
echo "$(get_octopusvariable 'SecretVariable')"
```

over this:

```bash
echo $(get_octopusvariable 'SecretVariable')
```

The second approach could trigger evaluation or stripping of special characters within the variable, and result in a log message sufficiently different to the sensitive variable's value that we are unable to match and mask it. Of course, the best protection is not to `echo` potentially sensitive variables at all.
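The masking behavior discussed in this section can be sketched as a simple search-and-replace over log lines. This is an invented illustration of the documented rules (known sensitive values longer than 3 characters are replaced with asterisks), not the actual Octopus/Tentacle implementation:

```python
# Illustrative sketch of log masking, not Octopus's actual code.
MASK = "*****"

def mask_sensitive(line, sensitive_values):
    """Replace any known sensitive value appearing in a log line.
    Values of 3 characters or fewer are never masked, to avoid
    excessive obfuscation of the logs from false positives."""
    for value in sensitive_values:
        if len(value) > 3:
            line = line.replace(value, MASK)
    return line

secrets = ["s3cret-Passw0rd", "ok"]  # "ok" is too short to be masked

print(mask_sensitive("Hello, the password is s3cret-Passw0rd", secrets))
print(mask_sensitive("Everything is ok", secrets))
```

This also makes the pitfalls above easy to see: a downstream system that alters the value before logging it (as unquoted `echo` can) defeats the literal match, and a secret that happens to be an English word will be masked wherever that word appears.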
## Learn more

- [Variable blog posts](https://octopus.com/blog/tag/variables/1)

# Unsupported Configuration as Code Scenarios

Source: https://octopus.com/docs/projects/version-control/unsupported-config-as-code-scenarios.md

The Configuration as Code feature is designed to give you the benefits of source control, branching, reverting, and pull requests while being able to use your tool of choice to manage your processes and non-sensitive variables. While it has many benefits, there are some unsuitable use cases and scenarios. This document will describe each one, as well as provide alternatives.

## Core design decision

The core design decision is that each project in each space has a unique folder in a git repository. A git repository can store several projects across several spaces, or store a single project in a single space. But each project must have a unique folder in a git repository because of all the scaffolding data referenced by a project. That scaffolding data includes (but is not limited to):

- Environments
- Tenant Tags
- Worker Pools
- Feeds
- Step Templates
- Channels / Lifecycles

That data is not stored in source control because it is shared across multiple projects.

:::div{.warning} An error will occur when Octopus Deploy attempts to load a process from source control with one or more of those items missing. You'll be unable to create releases or access runbooks until those errors are resolved. :::

## Syncing multiple instances

The Configuration as Code feature is not designed to allow two or more projects on different instances to point to the same folder. We've seen our users attempt to use Configuration as Code to keep the deployment processes in sync across multiple instances. That scenario is unsupported. While it may work initially, it will be harder and harder to manage over time. You will need to keep *all* the scaffolding data in sync across multiple instances. That is [easier said than done](/docs/administration/sync-instances).
Step templates will be the most difficult: it's not enough to have the same step template on all instances, the versions have to match as well. Otherwise, you'll have to worry about settings such as parameters, scripts, package versions, feeds, and more.

:::div{.warning} Configuration as Code currently supports storing the deployment process, runbook processes, and non-sensitive variables for a project in the Git repo. It does not store sensitive variables. :::

Typically, having two instances results from splitting an Octopus Deploy instance by environment (one instance has Dev/Test, the other has Staging/Prod), by tenant (one instance has test tenants, the other has customers), or both. Pointing multiple instances at the same folder will only work if they *are all exactly the same forever*.

### Octopus Terraform Provider for syncing

Use the [Octopus Terraform Provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest/docs) to keep multiple instances in sync. Use Terraform's [variable functionality](https://www.terraform.io/language/values/variables) to manage the differences between the instances. For example, have a variable for environment scoping: one instance populates the environment list with **Test** while the other populates it with **Staging** and **Production**.

:::div{.warning} You will still need a process to keep step templates in sync. :::

The downside to this approach is you'll be unable to use the Octopus Deploy UI to manage your deployment processes. In addition, you'll need to convert your existing deployment process into Terraform manually. The files generated by Configuration as Code have a similar syntax to the Terraform provider, but it is not a 1:1 match.

### Separate folders for each instance

Another alternative is for each instance to point to a unique folder in the same GitHub repo. For example, if you had a Developer instance and a Production instance:
- .octopus/project-a/dev-instance - .octopus/project-a/production-instance Use a file diff tool, such as Beyond Compare, Meld, WinMerge, or KDiff3, to manually copy specific changes between the two directories. Any instance-specific configuration, such as environment scoping, worker pools, or feeds, would be excluded. :::div{.warning} You will still need a process to keep step templates in sync. ::: The downside to this approach is that it is a manual process and prone to error. ## Project templating The Configuration as Code feature is not designed to allow two or more projects in the same space to point to the same folder. We've seen our users attempt to use Configuration as Code for project templating. This scenario is unsupported. It may work for the first couple of projects, as all the necessary scaffolding data is there because the projects are in the same space. However, projects will have subtle differences. Those differences include: - Packages to deploy - Target roles - Worker pools - Tenant Tags - Package Feeds You will find projects are constantly overwriting each other. One option is to have a single git repo for all your projects, with each project saved to a unique folder in the repository. You could then use a file comparison tool to copy changes between projects. That scenario will not scale well. Branch naming conventions would need to be strictly enforced if you configured 50 projects to save to the same git repo. The number of possible branches would grow with each added project, and the chances of a person selecting the wrong branch would increase accordingly. And manually copying changes via a file comparison tool is error-prone and time-consuming. Having a branch per project will partially solve the problem of the subtle differences, but it will be very time-consuming to sync any "main" branch changes with the project branches. You will need to manually sync all the branches or create and maintain a process to handle the syncing. 
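The file-comparison workflow described above can be sketched from the command line. The folder names and file contents below are hypothetical, standing in for two projects' Configuration as Code folders:

```shell
# Hypothetical project folders in the same repo; substitute your own paths.
mkdir -p .octopus/project-a .octopus/project-b
printf 'step: Deploy\npackage: OctoFX\n' > .octopus/project-a/deployment_process.ocl
printf 'step: Deploy\npackage: RandomQuotes\n' > .octopus/project-b/deployment_process.ocl

# diff -r walks both folders recursively; every differing line is a change
# you would have to review and copy by hand (diff exits non-zero when the
# folders differ, hence the `|| true`).
diff -r .octopus/project-a .octopus/project-b || true
```

Graphical tools like Meld or Beyond Compare show the same differences side by side, but the manual review-and-copy step remains, which is why this approach is error-prone.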
:::div{.warning} Configuration as Code is an all-or-nothing feature. You'll be unable to, say, manage "some of my deployment process" using Configuration as Code. It is the entire deployment process or nothing. ::: ### Octopus Terraform Provider for templating Use the [Octopus Terraform Provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest/docs) to create a deployment process template. Use Terraform's [variable functionality](https://www.terraform.io/language/values/variables) to manage the differences between projects. For example, have a variable for target roles; one project has **OctoFX-WebApi** while another uses **RandomQuotes-WebApi**. One advantage to this approach is the flexibility to decide what resources are managed by the Terraform Provider and what resources are managed by users in the Octopus UI. The downside to this approach is that you'll be unable to use the Octopus Deploy UI to manage your deployment processes. In addition, you'll need to convert your existing deployment process into Terraform manually. The files generated by Configuration as Code have a similar syntax to the Terraform provider, but they are not a 1:1 match. ## Submodules Submodules are a convenient way to reference one repository from within a subdirectory of another repository. Octopus currently does not support the use of submodules for storing Configuration as Code files. This means that your configuration files must all be stored directly in the connected repository. # Issue trackers Source: https://octopus.com/docs/releases/issue-tracking.md Octopus **2019.4** introduced integration with [Jira](/docs/releases/issue-tracking/jira/) and [GitHub](/docs/releases/issue-tracking/github/) issue tracking, and **2019.7.6** added support for [Azure DevOps](/docs/releases/issue-tracking/azure-devops) work item tracking. 
This integration makes it possible to add links to your GitHub issues, Jira issues, and Azure DevOps work items from releases and deployments in Octopus. For an overview of the integration, the features these integrations enable, and configuration instructions, see: - [Jira integration](/docs/releases/issue-tracking/jira) - [GitHub integration](/docs/releases/issue-tracking/github) - [Azure DevOps integration](/docs/releases/issue-tracking/azure-devops) ## Learn more - [Jira blog posts](https://octopus.com/blog/tag/jira/1) # Runbooks examples Source: https://octopus.com/docs/runbooks/runbook-examples.md When software is in production and customers rely on it, operations teams quickly find themselves needing to support that software with procedures to ensure things stay running smoothly. It could be: - **Routine operations** tasks that happen infrequently. For example: - Patching a server. - Stopping a website. - Renewing SSL certificates. - **Emergency operations** tasks that you have to respond to quickly following an alert. For example: - Failing over to a disaster recovery site. - Restarting a server. - **Infrastructure provisioning** tasks that are used with an [elastic or transient](/docs/deployments/patterns/elastic-and-transient-environments) environment. For example: - Deploying an AWS CloudFormation template. - Deploying an Azure ARM template. These procedures can all be automated with [runbooks](/docs/runbooks). # Azure Source: https://octopus.com/docs/runbooks/runbook-examples/azure.md Octopus is great for helping you perform repeatable and controlled deployments of your applications into [Azure](https://azure.microsoft.com/), but you can also use it to manage your infrastructure in Azure. Runbooks can be used to help automate this without having to create new deployment releases. Typical routines could be: - Creating a new [Resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups). 
- Spinning up a new Virtual Machine. - Managing firewall rules. - Tearing down a Resource group. Out-of-the-box, Octopus provides built-in steps to help manage your infrastructure in Azure: - [Resource Group Templates](/docs/runbooks/runbook-examples/azure/resource-groups). - [Executing PowerShell scripts using the Azure cmdlets](/docs/deployments/custom-scripts/azure-powershell-scripts/). Follow our guide on [running Azure PowerShell scripts](/docs/deployments/azure/running-azure-powershell). ## Learn more - Generate an Octopus guide for [Azure and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?destination=Azure%20websites). - [Azure blog posts](https://octopus.com/blog/tag/azure/1). - [Azure deployment examples](/docs/deployments/azure). # Create PaaS MySQL database server Source: https://octopus.com/docs/runbooks/runbook-examples/databases/create-mysql-paas-server.md Cloud-based applications often need databases to store their data. Cloud providers such as Azure, AWS, and Google Cloud Platform (GCP) all offer database Platform as a Service (PaaS) products, which allow you to create a database server without having to create the underlying infrastructure that goes along with it. These servers are fully managed by the cloud platform provider, allowing you to focus on delivering software instead of worrying about maintenance. This can easily be automated using a runbook. In this example, we'll create a MySQL database server on [Google Cloud](https://cloud.google.com/gcp). :::div{.hint} **gcloud CLI and authorization** Most of the commands used for interacting with Google Cloud in this section make use of the [Google Cloud CLI](https://cloud.google.com/sdk/gcloud). To use the **gcloud** CLI you usually need to authorize it first. For further information on gcloud authorization, please refer to the [documentation](https://cloud.google.com/sdk/docs/authorizing). ::: ## Create the runbook 1. 
To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**. 1. Give the runbook a name and click **SAVE**. 1. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**. 1. Add a **Run a script** step to check to see if the server already exists: ```powershell $zone = $OctopusParameters["GCP.Zone"] $projectName = $OctopusParameters["Project.GCP.ProjectName"] $instanceName = $OctopusParameters["Project.GCP.MySQL.InstanceName"] Write-Host "Getting list of MySQL instances with name: $instanceName" Write-Host "##octopus[stderr-progress]" $Names=(& gcloud sql instances list --project=$projectName --filter="name=$instanceName" --format="get(name)" --quiet) -join ", " Test-LastExit "gcloud sql instances list" $dbDoesntExist = $true if( -not ([string]::IsNullOrEmpty($Names))) { Write-Highlight "Found MySQL instance: $Names" $dbDoesntExist = $false } else { Write-Highlight "Found no mysql instance matching $instanceName" } Set-OctopusVariable -name "DatabaseDoesntExist" -value $dbDoesntExist ``` 5. 
Add another **Run a script** step with a variable run condition that runs only when `#{Octopus.Action[Check if MySQL instance exists].Output.DatabaseDoesntExist}` is true: ```powershell $zone = $OctopusParameters["GCP.Zone"] $projectName = $OctopusParameters["Project.GCP.ProjectName"] $instanceName = $OctopusParameters["Project.GCP.MySQL.InstanceName"] $machineTier = $OctopusParameters["Project.GCP.MySQL.MachineTier"] $storageType = $OctopusParameters["Project.GCP.MySQL.StorageType"] $storageAutoIncreaseLimit = $OctopusParameters["Project.GCP.MySQL.StorageIncreaseLimitInGB"] $vpcNetworkName = $OctopusParameters["GCP.Network.VPC.Default"] $rootPassword = $OctopusParameters["MySQL.Database.Admin.UserPassword"] Write-Host "Running gcloud beta sql instances create" Write-Host "##octopus[stderr-progress]" & gcloud beta sql instances create $instanceName ` --tier=$machineTier ` --root-password=$rootPassword ` --zone=$zone --project=$projectName ` --no-backup ` --network=$vpcNetworkName ` --storage-type=$storageType ` --storage-auto-increase ` --storage-auto-increase-limit=$storageAutoIncreaseLimit ` --no-assign-ip ` --quiet Test-LastExit "gcloud beta sql instances create" Write-Host "Completed creating mysql instance" ``` 6. Add project [variables](/docs/projects/variables) for use with the scripts. :::div{.hint} If you have a keen eye, you may have noticed that the script above uses the gcloud `beta` option to create the MySQL database server. This allows use of the `--network` flag, where you can specify the network in which to place the database server. This can be useful if you want to specify a [private IP address](https://cloud.google.com/sql/docs/mysql/configure-private-ip) for your server. ::: In just a few steps, you're able to create a MySQL database server on GCP. ## Samples We have a [Pattern - Rolling](https://oc.to/PatternRollingSamplesSpace) Space on our Samples instance of Octopus. 
You can sign in as `Guest` to take a look at this example and more runbooks in the `PetClinic Infrastructure` project. # Updating Linux Source: https://octopus.com/docs/runbooks/runbook-examples/routine/updating-linux.md Like all other operating systems, Linux needs updates and patches to keep it up-to-date and secure. With Runbooks, you can automate routine maintenance such as installing updates. Going one step further, you could also schedule this activity using a [scheduled runbook trigger](/docs/runbooks/scheduled-runbook-trigger). ## Create the runbook To create a runbook to perform updates on your Linux machines: 1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**. 1. Give the runbook a name and click **SAVE**. 1. Click **DEFINE YOUR RUNBOOK PROCESS**, and then click **ADD STEP**. 1. Click **Script**, and then select the **Run a Script** step. 1. Give the step a name. 1. Choose the **Execution Location** on which to run this step. 1. In the **Inline source code** section, select **Bash** and add the following code that matches your Linux distro: ```bash Ubuntu # Run update command sudo apt-get update 2>&1 # Check for error if [[ $? -ne 0 ]] then fail_step "apt-get update failed!" fi # List upgradable packages apt list --upgradable 2>&1 # Check for error if [[ $? -ne 0 ]] then fail_step "List update failed!" fi ``` ```bash CentOS/RHEL # Run update command sudo yum check-update 2>&1 # Check for error if [[ $? -ne 0 && $? -ne 100 ]] then fail_step "yum check update failed!" fi ``` This step will download a list of available updates and then display them. This step is split out from the actual update process so that you can place gates, such as approvals, between listing what is available for update and actually performing the update. 8. 
Repeat steps 3-7 above, adding the following code to perform the update in the **Inline source code** section that matches your Linux distro: ```bash Ubuntu # Perform upgrade sudo apt-get upgrade -y 2>&1 # Check for error if [[ $? -ne 0 ]] then fail_step "apt-get upgrade failed!" fi ``` ```bash CentOS/RHEL # Perform upgrade sudo yum update -y 2>&1 # Check for error if [[ $? -ne 0 ]] then fail_step "yum update failed!" fi ``` :::div{.info} You'll note the use of `2>&1`, which redirects the stderr stream to stdout. Bash writes diagnostic messages to stderr, which Octopus interprets as errors, so without the redirect your runbook would show a success with warnings message. The `if` statement checks whether an error was actually encountered and fails the step if it was. ::: ## Samples We have a [Target - Wildfly](https://oc.to/TargetWildflySamplePetClinic) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at this example and more runbooks in the `PetClinic` project. # Authentication Source: https://octopus.com/docs/security/authentication.md Octopus Deploy supports a range of Identity Providers (IdPs) and common authentication mechanisms out-of-the-box. ## Your octopus.com account (Octopus ID) Octopus ID allows you to log in using the same account that you use to sign in at Octopus.com. This allows you to manage who is able to access Octopus from within your organization and saves you time when moving between our website, your billing console, and your instance(s). - [Octopus ID](/docs/security/authentication/octopusid-authentication) ## Identity Provider-based (IdP) Authentication The list below contains Identity Provider-specific integrations. These can be used with Octopus Server or [Octopus Cloud](/docs/octopus-cloud). Please see our [authentication provider compatibility](/docs/security/authentication/auth-provider-compatibility) section for further information. 
Many of these are powered by OpenID Connect (OIDC), and therefore Octopus can support any OIDC-compliant IdP. The Octopus Okta Authentication provider offers the most flexibility in configuration for generic IdP use. - [Microsoft Entra ID Authentication](/docs/security/authentication/azure-ad-authentication) - [Okta Authentication](/docs/security/authentication/okta-authentication) - [Google Workspace Authentication](/docs/security/authentication/googleapps-authentication) - [OpenID Connect Authentication](/docs/security/authentication/oidc-authentication) ## Directory-based Authentication The list below contains Directory-based authentication mechanisms that are typically used with Octopus Server only. Please see our [authentication provider compatibility](/docs/security/authentication/auth-provider-compatibility) section for further information. - [Active Directory Authentication](/docs/security/authentication/active-directory) - [LDAP Authentication](/docs/security/authentication/ldap) ## Local Authentication The list below contains local authentication mechanisms that are convenient for evaluating, or for the initial configuration of, Octopus Server. We recommend customers use IdP or directory-based authentication where possible, as local authentication does not support password expiry, configurable lockout policies, or password history enforcement. - [Username and Password](/docs/security/authentication/username-password) - [Guest Login](/docs/security/authentication/guest-login) ## Configuring authentication providers You can use the Octopus Web Portal to configure authentication providers by navigating to **Configuration ➜ Settings**. Alternatively, you can configure authentication providers using the `Octopus.Server.exe configure` command-line interface. 
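As a sketch of the command-line route, enabling a provider generally means passing provider-specific flags to `configure`. The Okta flag names below are assumptions for illustration only — check the documentation page for your provider for the exact flags, and note the snippet composes and prints the command rather than running it:

```shell
# Illustrative only: flag names vary per provider and version — consult the
# documentation page for your provider before running anything like this.
CONFIGURE_CMD='Octopus.Server.exe configure --oktaIsEnabled=true --oktaIssuer=https://example.okta.com'

# Printed rather than executed, since this would normally run on the Octopus Server.
echo "$CONFIGURE_CMD"
```

Either route (Web Portal or command line) ends up writing the same provider settings, so pick whichever suits your automation.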
## Sign in for the first time If you're using the [Username and Password provider](/docs/security/authentication/username-password), you will need to invite your team to use Octopus and create and manage their user accounts manually. When a user signs in to Octopus for the first time using an external authentication provider, Octopus will automatically create a new user account for them as a convenience. If you prefer to control which users can access Octopus, you can disable auto user creation and manually invite users instead. - Learn about [managing users and teams](/docs/security/users-and-teams). - Learn about [auto user creation](/docs/security/authentication/auto-user-creation). ## Manage teams In Octopus, you can group your users into teams and use the role-based permission system to control what they can see and do. Learn about [managing users and teams](/docs/security/users-and-teams). You can manually manage the members of your teams, or you can configure certain external authentication providers to manage your teams for you automatically. - Learn about [automatically managing teams with Active Directory](/docs/security/authentication/active-directory). - Learn about [automatically managing teams with Microsoft Entra ID](/docs/security/authentication/azure-ad-authentication). - Learn about [automatically managing teams with Okta](/docs/security/authentication/okta-authentication). - Learn about [automatically managing teams with OpenID Connect](/docs/security/authentication/oidc-authentication). ## Auto login When using an external authentication provider, you can configure Octopus to work in one of two ways: 1. Make the user click a button on the Octopus login screen. 2. Automatically sign the user in by redirecting to the external identity provider. Auto login is **disabled by default**, and you can enable it in **Configuration ➜ Settings ➜ Authentication ➜ Auto Login**. 
Note that even when enabled, **this functionality is only active when there is a single, non-forms-based authentication provider enabled**. If multiple providers are enabled (which includes Guest access being enabled), this setting is overridden. ### Auto login and Active Directory When using the Active Directory provider, auto login will only be active when the **Configuration ➜ Settings ➜ Active Directory ➜ Allow Forms Authentication For Domain Users** setting is **false**. ## Associating users with multiple external identities In versions up to 3.5, only a single authentication provider could be enabled at a time (either Domain or UsernamePassword). In that scenario, users were managed based on the currently enabled provider, and switching providers meant re-configuring users. With 3.5 came the ability to have multiple authentication providers enabled simultaneously, and as such, user management has been adjusted to be provider-agnostic. What does that mean? Let's consider an example scenario: we have UsernamePassword enabled, we create some users, and we set their email addresses to their Active Directory domain email addresses. The users can now log in with the username and password stored against their user record. If we now enable the Active Directory authentication provider, then the users can authenticate using either their original username and password, or a username of user@domain or domain\user along with their domain password, or the Integrated authentication button. In the first scenario they are actually logging in via the UsernamePassword provider; in the latter two scenarios they are using the Active Directory provider; but in all cases they end up logged in as the same user (this is the driver behind the fallback checks described in the next section). This scenario would work equally with Microsoft Entra ID or Google Workspace in place of Active Directory. 
You can also specify the details for multiple logins for each user. For example, you could specify that a user can log in as a specific UPN/SamAccountName from Active Directory, or that they could log in using a specific account/email address using Google Workspace. Whichever option is actually used to log in, Octopus will identify them as the same user. ### Matching external identities to Octopus users {#matching-external-identities} When someone signs in to Octopus using an external authentication provider, Octopus will try to find their user account by looking for matching identifiers. It starts by looking for a matching identifier from the external authentication provider, and will eventually fall back to matching on email address. ## Changing authentication providers In some circumstances you may want to move from one authentication provider to another. The best way to do this is to have a period of time where you enable both the new and old authentication providers. 1. Make sure all your existing user accounts in Octopus are configured with the email address for the new authentication provider. This is how Octopus will recognize the new external identity and match it to the existing Octopus user account. 2. Enable the new authentication provider and configure it correctly. 3. Test the new authentication provider, making sure it correctly matches your existing users with their existing Octopus user accounts. 4. Disable the old authentication provider. ## Session management User sessions can be managed in two ways with Octopus: 1. Session cookies, which are persisted by the browser after a successful login and then sent with every subsequent HTTP request. 2. API keys, which are a shared secret that identifies the user, and must be sent with every HTTP request. Session cookies are used for interactive sessions regardless of the authentication provider used to identify the user. 
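The API-key style of authentication can be sketched as follows. The server URL and key are placeholders, and the `curl` command is printed rather than executed so you can see the shape of a request; the `X-Octopus-ApiKey` header is how the key accompanies each call:

```shell
# Placeholder values — substitute your own Octopus Server URL and API key.
OCTOPUS_URL="https://octopus.example.com"
OCTOPUS_API_KEY="API-XXXXXXXXXXXXXXXXXXXXXXXXXX"

# Unlike a session cookie issued after an interactive login, the API key is
# supplied explicitly with every request. Printed here rather than executed.
echo curl -s -H "X-Octopus-ApiKey: ${OCTOPUS_API_KEY}" "${OCTOPUS_URL}/api/users/me"
```

This is why API keys suit scripts and CI servers, while session cookies suit people using the Web Portal.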
### Revoking access to Octopus with external authentication providers Octopus uses the external identity provider only to initially verify the user's identity, and then returns a session cookie to the browser. When you disable a user in your external identity provider, this will prevent that user from signing in to Octopus using that authentication provider. However, if the user already has an active session with Octopus, that session will stay active until the cookie expires. If you want to revoke access to Octopus immediately, disable the user in Octopus as well as in the external identity provider. # Troubleshooting authentication problems Source: https://octopus.com/docs/security/authentication/troubleshooting-authentication-problems.md We make every reasonable effort to keep Octopus Deploy secure by enabling you to use the best [authentication provider](/docs/security/authentication) for your organization. This guide will help you troubleshoot any problems you may encounter when signing in to the Octopus Deploy portal. ## No authentication providers enabled If you disable all of your authentication providers you will see a message like this when you attempt to load the Octopus portal: `There are no authentication providers enabled.` You will need to enable at least one of the authentication providers in order to sign in. ## Octopus authentication cookie Once you have proven your identity to Octopus Server using one of the supported [authentication providers](/docs/security/authentication), the Octopus Server will issue a cookie so your web browser can make secure requests on your behalf. The following messages may indicate a problem with your browser, or your network, and the Octopus authentication cookie: `The sign in succeeded but we failed to get the resultant permissions for this user account. This can happen if the Octopus authentication cookie is blocked.` This can happen for quite a number of reasons: 1. Your web browser does not support cookies. 
Configure your browser to accept cookies from your Octopus Server. You may need to ask your systems administrator for help with this. 1. The time is incorrect on your computer, or the time is incorrect on the Octopus Server. This can cause your authentication cookies to expire and become unusable. Correct the time and configure your computers to automatically synchronize their time from a time server. 1. You are using Chrome and have not configured your Octopus Server to use HTTPS. Chrome has started to consider websites served over `http://` as unsafe and will refuse to accept cookies from those unsafe sites. [Configure your Octopus Server to use HTTPS](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https) instead of HTTP. [Learn more about Chrome and the move toward a more secure web](https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html). 1. You are hosting Octopus Server on the same domain as other applications. One of the other applications may be issuing a malformed cookie causing the Octopus authentication cookies to be misinterpreted. Move Octopus Server to a different domain to isolate it from the other applications, or stop the other applications from issuing malformed cookies. See [this GitHub Issue](https://github.com/OctopusDeploy/Issues/issues/2343) for more details. ## Octopus anti-forgery token Octopus Server prevents Cross-Site Request Forgery (CSRF) using an anti-forgery token, which requires support for cookies. The following messages may indicate a problem with your browser, or your network, and the Octopus anti-forgery cookie: `A required anti-forgery token was not supplied or was invalid.` See our [detailed troubleshooting guide](/docs/security/cve/csrf-and-octopus-deploy) for solving problems with anti-forgery validation. 
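Conceptually, anti-forgery validation succeeds only when the token stored in the browser's cookie matches the one echoed back with the request; a forged cross-site request cannot read the cookie, so it cannot supply a matching token. The following is a rough conceptual sketch, not Octopus's actual implementation:

```shell
# The server issues a random token that the browser stores in a cookie.
TOKEN=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')

# A legitimate page echoes the same token back with the request; a forged
# cross-site request cannot read the cookie, so its token would not match.
COOKIE_TOKEN="$TOKEN"
REQUEST_TOKEN="$TOKEN"

if [ "$COOKIE_TOKEN" = "$REQUEST_TOKEN" ]; then
  echo "request accepted"
else
  echo "A required anti-forgery token was not supplied or was invalid."
fi
```

This is also why the anti-forgery check depends on cookies working: if the cookie is blocked, the server has nothing to compare the request's token against.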
## Active Directory If you are using Active Directory please refer to our [detailed troubleshooting guide](/docs/security/authentication/active-directory/troubleshooting-active-directory-integration). ## External authentication providers If you are using one of the other external authentication providers you may see a message like these: - `User login failed: Missing State Hash Cookie.` - `User login failed: Missing Nonce Hash Cookie.` This can happen for quite a number of reasons: 1. Your web browser does not support cookies. Configure your browser to accept cookies from your Octopus Server. You may need to ask your systems administrator for help with this. 1. The time is incorrect on your computer, or your external authentication provider. This can cause your authentication cookies to expire and become unusable. Correct the time and configure your computers to automatically synchronize their time from a time server. 1. You are using Chrome and have not configured your external authentication provider to use HTTPS. Chrome has started to consider websites served over `http://` as unsafe and will refuse to accept cookies from those unsafe sites. Configure your external authentication provider to use HTTPS instead of HTTP. [Learn more about Chrome and the move toward a more secure web](https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html). ### Getting help from us {#support} If none of the above steps work, you can try toggling the "Remember me on this computer" check-box. If you are still unable to troubleshoot the issue, please get in contact with our [support team](https://octopus.com/support) and send along the following details (feel free to ignore points if they don't apply): a. Which browser and version are you using? (Help > About in your browser is the best place to get this information) b. Does the same thing happen with other browsers, like Internet Explorer, Google Chrome, Firefox? c. 
Does the same thing happen for other people/users? d. Does the same thing happen when you access Octopus Deploy over another network, like from home or over your cellular network? e. Does the same thing happen if you use InPrivate/Incognito mode in your browser? f. Does the same thing happen after clearing all browser data for the Octopus Server (including cookies, history, local data, stored credentials)? g. Have you used any other versions of Octopus Deploy in this browser before? h. Do you have other web applications hosted on the same server? i. Do you have other web applications hosted on the same domain? (for example: `octopus.mycompany.com` and `myapp.mycompany.com`) j. Do you have any intermediary network devices (like proxies or web application firewalls) which may be stripping custom HTTP headers or cookies from your requests? k. Do you have Octopus Deploy running inside a Virtual Machine? l. Please [record the problem occurring in your web browser](/docs/support/record-a-problem-with-your-browser) and send the recording to us for analysis. Please record the following steps: Sign out of Octopus Deploy, sign back in again, and then try to do the action that fails. # Data encryption Source: https://octopus.com/docs/security/data-encryption.md This section focuses on securing data in the [Octopus database](/docs/administration/data), [backup files](/docs/administration/data/backup-and-restore/), and other settings in the registry and on disk. For information on how Octopus secures data between Octopus and Tentacles, see [Octopus - Tentacle communication](/docs/security/octopus-tentacle-communication). When an Octopus Server is installed, we generate a special key used for encryption, called the **Master Key**. The Master Key is then encrypted asymmetrically, using [Windows Data Protection](https://learn.microsoft.com/en-us/previous-versions/ms995355(v=msdn.10)), and stored in the Octopus configuration file. 
The Master Key is then used along with [AES-256](http://en.wikipedia.org/wiki/Advanced_Encryption_Standard) to encrypt certain sensitive data in the Octopus database, including: :::div{.hint} Octopus Server 2024.4 and newer use AES-256 by default but support AES-128 for compatibility. Previous versions use AES-128. ::: - [Sensitive variables](/docs/projects/variables/sensitive-variables). - Private keys used for [Octopus/Tentacle](/docs/security/octopus-tentacle-communication/) communication, and for authenticating with [Azure](/docs/infrastructure/accounts/azure/) and [SSH endpoints](/docs/infrastructure/deployment-targets/linux/ssh-target). - Credentials used to authenticate with [SSH](/docs/infrastructure/accounts/ssh-key-pair/) (for username/password auth) and [external NuGet feeds](/docs/packaging-applications/package-repositories). The practical impact of this is: - While most data in the database is plain text, sensitive data like the examples above is encrypted. - The Master Key used to encrypt and decrypt this data is itself encrypted by Windows, using a private key known only by Windows. - If an attacker has access to your Octopus database backup file, but they aren't on the Octopus Server and don't know the Master Key, they won't be able to decrypt the database or other settings. :::div{.problem} **Warning** Without keeping a record of your Master Key, you won't be able to make use of your Octopus database backups, since there is no way to decrypt these sensitive values. ::: ## Your Master Key {#your-master-key} When Octopus is installed, it generates a random string which will be used as the Master Key. You will need to know your Master Key if you ever hope to restore an Octopus backup on another server. ### Getting the Master Key from the Octopus Manager {#getting-key-from-octopus-manager} 1. Open the **Octopus Manager** from the start menu/start screen. 2. Click **View Master Key**. 3. 
Click **Save** to save the Master Key to a text file, or click **Copy to clipboard** and then paste the Master Key into a text editor or a secure enterprise password manager, and save it. ### Getting the Master Key from PowerShell {#getting-key-from-powershell} Depending on the version of Octopus Server you are using, you may need to parse the output slightly differently:
**Using text**

```powershell
$MasterKey = .\Octopus.Server.exe show-master-key
```
**Using JSON**

```powershell
$MasterKey = (.\Octopus.Server.exe show-master-key --format=json | ConvertFrom-Json).MasterKey
```
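Whichever method you use, it's worth sanity-checking the recorded key before relying on it in a disaster-recovery plan. The sketch below is a minimal Bash illustration, using a generated stand-in value rather than a real key, that simply confirms the stored text decodes as valid Base64 key material:

```bash
# Stand-in only: a real value comes from "Octopus.Server.exe show-master-key".
MASTER_KEY=$(head -c 16 /dev/urandom | base64)

# A recorded Master Key should round-trip through Base64 without errors;
# 16 decoded bytes corresponds to AES-128 key material, 32 to AES-256.
KEY_BYTES=$(printf '%s' "$MASTER_KEY" | base64 -d | wc -c)
echo "Decoded key length: ${KEY_BYTES} bytes"
```

If the decode fails or the length is unexpected, the key was likely truncated or corrupted when it was copied, and your backups would be unrecoverable with it.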
# Deploying to a team tenant Source: https://octopus.com/docs/tenants/guides/multi-tenant-teams/deploying-team-tenant.md Scoping the Teams to their respective tenants gives Teams Avengers and Radical the autonomy to deploy Octo Pet Shop without interfering with each other. ## Scoped team dashboard With Team Avengers scoped to their tenant, a developer's dashboard will only show the tenant and environments they have access to. Since the `OctoPetShop-Team-Avengers` tenant is scoped specifically to Development, Development is all the team sees. :::figure ![](/docs/img/tenants/guides/multi-tenant-teams/images/team-avengers-dashboard.png) ::: ## Scoped team creating a release Developers for Team Avengers can create a release, but can only deploy to their own tenant. When deploying to Development, the OctoPetShop-Team-Avengers tenant is automatically selected. :::figure ![](/docs/img/tenants/guides/multi-tenant-teams/images/team-avengers-deploy.png) ::: Deployments outside your team's environment scope will fail. Because the `OctoPetShop-Team-Avengers` tenant is only scoped to Development, attempting to deploy to Test results in a missing resource and a disabled **Deploy** button. :::figure ![](/docs/img/tenants/guides/multi-tenant-teams/images/team-avengers-deploy-to-test.png) ::: # Deploying before the concurrency tag is changed Source: https://octopus.com/docs/tenants/guides/tenants-sharing-machine-targets/deploying-before-concurrency-tag.md If we deploy a release to all tenants at the same time, we see that all tasks are running concurrently. This will depend on your task cap and the number of other tasks running at the same time. :::figure ![](/docs/img/tenants/guides/tenants-sharing-machine-targets/all-groups-concurrent-in-progress.png) ::: Once the deployments are complete, we can see that each of the deployments took 2-3 minutes.
Since they were running concurrently, the total time for the deployment was a little over 3 minutes. :::figure ![](/docs/img/tenants/guides/tenants-sharing-machine-targets/all-groups-concurrent-complete.png) ::: If we look at one of the specific task logs, we can see that each step in the deployment to Group 1 - Tenant E had to wait for one or more other tasks to finish before it could start. :::figure ![](/docs/img/tenants/guides/tenants-sharing-machine-targets/deployment-details-concurrent.png) ::: The time required to complete a deployment in this scenario will grow based on the number of steps targeting the shared infrastructure and the number of tenants in that group being deployed at once. It can also cause tasks to queue for longer than expected, because all of the running tasks consume part of the task cap. If you have a task cap of 20, and three infrastructure groups that each host 50 tenants, the tasks for one group can cause the tasks for the other two groups to wait in the queue for quite a while. To remedy this, we can set the `Octopus.Task.ConcurrencyTag` system variable. # Tenant infrastructure Source: https://octopus.com/docs/tenants/tenant-infrastructure.md The hosting model for your infrastructure with tenants will vary depending on your application, customers, and sales model. Here we'll cover two of the most common implementations: - [Dedicated hosting](#dedicated-hosting): You have dedicated deployment targets for each customer. - [Shared hosting](#shared-hosting): You create farms or pools of servers to host all of your customers, achieving higher density. You can design and implement both **dedicated** and **shared** multi-tenant hosting models in Octopus using [environments](/docs/infrastructure/environments/), [deployment targets](/docs/infrastructure/), and [tenant tags](/docs/tenants/tenant-tags).
## Tenanted and untenanted deployments {#tenanted-and-untenanted-deploys} Although we focus on [tenanted deployments](https://octopus.com/use-case/tenanted-deployments) in this section, untenanted deployments deserve some explanation with regard to hosting. Untenanted deployments provide a way for you to start introducing tenants into your existing Octopus configuration. An untenanted deployment, a deployment to an environment *without* a tenant, is the default in Octopus. Octopus decides which deployment targets to include in a deployment like this: - **Tenanted deployments** will use **matching tenanted deployment targets**. - **Untenanted deployments** will only use **untenanted deployment targets**. Learn more about the differences between [tenanted and untenanted deployments](/docs/tenants/#tenanted-and-untenanted-deployments). ## Configuring targets for tenanted deployments {#configuring-targets-tenanted-deploy} By default, deployment targets in Octopus Deploy aren't configured for tenanted deployments. To configure a target for tenanted deployments, select **Deployment Targets** from the main navigation. :::figure ![](/docs/img/tenants/images/octopus-deployment-targets.png) ::: Click on the deployment target you wish to configure for tenanted deployments. In the **Restrictions ➜ Tenanted Deployments** section, you can choose the kinds of deployments the target can be involved in: - **Exclude from tenanted deployments** (default) - the deployment target will never be included in tenanted deployments. - **Include only in tenanted deployments** - the deployment target will only be included in deployments to the associated tenants. It will be excluded from untenanted deployments. - **Include in both tenanted and untenanted deployments** - the deployment target will be included in untenanted deployments, and deployments to the associated tenants.
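These restriction settings can also be applied when registering a target from the Octopus CLI rather than the Web UI, using the `--tenanted-mode` and `--tenant-tag` flags on target-creation commands. The sketch below is illustrative only (target name, environment, role, and tag are placeholders, and the mode value is an assumption; the CLI default corresponds to untenanted):

```bash
octopus deployment-target kubernetes create \
  --name "k8s-shared-farm-1" \
  --environment "Production" \
  --role "web-server" \
  --tenanted-mode tenanted \
  --tenant-tag "Hosting/Shared-Farm-1" \
  --no-prompt
```

Registering a target with an explicit mode like this avoids having to revisit the Restrictions section in the Web UI after the fact.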
:::figure ![](/docs/img/tenants/images/target-restrictions-tenant-deployments.png) ::: ### Choose tenants for target {#choose-tenants-for-target} To choose the tenants to associate with a deployment target, navigate to the **Restrictions ➜ Associated Tenants** section of the deployment target. You can select individual tenants that are allowed to deploy to the target, or you can choose from any of the configured [tenant tags](/docs/tenants/tenant-tags). :::figure ![](/docs/img/tenants/images/target-restrictions-associated-tenants.png) ::: :::div{.hint} We generally recommend keeping tenanted and untenanted deployment targets separate, particularly in Production. You could use the same deployment targets for other environments, but it's best to avoid this. ::: ## Dedicated hosting {#dedicated-hosting} Dedicated hosting ensures the applications for some tenants are completely isolated from those of other tenants. You may want to do this to provide security or performance guarantees that would be problematic in a shared hosting model. To implement dedicated hosting, you need to create the dedicated servers and indicate which tenant will be hosted on those servers. ### Step 1: Configure the dedicated deployment targets {#configure-dedicated-deployment-targets} To configure deployment targets as dedicated hosts for one or more tenants: 1. Select **Deployment Targets** from the main navigation and find the deployment targets that will be used to host the applications for the tenant. 2. Configure each deployment target as a dedicated host for the tenant: :::figure ![](/docs/img/tenants/images/multi-tenant-dedicated-deployment-target.png) ::: ### Step 2: Deploy {#dedicated-hosting-deploy} The final step is to deploy a connected project for this tenant and see the results. You will see how Octopus includes these specific deployment targets in that tenant's deployments, creating an isolated hosting environment for that tenant.
:::figure ![](/docs/img/tenants/images/multi-tenant-deployment-dedicated.png) ::: ## Shared hosting {#shared-hosting} Shared hosting allows you to host the applications of multiple tenants on the same machines to reduce hosting costs by increasing density. To implement shared hosting, you need to create a shared server farm and indicate which tenants will be hosted on that farm. This is very similar to the dedicated hosting scenario. Instead of choosing a single tenant, you use a tenant tag to indicate these servers will be hosting applications for multiple tenants. ### Step 1: Create a hosting tag set {#shared-hosting-create-tagset} Firstly let's create a tag set to identify which tenants should be hosted on which shared server farms: 1. Select **Tenant Tag Sets** from the main navigation and create a new tag set called **Hosting**. 2. Add a tag called **Shared-Farm-1** and set the color to green to help identify tenants on shared hosting more quickly: :::figure ![](/docs/img/tenants/images/multi-tenant-shared-tag.png) ::: ### Step 2: Configure the shared server farm {#shared-hosting-configure-shared-farm} Now let's configure some shared servers in a farm: 1. Go to **Infrastructure ➜ Deployment Targets** and find the deployment targets that will be used to host the applications for these tenants. 2. Select the **Shared-Farm-1** tag: :::figure ![](/docs/img/tenants/images/multi-tenant-infra.png) ::: These deployment targets will now be included in deployments for any tenants matching this filter; that is, any tenants tagged with **Hosting/Shared-Farm-1**. ### Step 3: Configure the Tenants to deploy onto the shared server farm {#shared-hosting-configure-tenants} Now let's select some tenants that should be hosted on **Shared-Farm-1**. 
Create some new tenants (or find existing ones) and tag them with **Shared-Farm-1**: :::figure ![](/docs/img/tenants/images/multi-tenant-shared-server.png) ::: ### Step 4: Deploy {#shared-hosting-deploy} The final step is to deploy a connected project for one of these tenants and see the results. You will see how Octopus includes any matching deployment targets in that tenant's deployments, creating a shared hosting environment for your tenants. # octopus deployment-target delete Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-delete.md Delete a deployment target in Octopus Deploy ```text Usage: octopus deployment-target delete { | } [flags] Aliases: delete, del, rm, remove Flags: -y, --confirm Don't ask for confirmation before deleting the deployment target. Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus deployment-target delete octopus deployment-target rm ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target kubernetes Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-kubernetes.md Manage Kubernetes deployment targets in Octopus Deploy ```text Usage: octopus deployment-target kubernetes [command] Aliases: kubernetes, k8s Available Commands: create Create a Kubernetes deployment target help Help about any command list List Kubernetes deployment targets view View a Kubernetes deployment target Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus deployment-target kubernetes [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target kubernetes create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target kubernetes create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-kubernetes-create.md Create a Kubernetes deployment target in Octopus Deploy ```text Usage: octopus deployment-target kubernetes create [flags] Aliases: create, new Flags: --account string The name of the account to use for authentication. --aks-cluster-name string The AKS cluster name. --aks-resource-group-name string The AKS resource group name.
--aks-use-admin-credentials Enabling this option passes the --admin flag to az aks get-credentials. This is useful for AKS clusters with Azure Active Directory integration. --auth-type string The authentication type to use. --certificate string Name of Certificate in Octopus Deploy. --certificate-path string The path to the CA certificate of the cluster. The default value usually is: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt --client-certificate string Name of client certificate in Octopus Deploy --cluster-url string Kubernetes cluster URL. Must be an absolute URL. e.g. https://kubernetes.example.com --docker-container-registry string The feed of the docker container registry to use if running health check in a container on the worker --docker-image-flags string The image (including the tag) to use from the container registry --eks-assume-service-role Assume a different AWS service role. --eks-assumed-role-arn string ARN of assumed AWS service role. --eks-assumed-role-external-id string AWS assumed role external ID. --eks-assumed-role-session-duration int AWS assumed role session duration in seconds. (defaults to 3600 seconds, 1 hour) --eks-assumed-role-session-name string Session name of assumed AWS service role. --eks-cluster-name string AWS EKS Cluster Name --eks-use-service-role Execute using the AWS service role for an EC2 instance. --environment strings Choose at least one environment for the deployment target. --gke-cluster-name string GKE Cluster Name. --gke-impersonate-service-account Impersonate service accounts. --gke-project string GKE Project. --gke-region string GKE Region. --gke-service-account-emails string Service Account Email. --gke-use-vm-service-account When running in a Compute Engine virtual machine, use the associated VM service account. --gke-zone string GKE Zone. -n, --name string A short, memorable, unique name for this deployment target. --namespace string Kubernetes Namespace. 
--pod-token-path string The path to the token of the pod service account. The default value usually is: /var/run/secrets/kubernetes.io/serviceaccount/token --role strings Choose at least one role that this deployment target will provide (use --tag for tag sets with validation). --skip-tls-verification Skip the verification of the cluster certificate. This can only be provided if no cluster certificate is specified. --tag strings Target tags in canonical format (TagSetName/TagName). --tenant strings Associate the deployment target with tenants --tenant-tag strings Associate the deployment target with tenant tags, should be in the format 'tag set name/tag name' --tenanted-mode string Choose the kind of deployments where this deployment target should be included. Default is 'untenanted' -w, --web Open in web browser --worker-pool string The worker pool for the deployment target, only required if not using the default worker pool Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus deployment-target kubernetes create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target kubernetes list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-kubernetes-list.md List Kubernetes deployment targets in Octopus Deploy ```text Usage: octopus deployment-target kubernetes list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target kubernetes list octopus deployment-target kubernetes ls ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Private cloud migration Source: https://octopus.com/docs/administration/private-cloud-migration.md Teams looking to migrate their on-premises Octopus instance to private cloud hosting must take multiple factors into account to ensure a smooth migration. This document provides a guide to ensure teams successfully migrate their instance with minimal interruption to their deployments. ## Checklist There are a number of factors to consider when migrating an on-premises instance to cloud hosting: * What version of Octopus are you running? * Who is using the on-premises Octopus instance? * How long can Octopus be offline before it interrupts critical operations? * Who can test each project to ensure it deploys correctly after the migration?
* Do you require a continuous audit history after the migration? * How do users authenticate with Octopus? * Where have tentacles been installed? * What kind of tentacles have been configured (polling or listening)? * Are there any firewall rules restricting traffic to tentacles? * Do you want to host Octopus in a Linux container on a platform like Kubernetes or ECS? * Do you have a direct network connection from your cloud provider to your on-premises infrastructure? * Where are your packages stored? * Do you have any CI servers integrated with Octopus? * Do you have any subscriptions configured in Octopus? * Do you have any external tools or scripts that call the Octopus API? * Do you have external scripts or CI servers using API keys? ### What version of Octopus are you running? Some of the migration options require a relatively recent version of Octopus to be installed. For this reason, the first step in any migration is to update the on-premises instance to the latest version of Octopus. ### Who is using the on-premises Octopus instance? Most teams looking to move their on-premises Octopus instance to a private cloud tend to have multiple teams deploying multiple projects through Octopus. How the migration is performed is largely dictated by the requirements of each of the impacted teams. It is therefore critical to understand which teams are using Octopus and what applications they are deploying. Having this information allows you to answer the next questions. ### How long can Octopus be offline before it interrupts critical operations? Some teams may deploy relatively infrequently, perhaps once per month. Other teams may perform multiple deployments per day. Understanding the window in which Octopus can be unavailable without interrupting critical operations is a large factor in determining the migration path. ### Who can test each project to ensure it deploys correctly after the migration?
Once the migration is complete, each team using Octopus must ensure that their deployments continue to work correctly. Either a member of each team using Octopus must test the migrated instance to ensure their projects work correctly, or the team performing the migration must be provided with guidance on how to test the migrated projects. The success or failure of these tests determines if the migration must be rolled back or can continue. ### Do you require a continuous audit history after the migration? The requirement to have a complete audit history available in the migrated instance will limit the migration paths you can take. For example, the [Import/Export feature](/docs/projects/export-import) does not copy audit logs from the on-premises Octopus instance to the new cloud hosted instance. ### How do users authenticate with Octopus? Octopus can either maintain users and teams in its own internal database or delegate authentication to an external provider such as Active Directory. Whether users and teams managed by Octopus are migrated or manually recreated depends on the migration path. Also be aware that Octopus hosted in a [Linux container](/docs/installation/octopus-server-linux-container) has some limitations with the supported authentication providers compared to the Windows version. ### Where have tentacles been installed? All tentacles will need to connect to or receive connections from the new Octopus instance. Understanding where tentacles are installed allows you to answer the next questions. ### What kind of tentacles have been configured (polling or listening)? Octopus tentacles can be configured in either listening or polling mode. Listening tentacles expose an open network port that the Octopus server uses to establish an inbound connection in order to initiate a deployment. Listening tentacles rely on the certificate presented by the Octopus server to trust inbound connections.
This means listening tentacles must re-register with fresh Octopus instances in order to trust the inbound connections, while Octopus instances configured against a restored database retain the certificates and can establish connections to listening tentacles. Polling tentacles establish an outbound connection to the Octopus server to poll it for any pending deployments. Polling tentacles can be configured to poll multiple Octopus instances, allowing a single polling tentacle to be shared by many Octopus instances. ### Are there any firewall rules restricting traffic to tentacles? Machines hosting tentacles may have firewall rules that limit incoming and outgoing traffic. If these firewall rules exist, they must be updated to allow traffic to and from the migrated Octopus instance. ### Do you want to host Octopus in a Linux container on a platform like Kubernetes or ECS? Octopus was initially provided only as a Windows application. Today Octopus is also provided as a Linux OCI image. The hosted platform provided by Octopus runs the Linux OCI image in Kubernetes, so running Octopus as a Linux container is a well tested and supported solution. Teams may wish to migrate to the Linux version of Octopus when moving to the cloud. There are many benefits to doing so, including cheaper hosting costs and the option to host Octopus on platforms like Kubernetes or ECS. The Windows and Linux versions are mostly identical. However, there are some caveats to be aware of as documented [here](/docs/installation/octopus-server-linux-container). ### Do you have a direct network connection from your cloud provider to your on-premises infrastructure? Most cloud providers provide the ability to link on-premises networks to cloud based networks such that they both appear to belong to the same contiguous network. This allows the cloud based Octopus instance to continue to communicate with on-premises tentacles and targets with the same host names used by the on-premises Octopus instance.
However, if the cloud network is separate or otherwise segregated from the on-premises network, you may be required to switch from listening tentacles to polling tentacles, as polling tentacles can establish a secure outbound network connection over the public internet. ### Where are your packages stored? If you use an external package repository, both the on-premises and cloud hosted Octopus instances can continue to consume the same set of packages. However, if you use the built-in Octopus feed, the packages must be manually copied to the new cloud hosted Octopus instance. In addition, any CI servers pushing packages to the Octopus instance must be updated to push packages to the cloud Octopus instance. A sample script has been provided in the [Import/Export documentation](/docs/projects/export-import) to automate the process of copying packages. ### Do you have any CI servers integrated with Octopus? A typical deployment workflow has a CI server which builds deployment artifacts, pushes those artifacts to a package repository, and then initiates a deployment to a development environment in Octopus. The CI server must be updated to point to the new cloud hosted Octopus instance so any packages pushed to the Octopus built-in feed and plugins that create and deploy releases interact with the new instance. ### Do you have any subscriptions configured in Octopus? Subscriptions are web hooks called by Octopus in response to certain events. Any service configured to respond to subscription events must have a network connection to the cloud hosted instance. And, depending on the migration path used, the subscriptions may need to be manually recreated. ### Do you have any external tools or scripts that call the Octopus API? Octopus has a rich API that we encourage teams to use for advanced scenarios and management tasks. Any scripts written against the on-premises instance must be pointed to the cloud hosted instance. 
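As a concrete illustration of what needs to change, an external script that reads from the Octopus API usually embeds just a base URL and an API key, so repointing it at the migrated instance is a small edit. This is a hedged sketch assuming `curl`, an illustrative server URL, and the standard `X-Octopus-ApiKey` authentication header; `/api/spaces` stands in for whatever endpoint your script calls:

```bash
# Illustrative values: replace with your own server URL and a valid API key.
OCTOPUS_URL="https://octopus.cloud.example.com"
OCTOPUS_API_KEY="API-XXXXXXXXXXXXXXXX"

# Octopus API calls authenticate with the X-Octopus-ApiKey header, so
# repointing an external script is usually just a change of base URL
# (plus a regenerated key if the migration did not preserve accounts).
REQUEST_URL="${OCTOPUS_URL%/}/api/spaces"   # a representative read-only endpoint
echo "GET ${REQUEST_URL}"

# Uncomment to perform the call against a live instance:
# curl -s -H "X-Octopus-ApiKey: ${OCTOPUS_API_KEY}" "${REQUEST_URL}"
```

Keeping the server URL in a single variable (or environment variable) per script makes the cut-over a one-line change per integration.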
### Do you have external scripts or CI servers using API keys? API keys are the primary means with which external systems and scripts authenticate with the Octopus API. Depending on the migration path, these API keys may need to be regenerated. ## Migration paths There are three main paths available when migrating Octopus to a new instance: incremental, complete, and double complete. All have advantages and disadvantages. Which migration path you select is determined by the answers to the questions above. ### Complete migration A complete migration involves: 1. Placing the on-premises Octopus instance into maintenance mode. 1. Ensuring you have the master key. 1. Performing a full backup of the on-premises database. 1. Restoring the backup into the cloud based database. 1. Copying task logs to the cloud based file storage. 1. Copying built-in feed packages to the cloud based file storage. 1. Copying artifacts to the cloud based file storage. 1. Copying archived events to the cloud based file storage. 1. Installing Octopus on your chosen hosting platform (e.g. a virtual machine or container orchestration platform). 1. Pointing the cloud Octopus instance to the cloud based database. 1. Reindexing the packages in the built-in feed. 1. If the cloud Octopus instance has a new DNS name: 1. Reregistering polling tentacles to point to the cloud instance. 1. Pointing CI servers and external scripts to the cloud instance. 1. Updating firewall rules to allow the cloud instance to connect to listening tentacles. This process is documented in more detail under [Moving your Octopus components to other servers](/docs/administration/managing-infrastructure/moving-your-octopus). Choose a complete migration when: * There are few projects to test, or teams can collectively sign off the migration relatively quickly. * You require a complete audit history to be present on the cloud Octopus instance. 
* You have a large number of listening tentacles, as the cloud Octopus instance retains the certificates required to establish the inbound connections, allowing the existing listening tentacles to be reused without reregistering them. * You have a large number of API keys in use and do not wish to regenerate them. * You have a large number of subscriptions configured and you do not wish to reconfigure them. A complete migration may not be suitable when: * There are many projects to migrate, and any project may take longer to validate than the downtime tolerated by any other team, as a complete migration assumes everyone can start using the new instance relatively quickly. ### Incremental migration An incremental migration involves: 1. Installing the cloud hosted Octopus instance with a fresh database. 1. Using the [Import/Export feature](/docs/projects/export-import) to move individual projects to the cloud hosted instance. 1. Reregistering tentacles required by the imported project with the cloud hosted instance. 1. Copying packages used by the migrated project to the cloud hosted built-in feed. 1. Reindexing the built-in feed. 1. Updating CI servers and external scripts to point to the cloud instance. 1. Disabling the project on the on-premises instance after migration. Choose an incremental migration when: * Teams can only tolerate small downtime windows, as an incremental migration allows the on-premises instance to continue performing deployments as each team or project is migrated individually. * You have so many CI projects or external scripts interacting with Octopus that it is not feasible to migrate them all at once, as an incremental migration means you migrate only the external services relating to the single Octopus project or team being migrated. * You wish to limit the migration risk by limiting each migration step to a single project or team.
An incremental migration may not be suitable when: * You require the complete audit history to be present on the cloud instance, as the export/import feature does not migrate audit events. * You have a large number of Config-as-Code enabled projects, as the export/import feature does not export these projects. * You do not wish to reregister listening tentacles, as the new cloud instance has new certificates and will not be able to establish a connection to existing listening tentacles. * You have a large number of project triggers, as the export/import feature does not export triggers. * You have a large number of users and teams in the internal Octopus database, as these will have to be manually recreated. * You have a large number of active API keys and do not wish to regenerate them. ### Double complete migration A third option is to perform a complete migration but then treat the cloud instance as disposable. Once testing has been completed, the cloud instance is destroyed and a new complete migration is performed. This allows teams to switch to the cloud instance with a high degree of certainty that their projects will deploy correctly, while also ensuring that the cloud instance has all the recent configuration from the on-premises instance. Choose a double complete migration when: * You need all the features of a complete migration. * You are unable to validate a complete migration within the outage window tolerated by your teams. A double complete migration may not be suitable when: * You are unable to migrate all external services fast enough to satisfy the outage window tolerated by your teams. ## Conclusion There are many factors to consider when migrating an on-premises Octopus instance to the cloud. The technical requirements to perform such a migration are usually the easiest to implement. It is typically the functional requirements of the teams using Octopus that require the most careful consideration.
Every migration will have unique requirements, but this document highlights a number of common factors that must be taken into consideration when performing a cloud migration. # Reference architectures Source: https://octopus.com/docs/getting-started/reference-architectures.md Deployments are more than the sum of their parts, with well-architected deployment processes empowering DevOps teams to release and maintain high-quality software at high velocity. The reference architecture steps provided by the [community step template library](/docs/projects/community-step-templates) allow DevOps teams to quickly populate an existing Octopus space with examples of well-architected deployment projects, complete with all the supporting resources like environments, feeds, accounts, lifecycles, etc. ## Common prerequisites The reference architecture steps are typically run from a runbook. The runbook requires a small number of external resources to be defined: 1. A `Docker Container Registry` [feed](/docs/packaging-applications/package-repositories/guides/container-registries/docker-hub) called `Container Images` with the URL `https://index.docker.io`. This feed is used to access the [execution container for workers](/docs/projects/steps/execution-containers-for-workers) exposing a recent version of Terraform. 2. An [environment](/docs/infrastructure/environments) to execute runbooks in. This documentation assumes the environment is called `Admin`. 3. A [project](/docs/projects) to hold the runbooks. This documentation assumes the project is called `Reference Architecture`.
## Reference architecture steps - [AWS EKS](/docs/getting-started/reference-architectures/eks-reference-architecture) - [Azure Web Apps](/docs/getting-started/reference-architectures/webapp-reference-architecture) # Agent vs Agentless Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/agent-vs-agentless.md Generally speaking, the two options for communicating with remote machines are agent-based and agentless. Agent-based relies on an agent installed on the target, such as the [Octopus Tentacle](/docs/infrastructure/deployment-targets/tentacle). Agentless is a misnomer, as there is an agent pre-installed on the machine, specifically SSH for Linux machines and [Windows Remote Management (WinRM)](https://learn.microsoft.com/en-us/windows/win32/winrm/portal) for Windows. At Octopus, we prefer and recommend agent-based, using Octopus Tentacles. In this document, we compare the two approaches. :::div{.hint} Octopus supports both agent-based and agentless communications for Linux, via Tentacle and SSH, respectively. For Windows, Tentacle is required. WinRM is not supported. ::: ## Connectivity Model
**Tentacle:** Tentacle can operate in Listening or Polling communication modes. This avoids firewall headaches by allowing outbound-only connections from the targets.

**SSH:** Inbound over port 22. This is standard in most Linux environments.

**WinRM:** Inbound over ports 5985 (HTTP) or 5986 (HTTPS).
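Whichever model is in use, a deployment can only proceed if the relevant inbound port is reachable. As a quick, generic sanity check (a sketch, not an Octopus tool; the hostname below is illustrative), you can test TCP reachability like this:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A listening Tentacle defaults to TCP 10933; the hostname here is illustrative.
print(port_open("tentacle.example.internal", 10933))
```

The same check works for SSH (port 22) and WinRM (5985/5986); for a polling Tentacle, the connection is outbound from the target, so this check is unnecessary.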
## Authentication and Security

**Tentacle:** Mutual X.509 certificate authentication. Both the Octopus Server and the Tentacle generate their own X.509 certificates when they're installed. These are exchanged during the initial "trust" setup (the handshake). After that, each side verifies the other using the certificates before allowing communication. All communication between the Octopus Server and Tentacle is encrypted using TLS. There is no reliance on domain trust or OS accounts.

**SSH:** Uses the SSH protocol with public-key cryptography. Octopus Deploy proves its identity with a configured credential, either an SSH private key or a username and password.

**WinRM:** Typically uses HTTPS with TLS for encryption. Uses Windows authentication models: Kerberos, NTLM, or Basic authentication.
## Installation and Configuration

**Tentacle:** Requires installing a lightweight service (Windows or Linux). Upgrades can be automated from Octopus.

**SSH:** No additional agent required. Requires correct system configuration and credential management.

**WinRM:** No additional agent required. Requires correct system configuration and credential management.
## Summary Using the Tentacle agent comes with the upfront cost of installing the service on target machines, but this is offset by advantages, including: - A more flexible connectivity model that supports both listening and polling modes. - Strong security independent of domain trust or OS credentials, making it less likely to be misconfigured. # Clustered Listening Tentacles Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles.md You can configure a pair of Octopus Tentacles in an active/passive failover cluster on shared storage with the **Failover Cluster Manager**. You may need to do this if you're running an application in a failover cluster and would like to use Octopus Deploy to deploy your application to it irrespective of the failover state. In this scenario, your Octopus Server will always be communicating with the Octopus Tentacle that is installed on the active node within the failover cluster. This approach has been tested on Windows Server 2016. :::div{.warning} **Shared storage considerations** It is not possible to store the `tentacle.config` file in shared storage because the Tentacle.Certificate component gets encrypted using a node's machine-specific key. If you attempt to store the config file in shared storage, you will encounter the error `Key not valid for use in specified state` upon switching to a new active node. This occurs because the new active node is not able to decrypt the private key, resulting in the Tentacle service failing to start. ::: ## Requirements {#ClusteringTentacles-Requirements} This guide assumes you already have the following setup: - An Active Directory domain and a local DNS server. - An Octopus Server (this does not need to be joined to the domain). - A two-node active/passive Windows cluster where each node is joined to the domain. - A local IP address available for the cluster. 
- Shared storage configured for the cluster (in this example we are using `E:\`) ## Installation {#ClusteringTentacles-Installation} :::div{.warning} **Installing Tentacles with shared storage** This guide implements shared storage using an iSCSI target with Multipath IO configured on the Tentacle servers. In this scenario, it is best to avoid accessing the same iSCSI volume from two different hosts at the same time, as doing so may result in corrupt data on the iSCSI volume. ::: On the first node, check that the shared drive is mounted and note down the drive letter. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/shared-disk-properties.jpg) ::: Run through the Tentacle MSI Installer to install Tentacle Manager to its default location `C:\Program Files\Octopus Deploy\Tentacle`. Do not click "get started" in the Tentacle Manager; instead, install the Octopus Tentacle instance from an Administrator command prompt by opening `cmd` and running these commands (replacing relevant values as appropriate): ```batch cd "C:\Program Files\Octopus Deploy\Tentacle\" Tentacle.exe create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle.config" Tentacle.exe configure --instance "Tentacle" --app ":\Octopus\Applications" --home ":\Octopus\Home" --port "10933" --noListen "False" Tentacle.exe configure --instance "Tentacle" --trust "" netsh advfirewall firewall add rule name="Octopus Deploy Tentacle" dir=in action=allow protocol=TCP localport=10933 Tentacle.exe service --instance "Tentacle" --install --stop --start ``` In the script, we: - Installed the Tentacle instance using the default instance name of `Tentacle` and made sure the `Tentacle.config` file was installed into the default location of `C:\Octopus\Tentacle.config`. - Configured the new instance to listen on TCP Port `10933` and set the Application and Home directories to our shared storage drive. 
- Configured the Tentacle to trust the Octopus Server holding a certificate which matches the specified certificate thumbprint. - Ensured the Windows Firewall has a rule configured to allow incoming connections on TCP Port `10933`, allowing the Octopus Server to talk to our new Tentacle. Using the Tentacle Manager, stop the Octopus Tentacle service which was just installed on the first node and take the shared disk offline in Windows Disk Management. Now go to the second Tentacle server in the active/passive cluster and bring the same disk online, repeating the steps which were just performed on the first node to install the Octopus Tentacle service. This time, keep the Octopus Tentacle service started and ensure that the shared storage is still mounted so that the .pfx file can be exported from the Octopus Tentacle. ## Generate an Octopus Tentacle PFX file Open `cmd` as Administrator again and run these commands to generate a new private key from an Octopus Deploy Tentacle (replacing relevant values as appropriate). ```batch cd "C:\Program Files\Octopus Deploy\Tentacle\" Tentacle.exe new-certificate --export-pfx=":\TentacleClusterPrivateKey.pfx" --pfx-password="YOUR-PFX-PASSWORD" ``` ## Import the new Octopus Tentacle PFX file Now import the new .pfx file on the node where it was just generated. ```batch Tentacle.exe import-certificate --instance="Tentacle" --from-file=":\TentacleClusterPrivateKey.pfx" --pfx-password="YOUR-PFX-PASSWORD" Tentacle.exe service --instance="Tentacle" --stop --start ``` Now on the second node, stop the Tentacle service and take the shared storage offline the same way you have done before. This time go back to the first node in the cluster, bringing the shared storage drive back online, and start the Tentacle service. Then, import the .pfx file into the first node to ensure both nodes in the cluster hold the same private key. 
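To confirm that both nodes really do hold the same certificate, you can print each instance's thumbprint and compare the values. This verification sketch uses the Tentacle CLI's `show-thumbprint` command; run it on each node in turn while that node has the Tentacle service available:

```batch
cd "C:\Program Files\Octopus Deploy\Tentacle\"
Tentacle.exe show-thumbprint --instance "Tentacle"
```

If the thumbprints differ, repeat the `import-certificate` step on the node that does not match before continuing.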
Once both Tentacles are installed and configured, ensure that neither node has the Octopus Tentacle started and that the shared storage is brought offline. ## Configure a new clustered service {#ClusteringTentacles-NewCluster} Ensure each node that will be participating in the Tentacle Cluster is joined to the Active Directory Domain and has the **Failover Clustering** feature installed in Windows. For more information on installing the Failover Clustering feature in Windows, please see the [Microsoft Failover Clustering documentation](https://blogs.msdn.microsoft.com/clustering/2012/04/06/installing-the-failover-cluster-feature-and-tools-in-windows-server-2012/ "installing the failover cluster service feature and tools in Windows Server 2012"). Open the **Failover Cluster Manager** console on one of the nodes. If there is no cluster configured yet, you can right-click **Failover Cluster Manager** and select **New Cluster**. On the **Select Servers** page, enter the Fully Qualified Domain Name of each node that will be in this cluster. On the **Validation Warning** page, select `Yes, When I click Next, run configuration validation tests, and then return to the process of creating the cluster`. When the **Validate a Configuration Wizard** appears, select `Run all Tests` and click **Next**. After all validation tests complete successfully, you will be returned to the **Create Cluster Wizard** where the **Access Point for Administering the Cluster** page appears. At this point, choose an IP address that is on the same network as both Tentacles and a hostname that is 15 characters or less. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/configure-clusterhostname.jpg) ::: :::div{.warning} **Access point for administering the cluster** The IP address which you specify here is not going to be used by Octopus Server; instead, it is used for administering the cluster. ::: Now complete the wizard. 
## Adding Octopus Tentacle as a generic service cluster {#ClusteringTentacles-AddTentacleCluster} Right-click **Roles** and select **Configure Roles**, then highlight **Generic Service** and click **Next**. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/cluster-newrolewizard-servicetype.jpg) ::: Find and highlight the **OctopusDeploy Tentacle** service in the list of available services, then click **Next**. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/cluster-newrolewizard-selectservice.jpg) ::: Under **Client Access Point**, choose an appropriate NetBIOS name and IP address for this clustered role. Note down this IP address/DNS hostname; you will need it to add the Tentacle Cluster to your Octopus Server. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/cluster-newrolewizard-clientaccess.jpg) ::: Under **Select Storage**, choose the disk that is configured as shared storage. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/cluster-newrolewizard-storage.jpg) ::: Under **Replication Registry Settings**, add a new root registry key of "Software\Octopus". :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/cluster-newrolewizard-key.jpg) ::: Complete the wizard, then navigate to the roles view to ensure the Tentacle service is `running`. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/cluster-complete.jpg) ::: ## Connect Octopus Server to a clustered Tentacle {#ClusteringTentacles-ConnectOctopusServer} Log into the Octopus Portal and go to the **environments** page. Under the desired environment, click **Add Deployment Target**. 
For the target type, choose **Listening Tentacle**. For the hostname, enter the IP or DNS hostname you noted down earlier, then click "Discover". :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/server-discovertentacle.jpg) ::: Enter a display name in Octopus Deploy and give your new target a role. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/server-identifytarget.jpg) ::: Within a few minutes, your new Tentacle cluster will appear as healthy in the Octopus Server. :::figure ![](/docs/img/infrastructure/deployment-targets/tentacle/windows/clustered-listening-tentacles/images/server-targethealthy.jpg) ::: Congratulations! You have successfully configured an active/passive server cluster using Octopus Tentacles. # octopus deployment-target kubernetes view Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-kubernetes-view.md View a Kubernetes deployment target in Octopus Deploy ```text Usage: octopus deployment-target kubernetes view { | } [flags] Flags: -w, --web Open in web browser Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. 
::: ```bash octopus deployment-target kubernetes view 'target-name' ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # GitHub Actions Source: https://octopus.com/docs/packaging-applications/build-servers/github-actions.md Use [GitHub Actions](https://docs.github.com/en/actions/about-github-actions/understanding-github-actions) to orchestrate Octopus from your CI pipeline for a seamless CI/CD workflow. Integrating GitHub Actions with Octopus Deploy allows you to trigger events in Octopus (like creating a Release) based on events in GitHub (like pushing to main) for an effortless transition from CI to CD. ## Octopus Deploy Actions Octopus Deploy provides GitHub Actions which enable you to: - [Log into Octopus Deploy](https://github.com/marketplace/actions/login-to-octopus-deploy) - [Install Octopus CLI](https://github.com/marketplace/actions/install-octopus-cli) - [Create a Release](https://github.com/marketplace/actions/create-release-in-octopus-deploy) - [Deploy a Release](https://github.com/marketplace/actions/deploy-a-release-in-octopus-deploy) - [Deploy a Tenanted Release](https://github.com/marketplace/actions/deploy-a-tenanted-release-in-octopus-deploy) - [Run a Runbook](https://github.com/marketplace/actions/run-runbook-in-octopus-deploy) - [Push Build Information](https://github.com/marketplace/actions/push-build-information-to-octopus-deploy) - [Create a Zip Package](https://github.com/marketplace/actions/create-zip-package-for-octopus-deploy) - [Create a NuGet Package](https://github.com/marketplace/actions/create-nuget-package-for-octopus-deploy) - [Push Packages to Octopus Deploy](https://github.com/marketplace/actions/push-package-to-octopus-deploy) - [Create an Ephemeral Environment](https://github.com/marketplace/actions/create-an-ephemeral-environment-in-octopus-deploy) - [Deprovision an Ephemeral 
Environment](https://github.com/marketplace/actions/deprovision-an-ephemeral-environment-in-octopus-deploy) - [Wait for/ watch an Execution Task](https://github.com/marketplace/actions/wait-watch-an-execution-task-in-octopus-deploy) ## Getting started Octopus Deploy GitHub Actions can be easily incorporated into your own GitHub Action workflows by including them as steps in your workflow YAML. Here is a simple workflow YAML to get you started. ### Example workflow - Create and deploy a release ```yaml # .github/workflows/hello-octopus-deploy.yml name: Hello Octopus Deploy on: workflow_dispatch: jobs: octopus-deployment: runs-on: ubuntu-latest permissions: id-token: write # Required by login action env: OCTOPUS_SPACE: 'Outer Space' # Supply the following values if not using the login action: # OCTOPUS_API_KEY: ${{ secrets.API_KEY }} # OCTOPUS_URL: ${{ secrets.SERVER }} steps: - name: Checkout repository uses: actions/checkout@v3 # Your own build steps go here! - name: Your build ✨ run: echo "Your build steps!" 
# Action to log into Octopus Deploy - name: Log into Octopus Deploy 🐙 uses: OctopusDeploy/login@v1 with: server: https://acme.octopus.app service_account_id: a1a1a1a1-b2b2-c3c3-d4d4-e5e5e5e5e5e5 # Action to Create a Release - name: Create a release in Octopus Deploy 🐙 uses: OctopusDeploy/create-release-action@v3 with: project: 'MyProject' release_number: '1.0.0' git_ref: ${{ github.ref }} git_commit: ${{ github.sha }} # Action to Deploy a Release - name: Deploy a release in Octopus Deploy 🐙 uses: OctopusDeploy/deploy-release-action@v3 with: project: 'MyProject' release_number: '1.0.0' environments: | Dev Test variables: | Flip: Bling Fizz: Buzz ``` ### ✍️ Environment variables | Name | Description | | :---------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `OCTOPUS_SPACE` | The Name of the Space where this command will be executed. | | `OCTOPUS_URL` | The base URL hosting Octopus Deploy (i.e. `https://octopus.example.app`). It is strongly recommended that this value be retrieved from a [GitHub secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). | | `OCTOPUS_API_KEY` | The API key used to access Octopus Deploy. It is strongly recommended that this value be retrieved from a [GitHub secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). 
| ### 📥 Inputs | Name | Description | | :-------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `project` | The name of the Project associated with this Release. | | `release_number` | The number for the new Release. If omitted, Octopus Deploy will generate a Release number. | | `environments` | A list of Environments in Octopus Deploy in which to run (i.e. Dev, Test, Prod). Add each environment on a new line. | | `variables` | A list of Variables to use in the Deployment in `key: value` format. Add each variable on a new line. | | `git_ref` | The Git branch from which to source the project code. Required for Projects using version control in Octopus. The example above sources this value from the workflow's [contextual information.](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs#github-context) | | `git_commit` | The Git commit from which to source the project code. Required for Projects using version control in Octopus. The example above sources this value from the workflow's [contextual information.](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs#github-context) | | `server` | The base URL hosting Octopus Deploy (i.e. `https://octopus.example.app`). It is strongly recommended that this value be retrieved from a [GitHub secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). | | `service_account_id` | The id of the OIDC service account you wish to login as. 
Service accounts can be viewed and created on the Octopus app under 'Users' on the configuration menu. | ## Handling packages To help you package your files for deployment, Octopus Deploy provides actions to [Create a Zip Package](https://github.com/marketplace/actions/create-zip-package-for-octopus-deploy) or [Create a NuGet Package](https://github.com/marketplace/actions/create-nuget-package-for-octopus-deploy). Alternatively, you can [Install the Octopus CLI](https://github.com/marketplace/actions/install-octopus-cli) and create packages using the [pack command](https://octopus.com/docs/octopus-rest-api/octopus-cli/pack). Once your packages are created, simply push them to the Octopus Server built-in repository using our [Push Packages](https://github.com/marketplace/actions/push-package-to-octopus-deploy) Octopus Action. You can confirm that your packages have been successfully added by checking for them in your Space under 'Packages'. Here is a simple example of how to create, push and use a Zip package in a Release. 
### Example workflow - Working with packages ```yaml # .github/workflows/hello-octopus-packages.yml name: Hello Octopus Packages on: workflow_dispatch: jobs: octopus-packages: runs-on: ubuntu-latest permissions: id-token: write # Required by login action env: OCTOPUS_SPACE: 'Outer Space' steps: - name: Checkout repository uses: actions/checkout@v3 # Action to log into Octopus Deploy - name: Log into Octopus Deploy 🐙 uses: OctopusDeploy/login@v1 with: server: ${{ secrets.SERVER }} service_account_id: a1a1a1a1-b2b2-c3c3-d4d4-e5e5e5e5e5e5 # Action to Create a Zip Package - name: Create a Zip package 🐙 uses: OctopusDeploy/create-zip-package-action@v3 with: package_id: 'HelloPackage' version: '1.0.0' output_folder: './packages/' base_path: './src/files-to-package/' files: | **/*.* # Action to Push Packages to Octopus Deploy - name: Push a package to Octopus Deploy 🐙 uses: OctopusDeploy/push-package-action@v3 with: packages: | packages/**/*.zip # Using your Package in a Release - name: Use the Package in a Release 🎉 uses: OctopusDeploy/create-release-action@v3 with: project: 'MyProject' release_number: '1.0.0' git_ref: ${{ github.ref }} git_commit: ${{ github.sha }} packages: | HelloPackage:1.0.0 ```

### 📥 Additional inputs

| Name | Description | | :-------------------- | :----------------------------------------------------------------------------------------------------------------| | `package_id` | The name of the package. | | `version` | The version of the package. | | `output_folder` | The folder to put the resulting package in, relative to the current working directory. | | `base_path` | The path to the folder containing the files to be used in the package. | | `files` | A list of files to be included in the package relative to the base path. Add each item on a new line. | | `packages` | Used by the Push Packages action. A list of packages to push to Octopus Deploy. Add each item on a new line. | | `packages` | Used by the Create Release action. A list of packages to be used in the Release. Add each item on a new line. | ## Handling build information :::div{.info} When using build information in release notes in conjunction with [built-in package repository triggers (formerly known as _Automatic Release Creation_)](https://octopus.com/docs/projects/project-triggers/built-in-package-repository-triggers), the build information **must** be pushed to Octopus **before** the packages are pushed, as the release will be created as soon as the package configured for automatic release creation is pushed. 
::: ### Example workflow - Working with packages and build information ```yaml # .github/workflows/hello-octopus-packages-and-build-information.yml name: Hello Octopus Packages and Build Information on: workflow_dispatch: jobs: octopus-packages: runs-on: ubuntu-latest permissions: id-token: write # Required by login action env: OCTOPUS_SPACE: 'Outer Space' steps: - name: Checkout repository uses: actions/checkout@v3 # Action to log into Octopus Deploy - name: Log into Octopus Deploy 🐙 uses: OctopusDeploy/login@v1 with: server: ${{ secrets.SERVER }} service_account_id: a1a1a1a1-b2b2-c3c3-d4d4-e5e5e5e5e5e5 # Action to Create a Zip Package - name: Create a Zip package 🐙 uses: OctopusDeploy/create-zip-package-action@v3 with: package_id: 'HelloPackage' version: '1.0.0' output_folder: './packages/' base_path: './src/files-to-package/' files: | **/*.* # Action to Push Packages to Octopus Deploy - name: Push a package to Octopus Deploy 🐙 uses: OctopusDeploy/push-package-action@v3 with: packages: | packages/**/*.zip # Action to Push Build Information to Octopus Deploy - name: Push build information to Octopus Deploy 🐙 uses: OctopusDeploy/push-build-information-action@v3 with: packages: | HelloPackage version: '1.0.0' # Using your Package in a Release - name: Use the Package in a Release 🎉 uses: OctopusDeploy/create-release-action@v3 with: project: 'MyProject' release_number: '1.0.0' git_ref: ${{ github.ref }} git_commit: ${{ github.sha }} packages: | HelloPackage:1.0.0 ```

### 📥 Additional inputs

      | Name | Description | | :-------------------- | :-------------------------------------------------------------------------------------------------------------------------| | `package_id` | The name of the package. | | `version` | Used by the Create a Zip Package action. The version of the package. | | `output_folder` | The folder to put the resulting package in, relative to the current working directory. | | `base_path` | The path to the folder containing the files to be used in the package | | `files` | A list of files to be included in the package relative to the base path. Add each item on a new line. | | `packages` | Used by the Push Packages action. A list of packages to push to Octopus Deploy. Add each item on a new line. | | `packages` | Used by the Push Build Information action. A list of packages to push to Octopus Deploy. Add each item on a new line. | | `version` | Used by the Push Build Information action. The version of the package. | | `packages` | Used by the Create Release action. A list of packages to be used in the Release. Add each item on a new line. | ## Runners Octopus Deploy GitHub Actions can be run on every available type of [runner](https://docs.github.com/en/actions/about-github-actions/understanding-github-actions#runners) (Ubuntu Linux, Microsoft Windows, macOS, and Self-Hosted). If your Octopus Server is not accessible over the internet, you can connect to it using a [Self-Hosted runner](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners). ## Sequencing tasks It can be useful to run multiple Octopus Deploy GitHub Actions in sequence as part of a workflow. To do this, simply include each Octopus Action as a step within a single job. 
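Where the actions must live in separate jobs instead, GitHub's standard `needs` keyword enforces the ordering. A minimal workflow fragment as a sketch (job names and project values are illustrative; note that authentication, such as the login action or the `OCTOPUS_*` variables, must be configured in each job):

```yaml
jobs:
  create-release:
    runs-on: ubuntu-latest
    steps:
      - name: Create a release in Octopus Deploy 🐙
        uses: OctopusDeploy/create-release-action@v3
        with:
          project: 'MyProject'
          release_number: '1.0.0'
  deploy-release:
    needs: create-release  # runs only after create-release succeeds
    runs-on: ubuntu-latest
    steps:
      - name: Deploy a release in Octopus Deploy 🐙
        uses: OctopusDeploy/deploy-release-action@v3
        with:
          project: 'MyProject'
          release_number: '1.0.0'
          environments: |
            Dev
```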
If you need to run sequential actions in separate jobs, you can also configure your jobs to run sequentially by [defining prerequisite jobs](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/using-jobs-in-a-workflow#defining-prerequisite-jobs). ## Previous versions Since the release of v3, Octopus Deploy GitHub Actions no longer need the [Install Octopus CLI](https://github.com/marketplace/actions/install-octopus-cli) package to be installed before running. [Each Octopus Action](#octopus-deploy-actions) introduced before v3 provides a guide to migrating to v3. # Built-in Worker Source: https://octopus.com/docs/security/built-in-worker.md Octopus Server comes with a built-in worker which enables you to conveniently run parts of your deployment process on the Octopus Server without the need to install a Tentacle or other deployment target. This is very convenient when you are getting started with Octopus Deploy, but it does come with several security implications. ## Default configuration By default, Octopus Server runs as the highly privileged **Local System** account on Windows. We typically recommend running Octopus Server as a different account, either a User or Managed Service Account (MSA), so you can grant specific privileges to that account. When you first install Octopus Server the built-in worker is configured to run using the **same user account as the Octopus Server itself**. This means your deployment process can do the same things the Octopus Server can do. ## Secure configuration We highly recommend running the built-in worker in a different, lower privileged, security context. Alternatively you can disable the built-in worker and delegate that work to an external worker perhaps on another server or in an entirely different network zone. Learn about [workers](/docs/infrastructure/workers) and the different options you have for securing them. 
# octopus deployment-target list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-list.md List deployment targets in Octopus Deploy ```text Usage: octopus deployment-target list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target list octopus deployment-target ls ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target listening-tentacle Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-listening-tentacle.md Manage Listening Tentacle deployment targets in Octopus Deploy ```text Usage: octopus deployment-target listening-tentacle [command] Available Commands: create Create a Listening Tentacle deployment target help Help about any command list List Listening Tentacle deployment targets view View a Listening Tentacle deployment target Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus deployment-target listening-tentacle [command] --help" for more information about a command. 
``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target listening-tentacle list ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target listening-tentacle create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-listening-tentacle-create.md Create a Listening Tentacle deployment target in Octopus Deploy ```text Usage: octopus deployment-target listening-tentacle create [flags] Aliases: create, new Flags: --environment strings Choose at least one environment for the deployment target. --machine-policy string The machine policy for the deployment target to use, only required if not using the Default Machine Policy -n, --name string A short, memorable, unique name for this Listening Tentacle. --proxy string Select whether to use a proxy to connect to this Listening Tentacle. If omitted, will connect directly. --role strings Choose at least one role that this deployment target will provide (use --tag for tag sets with validation). --tag strings Target tags in canonical format (TagSetName/TagName). --tenant strings Associate the deployment target with tenants --tenant-tag strings Associate the deployment target with tenant tags, should be in the format 'tag set name/tag name' --tenanted-mode string Choose the kind of deployments where this deployment target should be included. Default is 'untenanted' --thumbprint string The X509 certificate thumbprint that securely identifies the Tentacle. --url string The network address at which the Tentacle can be reached. 
-w, --web Open in web browser Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target listening-tentacle create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target listening-tentacle list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-listening-tentacle-list.md List Listening Tentacle deployment targets in Octopus Deploy ```text Usage: octopus deployment-target listening-tentacle list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus deployment-target listening-tentacle list octopus deployment-target listening-tentacle ls ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Troubleshooting Source: https://octopus.com/docs/administration/high-availability/troubleshooting.md If you're running into issues with Octopus High Availability, you may be experiencing one of the problems listed here. ## Node license limits exceeded If you see the following licensing error, it means you have exceeded the number of active nodes: *"Unfortunately your license limits have been exceeded, and you will no longer be able to create or deploy releases. Your Octopus Deploy license only allows X active nodes. You currently have Y active nodes."* You can still log in to your Octopus instance. You are only restricted from creating or deploying releases. This may unintentionally occur if you have copied or moved your Octopus folders on your servers and you have multiple instances pointing to the same Octopus database. ### Instructions to remove unwanted nodes If you go to your nodes screen, **Configuration ➜ Nodes**, you can delete the node(s) that are no longer applicable using the Delete option in the node's ... overflow menu. :::figure ![](/docs/img/administration/high-availability/troubleshooting/images/deleting-nodes.png) ::: ## Octopus Server starts and stops again If the Octopus Server crashes shortly after starting, look at the Octopus Server log to see what has gone wrong. You may see a message in the Octopus Server logs like this: ``` Could not find a part of the path 'Z:\Octopus\TaskLogs' ``` This usually means the drive `Z:\` has not been mapped for the user account running the Octopus Server, or the mapping has not been persisted across sessions. Drives are mounted per-user, so you need to create a persistent mapping for the user account the Octopus Server is running under.
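Storage problems like this (a path that isn't visible, or one the service account can't write to) can be caught up front by verifying each configured folder while running as the service account. The following is a diagnostic sketch, not an Octopus tool; the function name and folder list are illustrative:

```python
import os
import tempfile

def check_storage_paths(paths):
    """Report whether each shared-storage folder is visible and writable.

    Returns {path: "ok" | "missing" | "denied"}. Run this under the same
    service account as the Octopus Server, since drive mappings and
    permissions are applied per-user.
    """
    results = {}
    for path in paths:
        if not os.path.isdir(path):
            # e.g. an unmapped or non-persistent drive letter
            results[path] = "missing"
            continue
        try:
            # A throwaway write confirms the share and NTFS ACLs allow it.
            with tempfile.TemporaryFile(dir=path):
                pass
            results[path] = "ok"
        except OSError:
            results[path] = "denied"
    return results

# Example (hypothetical folders):
# check_storage_paths([r"Z:\Octopus\TaskLogs", r"Z:\Octopus\Artifacts"])
```

A `"missing"` result points at a drive-mapping problem; `"denied"` points at the permissions problems described in the next message.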
You may see a message in the Octopus Server logs like this: ``` Access to the path 'Z:\Octopus\TaskLogs' is denied ``` This usually means the user account running the Octopus Server does not have the correct permissions to the file share. Make sure this user account has full control over each of the folders. You may need to check both the share permissions and the ACLs on the actual folders. ## Task logs are empty for certain deployments Sometimes you go to a deployment and there are no steps displayed, and detailed logs are not available for the deployment. Sometimes refreshing your browser fixes it and the logs come back. The cause is that [shared storage](/docs/installation/file-storage) has not been configured correctly. The most common situation is when you have configured each node to use a folder on a local disk instead of a shared network location. To fix this problem you should: 1. Plan some downtime for your Octopus HA cluster. 2. Create shared storage as [described here](/docs/installation/file-storage). 3. Put your Octopus HA cluster into [Maintenance Mode](/docs/administration/managing-infrastructure/maintenance-mode) after draining tasks from each node. 4. Reconfigure your Octopus HA cluster to use the shared storage. 5. Copy all files into the shared storage location - there shouldn't be any filename collisions since each node will generally run independent tasks. 6. Bring your Octopus HA cluster back online. ## Deployment artifacts are not available for certain deployments This has the same root cause as missing task logs - see above. ## Packages in the built-in repository are not available for some deployments This has the same root cause as missing task logs - see above. # Performance Source: https://octopus.com/docs/administration/managing-infrastructure/performance.md Over the years, we have built Octopus to enable reliable and repeatable deployments, but that doesn't necessarily mean it has to be slow. Octopus can scale with you as you grow.
Some Octopus customers are reliably deploying hundreds of projects to many thousands of deployment targets from a single [Octopus High Availability](/docs/administration/high-availability) cluster. Octopus is a complex system where we control some parts of the deployment while offering you the freedom to inject your own custom steps into the process. We work hard to make our parts work quickly and efficiently, leaving as many resources as possible available for running your parts of the deployment. We can't control the performance of your custom parts, but there are many things you can do as an Octopus administrator to ensure your installation operates efficiently. This page is intended to help Octopus System Administrators tune and maintain their Octopus installations and troubleshoot problems as they occur. :::div{.hint} Want to tune your deployments for optimum performance? Read our [detailed guide on optimizing your deployments](/docs/projects/deployment-process/performance). ::: ## Minimum requirements The size of your Octopus Deploy instance depends on the number of users and concurrent tasks.
A task includes (but is not limited to): - Deployments - Runbook runs - Retention Policies - Health Checks - Let's Encrypt - Process triggers - Process subscriptions - Script console runs - Sync built-in package repository - Sync community library step-templates - Tentacle upgrade - Upgrade Calamari - Active Directory sync A good starting point is: - Small teams/companies or customers doing a POC with 5-10 concurrent tasks: - 1 Octopus Server: 2 Cores / 4 GB of RAM - SQL Server Express: 2 Cores / 4 GB of RAM or Azure SQL with 25-50 DTUs - Small-Medium companies or customers doing a pilot with 5-20 concurrent tasks: - 1-2 Octopus Servers: 2 Cores / 4 GB of RAM each - SQL Server Standard or Enterprise: 2 Cores / 8 GB of RAM or Azure SQL with 50-100 DTUs - Large companies doing 20+ concurrent tasks: - 2+ Octopus Servers: 4 Cores / 8 GB of RAM each - SQL Server Standard or Enterprise: 4 Cores / 16 GB of RAM or Azure SQL with 200+ DTUs :::div{.hint} These suggestions are a baseline. Monitor your Octopus Server and SQL Server performance on all resources including CPU, memory, disk, and network, and increase resources when needed. ::: If you have a Server or Data Center license, you can leverage [Octopus High Availability](/docs/administration/high-availability) to scale out your Octopus Deploy instance. With that option, we recommend adding more nodes with 4 cores / 8 GB of RAM instead of increasing resources on one single node. Scaling vertically will only get you so far; at some point you run into underlying host limitations. ## Database [SQL Server](/docs/installation/sql-server-database) is the data persistence backbone of Octopus. Performance problems with your SQL Server will make Octopus run and feel slow and sluggish. ### Infrastructure It is possible to host Octopus Deploy and SQL Server on the same Windows Server. We only recommend you do that for Proof of Concepts or demos. Never for any Octopus Deploy instance used to deploy to Production.
Keep Octopus Deploy and SQL Server on separate servers. This will keep them from competing for the same CPU, memory, disk and network resources. We've worked with customers who have large Production SQL Server instances with a lot of computing power (64 Cores and 512 GB of RAM). If you count yourself amongst those users, and one of those servers is not at capacity, then hosting Octopus Deploy on a shared server should be fine. If you don't count yourself amongst those users, then avoid hosting Octopus Deploy on a shared server. ### SQL Server maintenance \{#sql-maintenance} You should implement a routine maintenance plan for your Octopus database. Here is a [thorough guide](https://oc.to/SQLServerMaintenanceGuide) (free e-book) for maintaining SQL Server. At the very least you should: - Rebuild all indexes **online** with fragmentation > 50% once a day during off-hours (typically 2-3 AM). - Rebuild all indexes **offline** with fragmentation > 50% once a week during off-hours (typically a Sunday morning). - Regenerate statistics once a month. :::div{.hint} Modern versions of Octopus Deploy automatically rebuild fragmented indexes during the upgrade process. If you frequently upgrade, you might not notice high index fragmentation compared to someone who upgrades once a year. ::: ## Maintenance \{#maintenance} Routine maintenance can help your Octopus keep running at optimum performance and efficiency. ### Upgrade We are continually working to make Octopus perform better, and we will always recommend [upgrading to the latest version](/docs/administration/upgrading) whenever asked about performance. We generally tag [performance-related issues in our GitHub repository](https://github.com/OctopusDeploy/Issues/issues?q=label%3Afeature%2Fperformance) so you can see which performance improvements have been added in each version of Octopus.
As an example, many customers have reported speed improvements of 50-90% for their deployments after upgrading from an early version of **Octopus 3.x** to the latest version. ### Retention policies Octopus are generally hygienic creatures, cleaning up after themselves, and your Octopus is no different. Configuration documents, like [projects](/docs/projects/) and [environments](/docs/infrastructure/environments/), are stored until you delete them, unlike historical documents like [releases](/docs/releases/). These will be cleaned up according to the [retention policies](/docs/administration/retention-policies) you configure. _The one exception to this is the `Events` table which records an [audit trail](/docs/security/users-and-teams/auditing) of every significant event in your Octopus._ A tighter retention policy means your Octopus Server will run faster across the board. :::div{.hint} **We need to keep everything for auditing purposes** You may not need to keep the entire history of releases - we record the entire history of your Octopus Server for [auditing](/docs/security/users-and-teams/auditing/) purposes. This means you can safely use a short-lived [retention policy](/docs/administration/retention-policies) to have a fast-running Octopus Server, all the while knowing your audit history is safely kept intact. The retention policy simply cleans up the "potential to deploy a release" - it does not erase the fact a release was created, nor the deployments of that release, from history. ::: ## Scaling Octopus Server \{#scaling} Octopus Servers do quite a lot of work during deployments, mostly around package acquisition: - Downloading packages from the package source (network-bound). - Verifying package hashes (CPU-bound). - Calculating deltas between packages for [delta compression](/docs/deployments/packages/delta-compression-for-package-transfers) (I/O-bound and CPU-bound). - Uploading packages to deployment targets (network-bound). 
- Monitoring deployment targets for job status, and collecting logs. At some point your server hardware is going to limit how many of these things a single Octopus Server can do concurrently. If a server overcommits itself and hits these limits, timeouts (network or SQL connections) will begin to occur, and deployments can begin to fail. Above all else, your deployments should be repeatable and reliable. We offer four options for scaling your Octopus Server: - scale up by controlling the **task cap** and providing more server resources as required. - scale out using [Octopus High Availability](/docs/administration/high-availability). - scale out using [Workers](/docs/infrastructure/workers). - divide up your Octopus environment using [Spaces](/docs/administration/spaces). ### Task cap An ideal situation would be an Octopus Server that's performing as many parallel deployments as it can, while staying just under these limits. We tried several techniques to throttle Octopus Server automatically. In practice, this kind of approach proved to be unreliable. Instead, we decided to put this control into your hands, allowing you to control how many tasks each Octopus Server node will execute concurrently. This way, you can measure server metrics for **your own deployments**, and then increase/decrease the task cap appropriately. Administrators can change the task cap in **Configuration ➜ Nodes**. See this [blog post](https://octopus.com/blog/running-task-cap-and-high-availability) for more details on why we chose this approach. The default task cap is set to `5` out of the box. Based on our load testing, this offered the best balance of throughput and stability for most scenarios. Increasing that to 10 should be fine without requiring more CPU or RAM. Anything more and we recommend [High Availability](/docs/administration/high-availability). The task cap also interacts with offloading deployment work to Workers.
If you have more Workers available, you may be able to increase your deployment performance; a [different task cap or step parallelism](/docs/infrastructure/workers/#run-multiple-processes-on-workers-simultaneously) might be appropriate given the extra capacity to scale. ### Octopus High Availability You can scale out your Octopus Server by implementing a [High Availability](/docs/administration/high-availability) cluster. Each node in the cluster will have its own task cap. Two servers in a cluster, each with a task cap of 5, means you can process 10 concurrent tasks. In addition to linearly increasing the performance of your cluster, you can perform certain kinds of maintenance on your Octopus Servers without incurring downtime. ### Workers Consider using [Workers](/docs/infrastructure/workers) and worker pools if deployment load is affecting your server. See this [blog post](https://octopus.com/blog/workers-performance) for a way to begin looking at workers for performance. ### Spaces Consider separating your teams/projects into "spaces" using the [Spaces](/docs/administration/spaces) feature. A space is considered a "hard wall". Each space has its own environments, projects, deployment targets, packages, machine policies, etc. The only thing shared is users and teams. That means less data for the Octopus UI to query. Splitting 60 projects evenly into 3 spaces will result in the dashboard only having to load 20 projects instead of 60. ## Tentacles Prefer [Listening Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended) or [SSH](/docs/infrastructure/deployment-targets/linux/ssh-target) instead of [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) wherever possible. Listening Tentacles and SSH place the Octopus Server under less load. We try to make Polling Tentacles as efficient as possible.
However, they can place the Octopus Server under high load, just handling incoming connections. Reduce the frequency and complexity of automated health checks using [machine policies](/docs/infrastructure/deployment-targets/machine-policies). ## Packages \{#package-transfer} Transferring packages from your Octopus Server is a key piece of functionality when executing deployments and runbooks. This can also have an impact on your Octopus Server's performance. ### Network bandwidth The larger the package, the more network bandwidth is required to transfer data to your deployment targets. Consider using [delta compression for package transfers](/docs/deployments/packages/delta-compression-for-package-transfers). Larger packages will require more CPU and disk IOPS to calculate the delta - monitor resource consumption to ensure delta compression doesn't negatively impact the rest of your Octopus Server. :::div{.hint} Delta compression doesn't always result in smaller package transfers. The algorithm will transfer the entire package if over a certain percentage changes. ::: If your packages have a lot of static data, consider creating a package containing only that static data and deploying it only when it changes. ### Custom package feed Consider using a custom package feed close to your deployment targets, and download the packages directly on the agent. This alleviates a lot of resource contention on the Octopus Server. ### Retention policy The built-in package feed has its own [retention policy](/docs/administration/retention-policies/#built-in-feed). Ensure it is enabled to keep down the number of packages to store and index. :::div{.hint} The package retention policy only deletes packages not referenced by a release or runbook. Setting the retention policy to 1 day means the package will be deleted 1 day after the release is deleted. ::: ## File storage Octopus Deploy stores BLOB data (task logs, packages, project images, etc.) on the file system.
### Task logs \{#tip-task-logs} Larger task logs put the entire Octopus pipeline under more pressure. The task log has to be transferred from the Tentacle to the server, it has to be saved to the file system, and is read when you are on the deployment or runbook screen. We recommend printing only the messages required to understand progress and deployment failures. The rest of the information should be streamed to a file, then published as a deployment [artifact](/docs/projects/deployment-process/artifacts). ### Image size While it is fun to have gifs and fancy images for your projects, consider the size of each image. Keep them under 100x100 pixels. This will reduce the amount of data you have to download from the Octopus Server. ## Deployment parallelism By default, Octopus will only run one process on each [deployment target](/docs/infrastructure/deployment-targets) at a time, queuing the rest. There may be times that you want to run multiple processes at a time. In those situations, there are three special variables that can be used to control the way Octopus runs steps in parallel: - `OctopusBypassDeploymentMutex` - allows for multiple processes to run at once on the target. - `Octopus.Acquire.MaxParallelism` - limits the maximum number of packages that can be concurrently deployed to multiple targets. - `Octopus.Action.MaxParallelism` - limits the maximum number of machines on which the action will concurrently execute. For more details, see our [run multiple processes on a target simultaneously](/docs/administration/managing-infrastructure/run-multiple-processes-on-a-target-simultaneously) page. ## Troubleshooting The best place to start troubleshooting your Octopus Server is to inspect the [Octopus Server logs](/docs/support/log-files). Octopus writes details for common causes of performance problems.
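When combing through large server logs by hand, a short script can surface the slow entries quickly. This is a minimal sketch, assuming plain-text log files containing the `Request took <n>ms: …` messages described in the sections below; the threshold is arbitrary:

```python
import re

# Matches the long-running request messages the server writes,
# e.g. "Request took 5123ms: GET /api/dashboard".
SLOW_REQUEST = re.compile(r"Request took (\d+)ms: (.+)")

def slow_requests(lines, threshold_ms=1000):
    """Yield (duration_ms, request) pairs at or above the threshold."""
    for line in lines:
        match = SLOW_REQUEST.search(line)
        if match and int(match.group(1)) >= threshold_ms:
            yield int(match.group(1)), match.group(2).strip()

# Usage (hypothetical file name):
# for ms, request in slow_requests(open("OctopusServer.txt")):
#     print(ms, request)
```

Sorting the results by duration makes the trends mentioned below much easier to spot.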
### Long running requests When HTTP requests take a long time to be fulfilled, you'll see a message similar to: `Request took 5123ms: GET {correlation-id}`. The timer is started when the request is first received, ending when the response is sent. Actions to take when you see messages similar to this in your log: - Look for trends as to which requests are taking a really long time. - Look to see if the performance problem occurs, and goes away, on a regular basis. This can indicate another process hogging resources periodically. ### Slow loading dashboard or project overview pages Long retention policies usually cause this. Consider tightening up your retention policies to keep fewer releases. It can also be caused by the sheer number of projects you are using to model your deployments. You can use the **CONFIGURE** button on the dashboard to limit the projects and/or environments shown to you. Filtering out the unneeded projects and environments on your dashboard can significantly reduce the amount of data needing to be returned, which will improve speed. ### Slow database If a particular database operation takes a long time, you'll see a message similar to: `{Insert/Delete/Update/Reader} took 8123ms in transaction '{transaction-name}'`. The timer is started when the operation starts, ending when the operation is completed (including any retries for transient failure recovery). Actions to take when you see messages similar to this in your log: - If you are seeing these operations take a long time, it indicates your SQL Server is struggling under load, or your network connection from Octopus to SQL Server is saturated. - Check the maintenance plan for your SQL Server. See [tips above](#sql-maintenance). - Test an extremely simple query like `SELECT * FROM OctopusServerNode`. If this query is slow, it indicates a problem with your SQL Server. - Test a more complex query like `SELECT * FROM Release ORDER BY Assembled DESC`.
If this query is slow, it indicates a problem with your SQL Server, or the sheer number of Releases you are retaining. - Check the network throughput between the Octopus Server and SQL Server by trying a larger query like `SELECT * FROM Events`. ### Deployment screen is slow to load When the Task Logs are taking a long time to load, or your deployments are taking a long time, the size of your task logs might be to blame. First refer to the [tips above](#tip-task-logs). After that, make sure the disks used by your Octopus Server have sufficient throughput/IOPS available for processing the demand required by your scenario. Task logs are written and read directly from disk. ### High resource usage during deployments When you experience overly high CPU or memory usage during deployments which may be causing your deployments to become unreliable: - Try reducing your Task Cap back towards the default of `5` and then increase progressively until your server is reliable again. - Look for potential [performance problems in your deployment processes](/docs/projects/deployment-process/performance), especially: - Consider how you [transfer your packages](#package-transfer). - Consider reducing the amount of parallelism in your deployments by reducing the number of steps you run in parallel, or the number of machines you deploy to in parallel. ### Connection pool timeout Seeing the error message `System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.` in your log indicates one of a few possible scenarios: - Your SQL Queries are taking a long time, exhausting the SQL Connection Pool. Investigate what might be making your SQL Queries take longer than they should and fix that if possible - see earlier troubleshooting points.
- If your SQL Query performance is fine, and your SQL Server is running well below its capacity, perhaps your Octopus Server is under high load. This is perfectly normal in many situations at scale. If your SQL Server can handle more load from Octopus, you can increase the SQL Connection Pool size of your Octopus Server node(s). This will increase the number of active connections Octopus is allowed to open against your SQL Server at any point in time, effectively allowing your Octopus Server to handle more concurrent requests. Try increasing the `Max Pool Size` in your `SQL Connection String` in the `OctopusServer.config` file to something like `200` (the default is `100`) and see how everything performs. Learn about [Connection Strings and Max Pool Size](https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring). - Octopus is leaking SQL Connections. This should be very rare, but has happened in the past and we fix every instance we find. We recommend upgrading to the latest version of Octopus and [getting help from us](#support) if the problem persists. :::div{.hint} Analyzing Octopus Server log files for performance problems is much easier in a tool like [Seq](https://getseq.net). We've built a [helpful tool](https://github.com/OctopusDeploy/SeqFlatFileImport) for importing Octopus Server and Task Logs directly into Seq for analysis. ::: ## Getting help from us \{#support} If none of the above troubleshooting steps work, please get in contact with our [support team](https://octopus.com/support) and send along the following details to help us debug: 1. An overview of the problem and when it occurs (page load, during a deployment, only when doing lots of deployments, etc.) 1. Frequency of the problem happening (on every deployment, on initial startup, etc.) 1. Observed correlations (during a deployment the dashboard is slow to load, during Active Directory sync users are unable to log in, etc.) 1.
A high level overview of your Octopus Deploy instance: - Version of Octopus Deploy installed - How many nodes your Octopus Deploy instance has - The server specs for each node (CPU/Memory) 1. Database details - The version of SQL Server - Where the SQL Server is hosted, and the DTUs or hardware specs (CPU/Memory) - Overall database size - Number of rows per table (see query below) - Last time indexes were rebuilt and stats were regenerated 1. Utilization of resources (CPU/Disk/Memory %) during peak times vs. non-peak This query will return the number of rows in each table in the Octopus Deploy database. ```sql SELECT QUOTENAME(SCHEMA_NAME(obj.schema_id)) + '.' + QUOTENAME(obj.name) AS [TableName], SUM(dmv.row_count) AS [RowCount] FROM sys.objects AS obj INNER JOIN sys.dm_db_partition_stats AS dmv ON obj.object_id = dmv.object_id WHERE obj.type = 'U' AND obj.is_ms_shipped = 0x0 AND dmv.index_id in (0, 1) GROUP BY obj.schema_id, obj.name ORDER BY obj.name ``` In addition to providing the above information, gathering logs and traces will help us troubleshoot your performance problem. 1. Attach a screen recording showing the performance problem or charts showing the Octopus Server performance. If the problem happens at certain times, please attach charts and screen recordings before and during those events. 1. [Record and attach the performance problem occurring in your web browser](/docs/support/record-a-problem-with-your-browser) (if applicable). 1. Attach the [Octopus Server logs](/docs/support/log-files). 1. Attach the [raw task logs](/docs/support/get-the-raw-output-from-a-task) for any tasks exhibiting the performance problem, or that may have been running at the same time as the performance problem. 1. If the performance problem is causing high CPU utilization on the Octopus Server, please [record and attach a performance trace](/docs/administration/managing-infrastructure/performance/record-a-performance-trace). 1.
If the performance problem is causing high memory utilization on the Octopus Server, please [record and attach a memory trace](/docs/administration/managing-infrastructure/performance/record-a-memory-trace). 1. We might ask for a sanitized database backup to do our own testing against. Please [follow these instructions](/docs/administration/managing-infrastructure/performance/create-sanitized-database-backup). # Variable Recommendations Source: https://octopus.com/docs/best-practices/deployments/variables.md [Variables](/docs/projects/variables) allow you to parameterize your deployment and runbook process. That allows your processes to work across your infrastructure without having to hard-code or manually update configuration settings that differ across environments, deployment targets, channels, or tenants. There are multiple levels of variables in Octopus Deploy: 1. Project 2. Tenant 3. Step Templates 4. Variable Set 5. System Variables Project, Tenant, and Step Template variables are associated with their specific item and cannot be shared. Variable Sets can be shared across any number of Projects and Tenants. System variables are variables provided by Octopus Deploy that you can use during deployments. During a deployment, Octopus will gather all the variables for the project, Tenant (when applicable), step template, associated variable sets, and system variables and create a "variable manifest" for each step to use. :::div{.hint} Multi-tenancy is an advanced topic with its own set of recommendations. Tenants were mentioned here so you could see the bigger picture of variables. ::: Octopus Deploy provides the ability to replace values in your configuration files using the [structured configuration variables](/docs/projects/steps/configuration-features/structured-configuration-variables-feature/) or the [.NET XML configuration variables feature](/docs/projects/steps/configuration-features/xml-configuration-variables-feature/).
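To illustrate the idea behind structured configuration variables: a variable whose name is a colon-delimited key path, such as `ConnectionStrings:Database`, targets the matching key in a structured configuration file. The toy sketch below mimics that replacement against a nested dictionary; it is an illustration of the concept, not Octopus's implementation:

```python
def apply_structured_variable(config, path, value, sep=":"):
    """Walk a delimited key path (e.g. 'ConnectionStrings:Database')
    into a nested dict and replace the final key's value."""
    *parents, leaf = path.split(sep)
    node = config
    for key in parents:
        node = node[key]  # raises KeyError if the path doesn't exist
    node[leaf] = value
    return config

# Replace a development connection string with a per-environment value:
settings = {"ConnectionStrings": {"Database": "Server=localhost;Database=App"}}
apply_structured_variable(settings, "ConnectionStrings:Database",
                          "Server=prod-sql;Database=App")
```

Because the variable name carries the key path, the same deployment process can rewrite the right entry in each environment without any file editing steps.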
In addition, Octopus Deploy supports the ability to perform [.NET Configuration transforms](/docs/projects/steps/configuration-features/configuration-transforms) during deployment time. Beyond the above levels of variables, there are also two categories of variables. 1. Variables used in configuration file replacement (connection strings, version number, etc.) 2. Variables specific to the deployment or runbook run (output variables, messages, accounts, etc.) ## Variable naming Without established naming conventions, variable name collisions are possible. A common example is when a project and a variable set have the same variable name scoped to the same environment. When a name collision occurs, Octopus Deploy will do its best to pick the ["right one" using an algorithm](/docs/projects/variables/getting-started/#scope-specificity). But sometimes, the variables are scoped equally. If this occurs, Octopus will choose project-defined variables ahead of library-defined ones. The recommendation is to avoid name collisions in the first place by following these naming standards. 1. Project: `Project.[Component].[Name]` - for example, **Project.Database.UserName**. 2. Tenant: `[ProjectName].[Component].[Name]` - for example, **OctoPetShopWebUI.URL.Port**. 3. Step Template: `[TemplateShortName].[Component].[Name]` - for example, **SlackNotification.Message.Body**. 4. Variable Set: `Library.[LibrarySetName].[Component].[Name]` - for example, **Library.Notification.Slack.Message**. These naming conventions only apply to variables used for a deployment or runbook run. Variables used for configuration file replacement have a specific naming convention to follow. The above naming convention makes it easier to distinguish between the two. ## Configuration file replacement variables One of Octopus Deploy's most used features is environmental variable scoping.
And with good reason: keeping the same process and changing only a specific value, such as a connection string or domain name, ensures consistency during deployment. However, that has led some customers to attempt to make Octopus Deploy something other than what it is. Octopus Deploy is not a configuration management tool, secret store, or feature flag provider. Store the variables required for Octopus Deploy to successfully deploy your application, along with a minimum amount of configuration variables. :::div{.hint} Changing a feature flag or secret stored in Octopus Deploy requires you to deploy or run a runbook to update the file manually. Leverage best-in-breed tools for storing secrets or feature flags that were designed with that use case in mind. Octopus Deploy should store the necessary connection information to those platforms as sensitive variables. It should update the appropriate configuration file entries or set environmental variables to connect successfully to those tools. ::: Some examples of configuration variables include: - Database Connection Strings, including username and password - Connection details to a secret store - Domain names - Server ports - Service URLs There are also variables only Octopus Deploy knows about. These include the release version number, environment name, and deployment date. For configurations that differ per environment, our recommendation is to use a combination of Octopus Deploy and configuration files stored in source control. You'd have three levels of configuration files: - Main configuration file (appSettings.json) - Environment-specific configuration file (appSettings.Development.json) - Tenant-specific configuration file (appSettings.MyTenantName.json) Octopus Deploy can set an environment variable or configuration value during deployment to indicate which environment-specific configuration file to use.
Or, if you are using .NET Framework, you can leverage [configuration file transforms](/docs/projects/steps/configuration-features/configuration-transforms). ## Variable Sets [Variable Sets](/docs/projects/variables/library-variable-sets) are a great way to share variables between projects. We recommend the following when creating variable sets. - Don't have a single "global" variable set. This becomes a "junk drawer" of values and quickly becomes unmanageable. And not every project will need all those variables. - Group common variables into a variable set. Examples include Notifications, Azure, AWS, Naming, and so on. - Have application-specific variable sets to share items such as service URLs, database connection strings, etc., across the multiple projects that make up an application. ## Permissions A common scenario we've talked to customers about is restricting variable edit access to specific environments. For example, a developer can edit any variables scoped to **development** and **test** environments, but not **staging** or **production** environments. On paper this makes sense; in practice, it causes messy handovers and claims of "it worked on my machine." The developers working on the application know all the various settings and variables required for their application to work. Our recommendations for variable edit permissions are: - Variable edit permissions should be all or nothing: either a person can edit variables, or they cannot. Don't scope permissions to environments. Anyone responsible for the application should have permission to update variables (developers, lead developers, DB developers, etc.) along with operations (DBAs, web admins, sysadmins) who can create and update service accounts and passwords. - Variable Sets can be shared across multiple projects. Limit who can edit variable set variables to more experienced Octopus Deploy users, or people who understand "with great power comes great responsibility." 
Typically, we see senior or lead developers along with operations people who have these permissions. If you want to isolate an application, consider using [spaces](/docs/administration/spaces). - Leverage [sensitive variables](/docs/projects/variables/sensitive-variables) to encrypt and hide sensitive values such as usernames and passwords. Sensitive variables are write-only in the Octopus UI and Octopus API. - Use [composite variables](/docs/projects/variables/variable-substitutions/#binding-variables) to combine sensitive and non-sensitive values. A typical use case is database connection strings. Each language has a specific syntax. In the screenshot below, `Project.Database.ConnectionString` is the composite variable; the username and password it references are stored as separate sensitive variables. :::figure ![composite variables](/docs/img/getting-started/best-practices/images/composite-variables.png) ::: ## Further reading For further reading on variables in Octopus Deploy please see: - [Variables](/docs/projects/variables) - [Scoping Variables](/docs/projects/variables/#scoping-variables) - [Structured Configuration Variables](/docs/projects/steps/configuration-features/structured-configuration-variables-feature) - [.NET XML Configuration Variables](/docs/projects/steps/configuration-features/xml-configuration-variables-feature) - [.NET Configuration Transforms](/docs/projects/steps/configuration-features/configuration-transforms) - [Variable Sets](/docs/projects/variables/library-variable-sets) # Export a certificate Source: https://octopus.com/docs/deployments/certificates/export-certificate.md Certificates can be downloaded from Octopus to your local machine. The certificate may be exported in any of the [supported file-formats](/docs/deployments/certificates), or exactly as it was originally uploaded. 
:::figure ![](/docs/img/deployments/certificates/images/download-certificate-btn.png) ::: ## Private-keys If the certificate includes a private-key, then the user requires the _Export certificate private-keys_ permission to download the certificate in a format which includes the private-key. Exporting a certificate with a private-key will be [audited](/docs/security/users-and-teams/auditing). # Error handling Source: https://octopus.com/docs/deployments/custom-scripts/error-handling.md Calamari examines the exit code of the script engine to determine whether the script failed. If the exit code is zero, Calamari assumes the script ran successfully. If the exit code is non-zero, then Calamari assumes the script failed. Syntax errors and unhandled exceptions will result in a non-zero exit code from the script engine, which will fail the deployment. ## Error handling in PowerShell scripts {#error-handling-powershell} For PowerShell scripts, Calamari also sets the `$ErrorActionPreference` to **Stop** before invoking your script. This means that if a command fails, the rest of the script won't be executed. For example: ```powershell Write-Output "Hello" Write-Error "Something went wrong" Write-Output "Goodbye" ``` The third line will not be executed. To change this behavior, set `$ErrorActionPreference` to **Continue** at the top of your script. At the end of the script, Calamari also checks `$LastExitCode` to see if the last Windows-based program that you invoked exited successfully. Note that some Windows programs use non-zero exit codes even when they run successfully - for example, Robocopy returns the number of files copied. This can mean that Calamari assumes your script failed even if it actually ran successfully. Best practice is to call `Exit 0` yourself if your script ran successfully. Note that you'll need to check `$LastExitCode` yourself if you run multiple Windows programs. 
For example, with this script, Calamari would correctly see that ping returned an exit code of 1 (the host couldn't be contacted) and assume the script failed: ```powershell & ping 255.255.255.0 # Host does not exist, will return exit code 1 ``` But if your script looks like this, Calamari will only examine the exit code from the last line (which is successful), so it will assume the script was successful. ```powershell & ping 255.255.255.0 # Host does not exist, will return exit code 1 & ping 127.0.0.1 # Host exists, will return exit code 0 ``` The best practice here is to always check the exit code when invoking programs: ```powershell & ping 255.255.255.0 if ($LastExitCode -ne 0) { throw "Couldn't find 255.255.255.0" } & ping 127.0.0.1 if ($LastExitCode -ne 0) { throw "Couldn't find 127.0.0.1" } ``` ## Failing a script with a message The fail step function will stop your script execution and return a non-zero error code. An optional message can be supplied. If supplied, the message replaces the `The remote script failed with exit code ` text in the deployment process overview page.
      PowerShell ```powershell Fail-Step "A friendly message" ```
      C# ```csharp FailStep("A friendly message"); ```
      Bash ```bash fail_step "A friendly message" ```
      F# ```fsharp Octopus.failStep "A friendly message" ```
      Python3 ```python failstep("A friendly message") ```
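The exit-code rules described at the start of this page apply to Bash scripts as well: Calamari only sees the script engine's final exit code, so an earlier failure can be masked by a later successful command. A minimal Bash sketch of the same explicit-check pattern shown for PowerShell above (`true` and `false` stand in for real commands):

```shell
# Minimal sketch: check each command's exit code explicitly, mirroring the
# PowerShell $LastExitCode pattern. run_checked reports and propagates the
# failing command's exit code instead of letting it be masked by later commands.
run_checked() {
  "$@"
  local code=$?
  if [ "$code" -ne 0 ]; then
    echo "Command '$*' failed with exit code $code" >&2
    return "$code"
  fi
}

run_checked true && echo "first command succeeded"
run_checked false || echo "second command failed as expected"
```

In a real deployment script you would `exit "$code"` rather than `return`, so the whole script fails as soon as the first command breaks.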
# Docker Containers Source: https://octopus.com/docs/deployments/docker.md Following on from the original [Octopus-Docker blog post](https://octopus.com/blog/docker-windows-octopus) and subsequent [RFC](https://octopus.com/blog/rfc-docker-integration), Octopus Deploy takes the approach of treating Docker images as immutable build artifacts that are moved through each stage of deployment by running them as containers with deploy-time specific configuration. We feel this best follows the container mentality and avoids trying to re-invent container build and orchestration tools that already exist. We feel, however, that Octopus Deploy still plays a crucial part in this process to allow your container deployments to integrate into your full deployment pipeline, through a staged environment lifecycle and alongside other non-container phases. Maintaining centralized auditing, configuration and orchestration of the whole deployment process from start to finish is not a problem solved by containers, and this is where our focus and expertise at Octopus Deploy lies. ## Windows Containers on Windows Server While Docker Containers on Windows Server (not Docker for Windows through Docker Toolbox) are now generally available, this feature appears to still have some issues with key areas such as networking. This is an area that the Docker and Windows team are actively improving. While deploying a Windows Container to a Tentacle target on Windows should work, you may experience issues trying to use custom networks or volumes. We would suggest using Linux targets via SSH for the time being until this feature stabilizes. ## How Docker Containers Map to Octopus concepts In Octopus Deploy, a deployment usually involves a versioned instance of a package that is obtained from some package feed. Prior to 3.5.0, this was typically modeled by defining a NuGet server (e.g. MyGet, TeamCity) as the package repository, which exposes a list of named packages to be deployed. 
Each instance of this package existed as a versioned .nupkg file which would be obtained by the target at deployment time and extracted. :::figure ![](/docs/img/deployments/docker/images/5865809.png) ::: With the introduction of support for Docker, a similar concept exists whereby a Docker Registry (e.g. DockerHub, Artifactory) exposes a list of Images (unfortunately in Docker terminology these are known as repositories) which can be tagged with one (or more) values. By treating and interpreting the tags as version descriptions for a given Image, a Docker deployment can map to a similar versioned process flow. :::figure ![](/docs/img/deployments/docker/images/5865811.png) ::: The Octopus concepts of feeds, packages and versions can be mapped to the Docker concepts of registries, images and tags. There is a slight caveat to this similarity since Octopus does not currently intend to self-host a Docker registry in the server, so there is no Docker equivalent of the built-in feed. Also, the targets currently need to have access to the repository to pull down images as there is no push process from the Octopus Server. :::figure ![](/docs/img/deployments/docker/images/5865808.png) ::: ## Learn more - [Docker blog posts](https://octopus.com/blog/tag/docker/1) - [Docker registries as feeds](/docs/packaging-applications/package-repositories/docker-registries) - [Accessing container details](/docs/deployments/docker/accessing-container-details) # Branching Source: https://octopus.com/docs/deployments/patterns/branching.md This section describes how different branching strategies can be modeled in Octopus Deploy. ## Branching strategies When thinking about branching and Octopus, keep this rule in mind: > Octopus doesn't care about branches. It cares about NuGet packages. Your build server cares about source code and branches, and uses them to compile and [package your application](/docs/packaging-applications). Octopus, on the other hand, only sees packages. 
It doesn't particularly care which branch they came from, or how they were built, or which source control system you used. The section below describes some common branching strategies, and what they mean in terms of NuGet packages and releases in Octopus. ### No branches The simplest branching workflow is, of course, no branches - all developers work directly on `trunk` or the `main` (default) branch. For small projects with few developers, and when the project isn't really in "production" yet, this strategy can work well. :::figure ![](/docs/img/deployments/patterns/images/3278438.png) ::: Builds from this single branch will produce a NuGet package, and that package goes into a release which is deployed by Octopus. :::figure ![](/docs/img/deployments/patterns/images/3278468.png) ::: ### Release branches Sometimes developers work on new features that aren't quite ready to ship, while also maintaining a current production release. Bugs can be fixed on the release branch, and deployed, without needing to also ship the half-baked features. :::figure ![](/docs/img/deployments/patterns/images/3278439.png) ::: So long as one release branch never overlaps another, from an Octopus point of view, the process is similar to the "no branches" scenario above - new NuGet packages are built, and those packages go into a release, which is deployed. Octopus doesn't care that they came from a branch; to Octopus, there's just a stream of new, incrementing package versions. ### Multiple active release branches Multiple release branches may be supported over a period of time. For example, you may have customers who are using your 2.x versions of your software in production, and early adopters testing your 3.x versions while you work to make it stable. You'll need to fix bugs in the 2.x version as well as the 3.x version, and deploy them both. 
:::figure ![](/docs/img/deployments/patterns/images/3278440.png) ::: To prevent [retention policies](/docs/administration/retention-policies) for one channel from impacting deployments for another channel, version `3.12.2` introduces the [Discrete Channel Releases setting](/docs/releases/channels/#discrete-channel-releases). Enabling this feature will also ensure that your project overview dashboard correctly shows which releases are current for each environment _in each channel_. Without this set, the default behavior is for releases across channels to supersede each other (for example, in a hotfix scenario where the `3.2.2-bugfix` is expected to override the `3.2.2` release, allowing `3.2.2` to be considered for retention policy cleanup). ![Discrete channel release](/docs/img/deployments/patterns/images/discrete-channel-release.png) Modeling this in Octopus is a little more complicated than the scenarios above, but still easy to achieve. If the only thing that changes between branches is the NuGet package version numbers, and you create releases infrequently, then you can simply choose the correct package versions when creating a release via the release creation page: :::figure ![](/docs/img/deployments/patterns/images/3278469.png) ::: If you plan to create many releases from both branches, or your deployment process is different between branches, then you will need to use channels. [Channels](/docs/releases/channels) are a feature in Octopus that lets you model differences in releases: :::figure ![](/docs/img/deployments/patterns/images/3278470.png) ::: In this example, packages that start with 2.x go to the "Stable" channel, while packages that start with 3.x go to the "Early Adopter" channel. :::div{.hint} **Tip: Channels aren't branches** When designing channels in Octopus, don't think about channels as another name for branches: - **Branches** can be short-lived and tend to get merged, and model the way code changes in the system. 
- **Channels** are often long-lived, and model your release process. For example, [Google Chrome has four different channels](https://www.chromium.org/getting-involved/dev-channel) (Stable, Beta, Dev, and Canary). Their channels are designed around users' tolerance for bleeding edge features vs. stability. Underneath, they may have many release branches contributing to those channels. It's important to realize that **branches will map to different channels over time**. For example, right now, packages from the `release/v2` branch might map to your "Stable" channel in Octopus, while packages from `release/v3` go to your "Early Adopter" channel. Eventually, `release/v3` will become more and more stable, and packages from it will go to your Stable channel, while `release/v4` packages will begin to go to your Early Adopter channel. ::: ### Feature branches Feature branches are usually short-lived, and allow developers to work on a new feature in isolation. When the feature is complete, it is merged back to the `trunk` or the `main` (default) branch. Often, feature branches are not deployed, and so don't need to be mapped in Octopus. :::figure ![](/docs/img/deployments/patterns/images/3278442.png) ::: If feature branches do need to be deployed, then you can create NuGet packages from them, and then release them with Octopus as per normal. To keep feature branch packages separate from release-ready packages, [we recommend using SemVer tags](https://docs.nuget.org/create/versioning#really-brief-introduction-to-semver) in the NuGet package version. You should be able to [configure your build server to generate version numbers based on the feature branch](https://octopus.com/blog/teamcity-version-numbers-based-on-branches). 
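For instance, a SemVer pre-release tag keeps feature branch packages visibly separate from release-ready ones (the package and branch names here are hypothetical):

```
Acme.Web.2.1.0.nupkg               # release-ready package built from main
Acme.Web.2.1.0-featurelogin.nupkg  # feature branch package (SemVer pre-release tag)
```

Under SemVer precedence rules, `2.1.0-featurelogin` sorts before `2.1.0`, and channel version rules can route packages with pre-release tags to a different channel than release-ready packages.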
:::figure ![](/docs/img/deployments/patterns/images/3278443.png) ::: Again, channels can be used to make it easier to create releases for feature branches: :::figure ![](/docs/img/deployments/patterns/images/3278471.png) ::: ### Environment branches A final branching strategy that we see is to use a branch per environment that gets deployed to. Code is promoted from one environment to another by merging code between branches. :::figure ![](/docs/img/deployments/patterns/images/3278444.png) ::: We do not like or recommend this strategy, as it violates the principle of [Build your Binaries Once](https://octopus.com/blog/build-your-binaries-once). - The code that will eventually run in production may not match 100% the code run during testing. - It's easy for a merge to go wrong and result in different code than you expected running in production. - Packages have to be rebuilt, and different dependencies might be used. You can make this work in Octopus by creating a package for each environment and pushing them to environment-specific [feeds](/docs/packaging-applications/package-repositories), and then binding the NuGet feed selector in your package steps to an environment-scoped variable: However, on the whole, this isn't a scenario we've set out to support in Octopus, and we don't believe it's a good idea in general. ## Other considerations The above section describes common branching strategies and how they map to NuGet packages, releases and channels in Octopus. However, depending on your release process, there may be other things to consider. Below are some issues that often come up in relation to branching and Octopus. ### Multiple branches can be "currently deployed" at the same time Normally in Octopus, a single release for a project is deployed to a single environment at a time - for example, only one release is "currently" in Production. 
When you have multiple active release branches, or sometimes even feature branches, it might be that you actually have more than one "current" release. For example: - The stable channel is deployed to the same web servers as the Early Adopter channel, but each goes to a different IIS website. - The stable channel and Early Adopter channels go to different web servers. - Each feature branch goes to its own virtual directory. Your dashboard in Octopus should reflect this reality by displaying each channel individually: :::figure ![](/docs/img/deployments/patterns/images/3278472.png) ::: ### My branches are very different, and I need my deployment process to work differently between them Sometimes a new branch might introduce a new component that needs to be deployed, which doesn't exist in the old branch. If you use channels, you can scope deployment steps and variables to channels to support this. For example, the Rate Service package was added as part of v3, so currently only applies to the Early Adopter channel: :::figure ![](/docs/img/deployments/patterns/images/3278473.png) ::: Likewise, it has variables that only apply on Early Adopter: :::figure ![](/docs/img/deployments/patterns/images/3278474.png) ::: For more advanced uses, you may need to clone your project. ### We sometimes need to make hotfixes that are deployed straight to staging/production Hotfixes are a special kind of release branch, but typically have a shorter lifecycle - they may need to go directly to production to fix a critical issue, and might skip certain deployment steps. 
Again, channels can handle this by creating a Hotfix channel, and assigning the Hotfix channel a different lifecycle: :::figure ![](/docs/img/deployments/patterns/images/3278475.png) ::: Likewise, steps can be defined that apply to the Stable channel, but not to the Hotfix channel: When releases are created for the Hotfix channel, they can then be deployed straight to production: :::figure ![](/docs/img/deployments/patterns/images/3278476.png) ::: While stable releases still follow the usual testing lifecycle: :::figure ![](/docs/img/deployments/patterns/images/3278477.png) ::: ### We need to deploy different components depending on whether it's a "full" release or a "partial" release You might have a large project with many components. Sometimes you only need to deploy a single component, while other times you may need to deploy all components together. This can be modeled by creating a channel per component, plus a channel for a release of all components. :::figure ![](/docs/img/deployments/patterns/images/3278478.png) ::: Steps can then be scoped to their individual channel as well as the major release channel: :::figure ![](/docs/img/deployments/patterns/images/3278479.png) ::: When creating the release, you can then choose whether the release is for an individual component or all components: :::figure ![](/docs/img/deployments/patterns/images/3278480.png) ::: ## Learn more - [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1). # Terraform plugin cache Source: https://octopus.com/docs/deployments/terraform/plugin-cache.md Terraform allows plugins to be cached to improve performance and remove the need to download plugins with each Terraform operation. However, there are considerations to take into account when configuring Terraform to use a plugin cache directory. The Terraform steps in Octopus expose a **Terraform plugin cache directory** field. 
When specified, the steps will copy the contents of the directory into the Calamari working directory, and then set the `TF_PLUGIN_CACHE_DIR` environment variable to point to the copied directory. Copying the directory allows multiple Octopus workers to reuse plugins while avoiding the [concurrency issue](https://github.com/hashicorp/terraform/issues/25849) in Terraform. The downside to referencing a copied directory is that all newly downloaded plugins will not be retained. To retain complete control over how Terraform accesses a plugin cache directory, leave the **Terraform plugin cache directory** field blank, and define the environment variables to be passed to Terraform directly on the step. This allows the `TF_PLUGIN_CACHE_DIR` environment variable (or any others) to be set to any value. However, when configuring these values manually, it is the responsibility of the end user to account for the [concurrency limitations](https://github.com/hashicorp/terraform/issues/25849) in Terraform. You can find additional information on the settings used by Octopus to manage concurrency [here](/docs/administration/managing-infrastructure/run-multiple-processes-on-a-target-simultaneously). # Approvals with Manual Interventions Source: https://octopus.com/docs/getting-started/first-deployment/approvals-with-manual-interventions.md The **Manual Intervention Required** step lets you add approvals or manual checks to your deployment process. When manual intervention occurs, the deployment will pause and wait for approval or rejection from a member of a nominated responsible team. ## Add manual intervention step 1. From the *Hello world deployment* project you created earlier, click **Process** in the left menu. 2. Click **Add Step**. 3. Select the **Other** category to filter the types of steps. 4. Locate the Manual Intervention Required card and click **Add Step**. 
:::figure ![Add Manual Intervention Required step to deployment process](/docs/img/getting-started/first-deployment/images/manual-intervention-step.png) ::: ### Step name You can leave this as the default *Manual Intervention Required*. ### Instructions 5. Copy the message below and paste it into the **Instructions** field. ``` Please verify the Production environment is ready before proceeding. ``` ### Responsible Teams 6. Select **Octopus Administrators** and **Octopus Managers** from the **Responsible Teams** dropdown list. ### Environments 7. Select **Run only for specific environments**. 8. Select **Production** from the **Environments** dropdown list. You can skip the other sections of this page for this tutorial. ## Reorder deployment steps Currently, your deployment process will run manual intervention after the script step. In a real deployment scenario, it makes more sense to run manual intervention before any other step. 1. Click the overflow menu **⋮** next to the **Filter by name** search box and click **Reorder Steps**. 2. Reorder the steps so manual intervention is at the top of the list. 3. Click **Done**. 4. **Save** your deployment process. :::figure ![Reorder steps](/docs/img/getting-started/first-deployment/images/reorder-steps.png) ::: ## Release and deploy 1. Create a new release and deploy it through to the Production environment. You will notice manual intervention doesn’t run in the Development or Staging environments. When the deployment reaches Production, it will pause and request approval. :::figure ![Manual intervention is required in production](/docs/img/getting-started/first-deployment/images/manual-intervention.png) ::: Your project is coming together well! Next, let's add a [deployment target](/docs/getting-started/first-deployment/add-deployment-targets). ### All guides in this tutorial series 1. [First deployment](/docs/getting-started/first-deployment) 2. 
[Define and use variables](/docs/getting-started/first-deployment/define-and-use-variables) 3. Approvals with manual interventions (this page) 4. [Add deployment targets](/docs/getting-started/first-deployment/add-deployment-targets) 5. [Deploy a sample package](/docs/getting-started/first-deployment/deploy-a-package) ### Further reading for approvals - [Manual Intervention and Approvals](/docs/projects/built-in-step-templates/manual-intervention-and-approvals) - [Deployments](/docs/deployments) - [Patterns and Practices](/docs/deployments/patterns) # Runbook specific variables Source: https://octopus.com/docs/getting-started/first-runbook-run/runbook-specific-variables.md Octopus allows you to define variables and scope them for use in different environments when running a Runbook. Variables allow you to have a consistent Runbook process across your infrastructure without having to hard-code or manually update configuration settings that differ across environments, deployment targets, channels, or tenants. In addition to environments, you can scope variables to specific Runbooks. 1. From the *Hello world* project you created earlier, click **Variables** in the left menu. 1. Enter **Helloworld.Greeting** into the variable name column on the first row of the table. 1. Add **Hello, Development Runbook** into the value column. 1. Click the **Scope** column and select the `Development` environment and *Hello runbook* process. 1. Click **ADD ANOTHER VALUE** button. 1. Add **Hello, Test Runbook** and scope it to the `Test` environment and *Hello runbook* process. 1. Click **ADD ANOTHER VALUE** button. 1. Add **Hello, Production Runbook** and scope it to the `Production` environment and *Hello runbook* process. 1. Click the **SAVE** button. :::figure ![The hello world variables](/docs/img/getting-started/first-runbook-run/images/variables.png) ::: :::div{.hint} During a runbook run or deployment, Octopus will select the most specifically scoped variable that applies. 
In the screenshot above, when running *Hello Runbook* in **Production**, Octopus will select `Hello, Production Runbook`. When running a different runbook or doing a deployment to **Production**, Octopus will select `Hello, Production`. ::: Steps in the runbook process can reference the variables. 1. Click **Runbooks** on the left menu. 1. Click *Hello Runbook* in the list of runbooks. 1. Click **Process** in the runbook menu. 1. Select the script step. 1. Change the script in the script step based on your language of choice:
      PowerShell ```powershell Write-Host $OctopusParameters["Helloworld.Greeting"] ```
      Bash ```bash greeting=$(get_octopusvariable "Helloworld.Greeting") echo $greeting ```
:::div{.hint} If you are using Octopus Cloud, Bash scripts require you to select the **Hosted Ubuntu** worker pool. The **Default Worker Pool** is running Windows and doesn't have Bash installed. ::: 6. Click the **SAVE** button. 7. Click the **RUN...** button, select an environment, and run the Runbook. :::figure ![The results of the hello world runbook run with variables](/docs/img/getting-started/first-runbook-run/images/runbook-run-with-variables.png) ::: The next step will [add deployment targets to run runbooks on](/docs/getting-started/first-runbook-run/add-runbook-deployment-targets). **Further Reading** For further reading on Runbook variables please see: - [Runbook Variables](/docs/runbooks/runbook-variables) - [Runbook Documentation](/docs/runbooks) - [Runbook Examples](/docs/runbooks/runbook-examples) # Glossary Source: https://octopus.com/docs/getting-started/glossary.md Octopus Deploy is a deployment tool. It takes your build server's packages and artifacts and deploys them to various targets using a safe and consistent process. Targets you can deploy to include Windows, Linux, Azure, AWS, and Kubernetes. We do our best to make Octopus Deploy as user friendly as possible. However, it covers such a wide range of technologies that it can, at times, be complex. Our recommendation for learning Octopus Deploy is similar to learning any tool. 1. Start with a Proof of Concept (POC) to understand the core concepts. 2. Assign a pilot project to learn how to use the tool to deploy to Production. 3. Bring on other projects and learn how to scale Octopus Deploy. As you proceed through each phase you will need to learn new terms and concepts. This page breaks down those terms and concepts by the phase in which we think it is most useful to learn them. ## POC phase terms When first setting up a POC or Hello World project you will become familiar with the following terms and concepts. 
- **Octopus Server**: responsible for hosting the Octopus Web Portal, the REST API, and orchestrating deployments. - **Self-Hosted**: When the Octopus Server application is installed on your infrastructure. You manage all upgrades and other maintenance, including when an upgrade occurs and to what version. - [**Octopus Cloud**](/docs/octopus-cloud): When the Octopus Server application is hosted by Octopus Deploy (the company). We manage all the upgrades and maintenance, and we determine when to upgrade and the version to upgrade to. - [**Infrastructure**](/docs/infrastructure): made up of the servers, services, and accounts where the Octopus Server will deploy your software. - [**Tentacle**](/docs/security/octopus-tentacle-communication/): The service responsible for facilitating communication between the Octopus Server and your [Linux](/docs/infrastructure/deployment-targets/linux/) or [Windows-based](/docs/infrastructure/deployment-targets/tentacle/windows) servers. - [**Listening Tentacle**](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended): The Tentacle communication mode in which all traffic is inbound from the Octopus Server to the Tentacle. The Tentacle is the TCP server, and Octopus Server is the TCP client. - [**Polling Tentacle**](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles): The Tentacle communication mode in which all traffic is outbound from the Tentacle to the Octopus Server. The Tentacle is the TCP client, and Octopus Server is the TCP server. - [**Deployment Targets**](/docs/infrastructure/deployment-targets): The servers, machines, or cloud services where you will deploy your software and services. 
- [**Environments**](/docs/infrastructure/environments): Environments let you organize your deployment targets (whether on-premises servers or cloud services) into groups representing the different stages of your deployment pipeline, for example, development, test, and production. - [**Projects**](/docs/projects): contain the deployment process, configuration variables, and runbooks to deploy and manage your software. - [**Deployment Process**](/docs/projects/deployment-process): The recipe for deploying your software. You define the recipe by adding steps and variables to a project. - [**Deployment Steps**](/docs/projects/steps): The specific action (or set of actions) executed as part of the deployment process each time your software is deployed. - [**Release**](/docs/releases): A snapshot of the deployment process and the associated assets (packages, scripts, variables) as they existed when the release was created. - [**Manual Interventions**](/docs/projects/built-in-step-templates/manual-intervention-and-approvals): The approval step in Octopus Deploy. Manual interventions can be scoped to specific teams and environments so they can be skipped on deployments to dev or testing but required for deployments to production. - [**Variables**](/docs/projects/variables): A value stored in the Octopus Server for use in different phases of your deployments. Variables can be scoped to environments, steps, and more. Variables allow you to have a consistent deployment process across your infrastructure without having to hard-code or manually update configuration settings that differ across environments, deployment targets, channels, or tenants. - **Library**: where you store build artifacts and other assets that can be used across multiple projects. - [**Packages**](/docs/packaging-applications): An archive ([zip, tar, Nuget](/docs/packaging-applications/#supported-formats)) that contains your application assets (binaries, .js files, .css files, .html files, etc.). 
- [**Feed**](/docs/packaging-applications/package-repositories): The package repository. Octopus Deploy has a built-in feed, as well as support for external feeds such as TeamCity, Azure DevOps, Docker, MyGet, Maven, Artifactory, Cloudsmith, GitHub, and more. - [**Deployments**](/docs/deployments) are the execution of the deployment process with all the associated details as they existed when the release was created. - **Raw Log**: The unfiltered and raw look at the deployment log. During the deployment Octopus will capture the output of each step and save it for review. - **Task Log**: The raw log formatted so it is easier to read on a web page. - **Task History**: The audit history of the deployment. Includes who and when a deployment was triggered, who and when a manual intervention was approved, and more. ## Pilot phase terms As you move on from the POC phase to the Pilot phase you should familiarize yourself with these terms and concepts. - **Octopus Server** - **Task**: A unit of work performed by the Octopus Server. A task can be a deployment, a machine health check, a runbook run, and more. All tasks are dropped onto the task queue and picked up in a FIFO order (unless the task is scheduled to run at a specific time). - [**Task Cap**](/docs/support/increase-the-octopus-server-task-cap): How many concurrent tasks the Octopus Server can process. For self-hosted instances this can be increased from the default of 5. - [**Spaces**](/docs/administration/spaces): A feature built in to Octopus Server to allow you to partition your server for different teams and projects. Each space has its own projects, teams, environments, infrastructure, library, and more. - **Infrastructure** - [**Workers**](/docs/infrastructure/workers): Workers are machines that can execute tasks that don't need to be run on the Octopus Server or individual deployment targets. - [**Worker Pools**](/docs/infrastructure/workers/worker-pools): A group of workers. 
One pool might be in a particular network security zone. Another pool might have a specific set of tools installed. - [**Accounts**](/docs/infrastructure/deployment-targets/#accounts): Credentials that are used during your deployments, including things like username/password, tokens, Azure and AWS credentials, and SSH key pairs. - **Projects** - [**Runbooks**](/docs/runbooks): Runbooks automate routine maintenance and emergency operations tasks like infrastructure provisioning, database management, and website failover and restoration. - **Library** - [**Lifecycles**](/docs/releases/lifecycles): Give you control over the way releases of your software are promoted between your environments. - [**Variable Sets**](/docs/projects/variables/library-variable-sets): Collections of variables that can be shared between multiple projects. - [**Step Templates**](/docs/projects/custom-step-templates): Pre-configured steps created by you to be reused in multiple projects. - [**Community Step Templates**](/docs/projects/community-step-templates): Step templates contributed by the Octopus Community. - **Deployments** - [**Deployment Changes**](/docs/releases/deployment-changes): The summarization of all the releases rolled up and included since the previous deployment to the deployment environment. - [**Artifacts**](/docs/projects/deployment-process/artifacts): Files collected from remote machines during the deployment which can be downloaded from the Octopus Web Portal for review. ## General adoption phase terms After the pilot phase is successful it is time to start bringing other projects on board. As you do that you should familiarize yourself with these terms and concepts. - **Octopus Server** - **Instance**: The database, file share, and 1 to N nodes running the Octopus Server service. Each self-hosted Octopus Deploy license allows for three active instances. 
- [**Node**](/docs/administration/high-availability/maintain/maintain-high-availability-nodes): An individual server running the Octopus Server in an Octopus Instance. - [**High Availability**](/docs/administration/high-availability): High availability is where you run multiple Octopus Servers to distribute load and tasks between them for a single Octopus Deploy instance. - **Infrastructure** - [**Built-in Worker**](/docs/security/built-in-worker): The underlying worker built in to the Octopus Server to allow you to run part of your deployment process without the need to install an external worker. Please note, this only applies to self-hosted Octopus. The built-in worker is disabled in Octopus Cloud. - [**Health Check**](/docs/infrastructure/deployment-targets/machine-policies/#health-check): A task Octopus periodically runs on deployment targets and workers to ensure that they are available. - [**Machine Policies**](/docs/infrastructure/deployment-targets/machine-policies): Groups of settings that can be applied to Tentacle and SSH endpoints used for health checks, updating calamari, and more. - [**Machine Proxies**](/docs/infrastructure/deployment-targets/proxy-support): Machine proxies allow you to specify a proxy server for Octopus to use when communicating with Tentacles or SSH Targets; you can also specify a proxy server when a Tentacle and the Octopus Server make web requests to other servers. - **Projects** - [**Projects**](/docs/projects) contain the deployment process, configuration variables, and runbooks to deploy and manage your software. - [**Channels**](/docs/releases/channels/): How a [lifecycle](/docs/releases/lifecycles) is associated with a project. Every project has at least one channel. - [**Runbook Publishing**](/docs/runbooks/runbook-publishing): A snapshot of the runbook process and associated assets (packages, scripts, variables) as they existed when the snapshot was created. 
- [**Triggers**](/docs/projects/project-triggers): Triggers automate your deployments and runbooks by responding to deployment target changes or time-based schedules. - **Library** - [**Script Modules**](/docs/deployments/custom-scripts/script-modules): Script modules let you use language-specific functions that can be used in deployment processes across multiple projects. - **Deployments** - [**Guided Failure Mode**](/docs/releases/guided-failures): An option to prompt a user to intervene when a deployment encounters an error so the deployment can continue. # Username and password accounts Source: https://octopus.com/docs/infrastructure/accounts/username-and-password.md A username/password account can be used to connect [SSH deployment targets](/docs/infrastructure/deployment-targets/linux/ssh-target/) and services like Google Cloud Platform if you are using the [Kubernetes](/docs/deployments/kubernetes) functionality in Octopus. ## Enabling username and password authentication on Linux {#UsernameandPassword-EnablingUsername&PasswordAuthentication} Depending on your SSH target machine's distribution you may need to enable password authentication. To allow the Octopus Server to connect using the provided credentials, you will need to modify the `sshd_config` file on the target machine: 1. Open the /etc/ssh/sshd_config file. 1. Find the line that contains: `PasswordAuthentication` and change it to: `PasswordAuthentication yes`. 1. Restart the SSH service under root privileges: `service ssh restart`. If you experience problems connecting, it may help to try connecting directly to the target machine using these credentials through a client like PuTTY. This will help eliminate any network-related problems with your Octopus configuration.
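For convenience, the edit in the steps above can be scripted. The following is a minimal sketch under assumed conditions (GNU sed and a Debian/Ubuntu file layout); it is demonstrated on a local sample copy so the change can be inspected before applying it to the real file:

```shell
# Sketch only: shown against a sample copy, not the live /etc/ssh/sshd_config.
printf '%s\n' 'Port 22' '#PasswordAuthentication no' > sshd_config.sample
# Uncomment/force the PasswordAuthentication directive to "yes".
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' sshd_config.sample
grep '^PasswordAuthentication' sshd_config.sample
```

On the real target, run the same `sed` against `/etc/ssh/sshd_config` as root, then restart the service with `service ssh restart`.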
:::div{.warning} **Different distributions use different conventions** While the above instructions should work on common platforms like Ubuntu or Red Hat, you may need to double-check the details for specific instructions relating to SSH authentication on the target operating system. There are many different Linux-based distributions, and some of these have their own unique way of doing things. For this reason we cannot guarantee that these SSH instructions will work in every case. ::: ## Create a username and password account {#UsernameandPassword-Creatingtheaccount} 1. Navigate to **Deploy ➜ Manage ➜ Accounts** and click **ADD ACCOUNT**. 1. Select **Username/Password** from the drop-down menu. 1. Give the account a name, for instance, **SSH backup server** or **Google**. 1. Add a description. 1. Add the username and password you use to authenticate against the remote host. 1. If you want to restrict which environments can use the account, select only the environments that are allowed to use the account. If you don't select any environments, all environments will be allowed to use the account. 1. Click **SAVE**. The account is now ready to be used when you configure your [SSH deployment target](/docs/infrastructure/deployment-targets/linux/ssh-target). # Remove Octopus Target Command Source: https://octopus.com/docs/infrastructure/deployment-targets/dynamic-infrastructure/remove-octopustarget.md ## Delete target Command: **_Remove-OctopusTarget_** | Parameter | Value | | ----------------- | -------------------------------------- | | `-targetIdOrName` | The Name or Id of the target to delete | Example: ```powershell Remove-OctopusTarget -targetIdOrName "My Azure Web Application" ``` # Troubleshooting Tentacles Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/troubleshooting.md All 'classic' problems of TCP networking: firewalls, proxies, timeouts, DNS issues, and so on can affect Octopus Tentacles.
This guide will help you track down these issues when either a machine cannot be "Discovered" (Listening Tentacles) or "Registered" (Polling Tentacles) with the Octopus Server, or a previously working machine fails a health-check with errors from the networking stack. :::div{.problem} **WARNING** A breaking change in Tentacle releases with version 6.3+ means that all versions above 6.2.277 will require .NET 4.8 or above to run. This is a Microsoft dependency due to EOL for older .NET versions. ::: ## Identify the problem If you are having problems with a previously-working machine, or you've successfully "Discovered" or "Registered" a machine but can't get communication to work afterwards, you can find information in four places: 1. If the machine has been included in a Health Check or Deployment, examine the Raw Task Log. There's a link to this on the page containing the details of the Health Check or Deployment, which can usually be located using the *Tasks* page in the Octopus Web Portal. 2. On the *Infrastructure* page of the Octopus Web Portal, click on the problem machine and select the *Connectivity* tab. There's often specific information about the communication status of the machine here. 3. In the Octopus Web Portal, open **Configuration ➜ Diagnostics**. Information on this page can be helpful to work out what's going on in the Octopus installation. Look at the information under *Server logs*; searching for the machine's name or IP address can turn up useful information. 4. On the target itself you can inspect the Tentacle [log files](/docs/support/log-files) to see what is going on during a deployment or health check. ## Check and Restart the Octopus and Tentacle services Before following the steps below, it can be worthwhile to restart the Octopus and Tentacle services, and refresh the browser you're using to connect to the Octopus Web Portal. Neither action *should* fix a communication problem, but sometimes they can help flush a problem out.
### Check the Octopus and Tentacle services are running If you're successfully connecting to the Octopus Web Portal with your web browser, you can be confident the Octopus Server service is running. The Tentacle Manager usually shows the correct service status, but it pays to double-check. *On the Tentacle machine*, open the Windows Services Control Panel applet (`services.msc`) and look for "OctopusDeploy Tentacle". Verify that the service is in the "Running" state. **If the service is not running...** If the Tentacle service is not running, you can try to start it from the Services applet. Allow 30 seconds for the service to start, then refresh the Services screen. **If the Tentacle service keeps running**, go back to the Octopus Web Portal and try Health Checking the affected machine again. **If the service stops**, it is likely that the service is crashing during startup; this can be caused by a number of things, most of which can be diagnosed from the Tentacle log files. Inspect these yourself, and either send the [log files](/docs/support/log-files) or extracts from them showing the issue to the Octopus Deploy Support email address for assistance. If the service is running, continue to the next step. ### Restart the Octopus services Open the Octopus Manager app, and select **Restart**. Alternatively, open the **Services** app, find **OctopusDeploy**, and click restart. ### Restart the Tentacle service Open the Tentacle Manager app, and select **Restart**. Alternatively, open the **Services** app, find **OctopusDeploy Tentacle**, and click restart. ## Communication mode At this point it's worth briefly revisiting the concept of **Listening Tentacles** and **Polling Tentacles**. As you troubleshoot problems with your Tentacles, please pay attention to which communication mode they are configured for. Review [Tentacle communication modes](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication).
### Specific troubleshooting for each communication mode - [Listening Tentacles](/docs/infrastructure/deployment-targets/tentacle/troubleshooting/troubleshooting-listening) - [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/troubleshooting/troubleshooting-polling) # Automating Tentacle installation Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/automating-tentacle-installation.md The Tentacle agent can be automatically installed from the command line. This is very useful if you're deploying to a large number of servers, or you're provisioning servers automatically. :::div{.warning} **Cloning Tentacle VMs** In a virtualized environment, it may be desirable to install Tentacle on a base virtual machine image, and clone this image to create multiple machines. If you choose to do this, please **do not complete the configuration wizard** before taking the snapshot. The configuration wizard generates a unique per-machine cryptographic certificate that should not be duplicated. Instead, use PowerShell to automate configuration after the clone has been materialized. ::: ## Tentacle installers \{#AutomatingTentacleinstallation-Tentacleinstallers} Tentacle comes in an MSI that can be deployed via group policy or other means. ### Download the Tentacle MSI The latest Tentacle MSI can always be [downloaded from the Octopus Deploy downloads page](https://octopus.com/downloads). Permalinks to always get the latest MSIs are: - 32-bit: [https://octopus.com/downloads/latest/WindowsX86/OctopusTentacle](https://octopus.com/downloads/latest/WindowsX86/OctopusTentacle) - 64-bit: [https://octopus.com/downloads/latest/WindowsX64/OctopusTentacle](https://octopus.com/downloads/latest/WindowsX64/OctopusTentacle) To install the MSI silently run the following command: ```bash msiexec /i Octopus.Tentacle..msi /quiet ``` By default, the Tentacle files are installed under **%programfiles(x86)%**. 
You can change the installation directory with the following command: ```bash msiexec INSTALLLOCATION=C:\YourDirectory /i Octopus.Tentacle..msi /quiet ``` :::div{.problem} While you can set a custom INSTALLLOCATION for the Tentacle, please be aware that upgrades initiated by Octopus Server will install the upgraded Tentacle in the default location. This may have an impact if you are using the [Service Watchdog](/docs/administration/managing-infrastructure/service-watchdog). ::: ## Configuration \{#AutomatingTentacleinstallation-Configuration} The MSI installer simply extracts files and adds some shortcuts and event log sources. The actual configuration of Tentacle is done later, and this can be automated too. To configure the Tentacle in listening or polling mode, it's easiest to run the installation wizard once, and at the end, use the **Show Script** option in the setup wizard. This will show you the command-line equivalent to configure a Tentacle. ### Advanced configuration options When configuring your Tentacle, you can configure advanced options, such as [proxies](/docs/infrastructure/deployment-targets/proxy-support/), [machine policies](/docs/infrastructure/deployment-targets/machine-policies/), and [tenants](/docs/tenants/tenant-infrastructure), which can also be automated. Use the setup wizard to configure the Tentacle, and click the **Show Script** link, which will show you the command-line equivalent to configure the Tentacle.
## Example: Listening Tentacle \{#AutomatingTentacleinstallation-Example-ListeningTentacle} The following example configures a [listening Tentacle](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#listening-tentacles-recommended), and registers it with an Octopus Server: **Using Tentacle.exe to create Listening Tentacle instance** ```bash cd "C:\Program Files\Octopus Deploy\Tentacle" Tentacle.exe create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle.config" --console Tentacle.exe new-certificate --instance "Tentacle" --if-blank --console Tentacle.exe configure --instance "Tentacle" --reset-trust --console Tentacle.exe configure --instance "Tentacle" --home "C:\Octopus" --app "C:\Octopus\Applications" --port "10933" --console Tentacle.exe configure --instance "Tentacle" --trust "YOUR_OCTOPUS_THUMBPRINT" --console "netsh" advfirewall firewall add rule "name=Octopus Deploy Tentacle" dir=in action=allow protocol=TCP localport=10933 Tentacle.exe register-with --instance "Tentacle" --server "http://YOUR_OCTOPUS" --apiKey="API-YOUR_API_KEY" --role "web-server" --environment "Staging" --comms-style TentaclePassive --console Tentacle.exe service --instance "Tentacle" --install --start --console ``` You can also register a Tentacle with the Octopus Server after it has been installed by using Octopus.Client (i.e. register-with could be omitted above and the following could be used after the instance has started. 
See below for how to obtain the Tentacle's thumbprint): **Using Octopus.Client to register a Tentacle in an Octopus Server** ```powershell Add-Type -Path 'Newtonsoft.Json.dll' Add-Type -Path 'Octopus.Client.dll' $octopusURI = 'https://your-octopus-url' $octopusApiKey = 'API-YOUR-KEY' $endpoint = new-object Octopus.Client.OctopusServerEndpoint $octopusURI, $octopusApiKey $repository = new-object Octopus.Client.OctopusRepository $endpoint $tentacle = New-Object Octopus.Client.Model.MachineResource $tentacle.name = "Tentacle registered from client" $tentacle.EnvironmentIds.Add("Environments-1") $tentacle.Roles.Add("WebServer") $tentacleEndpoint = New-Object Octopus.Client.Model.Endpoints.ListeningTentacleEndpointResource $tentacle.EndPoint = $tentacleEndpoint $tentacle.Endpoint.Uri = "https://YOUR_TENTACLE:10933" $tentacle.Endpoint.Thumbprint = "YOUR_TENTACLE_THUMBPRINT" $repository.machines.create($tentacle) ``` :::div{.hint} Want to register your Tentacles another way? Take a look at our [examples](/docs/octopus-rest-api/examples/deployment-targets/) for ways to register Tentacles using the [Octopus REST API](/docs/octopus-rest-api). 
::: ## Example: Polling Tentacle \{#AutomatingTentacleinstallation-Example-PollingTentacle} The following example configures a [Polling Tentacle](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles), and registers it with an Octopus Server: **Polling Tentacle** ```bash cd "C:\Program Files\Octopus Deploy\Tentacle" Tentacle.exe create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle.config" --console Tentacle.exe new-certificate --instance "Tentacle" --if-blank --console Tentacle.exe configure --instance "Tentacle" --reset-trust --console Tentacle.exe configure --instance "Tentacle" --home "C:\Octopus" --app "C:\Octopus\Applications" --noListen "True" --console Tentacle.exe register-with --instance "Tentacle" --server "http://YOUR_OCTOPUS" --name "YOUR_TENTACLE_NAME" --apiKey "API-YOUR_API_KEY" --comms-style "TentacleActive" --server-comms-port "10943" --force --environment "YOUR_TENTACLE_ENVIRONMENTS" --role "YOUR_TENTACLE_TARGET_TAG" --console Tentacle.exe service --instance "Tentacle" --install --start --console ``` :::div{.hint} **Tips:** - If you are running this from a PowerShell remote session, make sure to add `--console` at the end of each command to force Tentacle.exe not to run as a service. - Want to register your Tentacles another way? Take a look at our [examples](/docs/octopus-rest-api/examples/deployment-targets/) for ways to register Tentacles using the [Octopus REST API](/docs/octopus-rest-api). ::: ## Obtaining the Tentacle thumbprint \{#AutomatingTentacleinstallation-tentaclethumbprintObtainingtheTentacleThumbprint} If you don't know the thumbprint for the above PowerShell scripts, it can be obtained with the following command line option: ```bash Tentacle.exe show-thumbprint --instance "Tentacle" --nologo ``` ## Export and import Tentacle certificates without a profile When the Tentacle agent is configured, the default behavior is to generate a new X.509 certificate. 
When automating the provisioning of Tentacles on a machine, however, you may run into problems when trying to generate a certificate when running as a user without a profile loaded. A simple workaround is to generate a certificate on one machine (such as your workstation), export it to a file, and then import that certificate when provisioning Tentacles. ## Generating and exporting a certificate Install the Tentacle agent on a computer, and run the following command: ```powershell tentacle.exe new-certificate -e MyFile.txt ``` The output file will now contain a base-64 encoded version of a PKCS#12 export of the X.509 certificate and corresponding private key. This file is now ready to be used in your setup scripts. ## Importing a certificate When automatically provisioning your Tentacle, the commands typically look something like this: ```powershell Tentacle.exe create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle\Tentacle.config" --console Tentacle.exe new-certificate --instance "Tentacle" --console Tentacle.exe configure --instance "Tentacle" --home "C:\Octopus" --console ... ``` Replace the `new-certificate` command with `import-certificate`. For example: ```powershell Tentacle.exe create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle\Tentacle.config" --console Tentacle.exe import-certificate --instance "Tentacle" -f MyFile.txt --console Tentacle.exe configure --instance "Tentacle" --home "C:\Octopus" --console ... ``` ## Desired State Configuration Tentacles can also be installed via [Desired State Configuration](https://docs.microsoft.com/en-us/powershell/scripting/dsc/overview/overview) (DSC). Using the module from the [OctopusDSC GitHub repository](https://www.powershellgallery.com/packages/OctopusDSC), you can add, remove, start and stop Tentacles in either Polling or Listening mode. 
The following PowerShell script will install a Tentacle listening on port `10933` against the Octopus Server at `https://YOUR_OCTOPUS`, add it to the `Development` environment and assign the `web-server` and `app-server` target tags: **DSC Configuration** ```powershell Configuration SampleConfig { param ($ApiKey, $OctopusServerUrl, $Environments, $Roles, $ListenPort) Import-DscResource -Module OctopusDSC Node "localhost" { cTentacleAgent OctopusTentacle { Ensure = "Present" State = "Started" # Tentacle instance name. Leave it as 'Tentacle' unless you have more # than one instance Name = "Tentacle" # Registration - all parameters required ApiKey = $ApiKey OctopusServerUrl = $OctopusServerUrl Environments = $Environments Roles = $Roles # Optional settings ListenPort = $ListenPort DefaultApplicationDirectory = "C:\Applications" } } } # Execute the configuration above to create a mof file SampleConfig -ApiKey "API-YOUR_API_KEY" -OctopusServerUrl "https://YOUR_OCTOPUS/" -Environments @("Development") -Roles @("web-server", "app-server") -ListenPort 10933 # Run the configuration Start-DscConfiguration .\SampleConfig -Verbose -wait # Test the configuration ran successfully Test-DscConfiguration ``` ### Settings and properties To review the latest available settings and properties, refer to the [OctopusDSC Tentacle readme.md](https://github.com/OctopusDeploy/OctopusDSC/blob/master/README-cTentacleAgent.md) in the GitHub repository. DSC can be applied in various ways, such as [Group Policy](https://sdmsoftware.com/group-policy-blog/desired-state-configuration/desired-state-configuration-and-group-policy-come-together/), a [DSC Pull Server](https://docs.microsoft.com/en-us/powershell/scripting/dsc/pull-server/pullserver), [Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-dsc-overview), or even via configuration management tools such as [Chef](https://docs.chef.io/resources/dsc_resource/) or [Puppet](https://github.com/puppetlabs/puppetlabs-dsc). 
A good resource to learn more about DSC is the [Channel 9 Getting Started with DSC series](https://channel9.msdn.com/Series/Getting-Started-with-PowerShell-DSC). For an in-depth look, check out the [sample walk-through](/docs/infrastructure/deployment-targets/tentacle/windows/azure-virtual-machines/via-an-arm-template-with-dsc) of how to use DSC with an Azure ARM template to deploy and configure the Tentacle on an Azure VM. ## Rootless Instance Creation {#rootless-instance-creation} Creating a named instance with the `--instance` parameter as shown in the examples above will register the instance details in a central registry so it can be easily managed via its unique name. This central registry on the target machine is under `C:\ProgramData\Octopus` on Windows and `/etc/octopus` on other platforms. For some high-security, low-trust environments, access to these locations may not be possible, so Octopus supports creating Tentacle instances that isolate all their configuration in a single directory. Omitting the `--instance` and `--configuration` parameters from the [create-instance](/docs/octopus-rest-api/tentacle.exe-command-line/create-instance) command will create the `Tentacle.config` configuration file in the current working directory of the executing process. As such, it will not require any elevated permissions to create. However, relevant OS permissions may still be necessary depending on the ports used. To manage this instance, all subsequent commands must be run either with the executable invoked from the initial configuration directory or with the `--config` parameter pointing to the configuration file that was created in that directory. For example, running the following commands: ```bash mkdir ~/mytentacle && cd ~/mytentacle tentacle create-instance ``` will create a Tentacle configuration file in `~/mytentacle` without needing access to the shared registry (typically stored on Linux at `/etc/octopus`).
Subsequent commands to this instance can be performed by running the command directly from that location: ```bash cd ~/mytentacle tentacle configure --trust F9EFD9D31A04767AD73869F89408F587E12CB23C ``` ### Service Limitations {#service-limitations} Due to the non-uniquely-named nature of these instances, only one such instance type can be registered as a service at any given time. An optional mechanism for running this instance is to use the [agent](/docs/octopus-rest-api/tentacle.exe-command-line/agent/) command to start and run the Tentacle process inline. The [delete-instance](/docs/octopus-rest-api/tentacle.exe-command-line/delete-instance) command will also have no effect, since its purpose is largely to remove the instance details from the registry while preserving the configuration on disk. # Configure and apply a Kubernetes Service Source: https://octopus.com/docs/kubernetes/steps/kubernetes-service.md [Service resources](https://oc.to/KubernetesServiceResource) expose Pod resources either internally within the Kubernetes cluster, or externally to public clients. The `Configure and apply a Kubernetes Service` step can be used to configure and deploy a Service resource. ## Service name Each Service resource requires a unique name, defined in the `Name` field. The name must consist of lowercase alphanumeric characters or '-', and must start and end with an alphanumeric character. ## Service type A Service resource can be one of three different types: * Cluster IP * Node Port * Load Balancer A Cluster IP Service resource provides a private IP address that applications deployed within the Kubernetes cluster can use to access other Pod resources. :::figure ![Cluster IP](/docs/deployments/kubernetes/cluster-ip.svg) ::: A Node Port Service resource provides the same internal IP address that a Cluster IP Service resource does. In addition, it creates a port on each Kubernetes node that directs traffic to the Service resource.
This makes the service accessible from any node, and if the nodes have public IP addresses then the Node Port Service resource is also publicly accessible. :::figure ![Node Port](/docs/deployments/kubernetes/node-port.svg) ::: A LoadBalancer Service resource provides the same Cluster IP and Node Ports that the other two service resources provide. In addition, it will create a cloud load balancer that directs traffic to the node ports. The particular load balancer that is created depends on the environment in which the LoadBalancer Service resource is created. In AWS, an ELB or ALB can be created. Azure or Google Cloud will create their respective load balancers. :::figure ![Load balancer](/docs/deployments/kubernetes/loadbalancer.svg) ::: ## Cluster IP address The `Cluster IP Address` field can be used to optionally assign a fixed internal IP address to the Service resource. ## Ports Each port exposed by the Service resource has four common fields: Name, Port, Target Port and Protocol. The `Name` field assigns an optional name to the port. This name can be used by Ingress resource objects. The `Port` field defines the internal port on the Service resource that internal applications can use. The `Target Port` field defines the name or number of the port exposed by a container. The `Protocol` field defines the protocol exposed by the port. It can be `TCP` or `UDP`. If the Service resource is a NodePort or LoadBalancer, then there is an additional optional `Node Port` field that defines the port exposed on the nodes that direct traffic to the Service resource. If not defined, a port number will be automatically assigned. :::figure ![Service ports](/docs/deployments/kubernetes/ports.svg) ::: ### Service labels [Labels](https://oc.to/KubernetesLabels) are optional name/value pairs that are assigned to the Service resource. 
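For reference, the fields described above correspond to a plain Kubernetes Service manifest along these lines (the name, ports, IP, and label values are illustrative, not taken from the step):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook-service        # Service name: lowercase alphanumerics and '-'
  labels:
    app: guestbook               # optional Service labels
spec:
  type: NodePort                 # Cluster IP, Node Port, or Load Balancer
  clusterIP: 10.0.0.50           # optional fixed internal IP (Cluster IP Address field)
  ports:
    - name: web                  # optional port name, usable by Ingress resources
      port: 80                   # internal port exposed by the Service
      targetPort: 8080           # name or number of the container port
      protocol: TCP              # TCP or UDP
      nodePort: 30080            # optional; auto-assigned if omitted
  selector:
    app: guestbook               # labels that matching Pod resources must carry
```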
### Service selector labels

[Selector Labels](https://oc.to/KubernetesLabels) define the optional labels that must exist on the Pod resources in order for this Service resource to send traffic to them.

:::div{.hint}
There are some advanced use cases where creating a Service resource without selectors is useful. Refer to the [Kubernetes documentation](https://oc.to/KubernetesServicesWithoutSelectors) for more information.
:::

## Learn more

- [Kubernetes blog posts](https://octopus.com/blog/tag/kubernetes/1)

:::div{.hint}
**Step updates**

**2024.1:**

- `Deploy Kubernetes service resource` was renamed to `Configure and apply a Kubernetes Service`.
:::

# Octopus Cloud Task Cap

Source: https://octopus.com/docs/octopus-cloud/task-cap.md

Every Octopus Deploy instance has a set number of concurrent tasks it can process. That number of concurrent tasks is known as the Octopus Task Cap. A task can be:

- Deployments
- Runbook runs
- Retention policies
- Health checks
- Let’s Encrypt
- Process triggers
- Process subscriptions
- Script console runs
- Sync built-in package repository
- Sync community library step-templates
- Tentacle upgrades
- Calamari upgrades
- Active Directory sync

The most common tasks are deployments and runbook runs. The default task cap for Octopus Cloud instances is based on the license tier:

- Starter: 5
- Professional: 5
- Enterprise: 20

Self-hosted customers have more control over their task cap, although every self-hosted instance also starts out with a task cap of 5. A higher task cap requires more hosting resources. Self-hosted customers can change their instance's task cap via the Octopus Deploy UI because they take on the responsibility of allocating resources and paying any additional Azure, AWS, or GCP fees.

## Increasing the Task Cap for Octopus Cloud

Octopus Cloud customers must [contact our Sales team](https://octopus.com/company/contact) to increase the task cap.
Octopus Cloud provides the following Task Cap options:

- Starter: 5
- Professional: 5, 10, 20
- Enterprise: 20, 40, 80, 160

Increasing the task cap will incur a corresponding increase in platform fees. Deployments and runbook runs are computationally expensive; more concurrent deployments and runbook runs require more resources from the Octopus Cloud platform.

We assign resources to each Octopus Cloud instance based on the task cap. Changing the task cap changes those resources, which requires a small outage as the instance and database are reprovisioned. We will wait until your next maintenance window to perform that reprovisioning, so you might not see a change in the task cap until the next day.

**Important:** 5, 10, 20, 40, 80, and 160 are the only options we offer; there are no options between those tiers. For example, no Octopus Cloud instance can have a task cap of 15, 34, 45, or 68. These options are meant to cover the majority of use cases. If you need a task cap higher than 160, please [contact our Sales team](https://octopus.com/company/contact) to discuss your use case.

## How to choose a task cap

We recommend task caps based on the number and duration of deployments required for a production deployment. Deployments and runbook runs are the most common tasks; deployments typically take longer than runbook runs. Production deployments are time-constrained: they are done off-hours during an outage window.

**Important:** These tables represent the _MAX_ number of deployments. Additional tasks such as runbook runs, retention policies, or health checks can reduce that number. Use these tables as guidelines.
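Each cell in the tables below is simple arithmetic: the task cap multiplied by the number of deployment slots that fit in the window. A quick sketch of that calculation (the numbers here reproduce the first row of the Task Cap 5 table):

```bash
# max deployments = task cap * (window length in minutes / deployment duration)
task_cap=5
window_hours=2
deployment_minutes=10

max_deployments=$(( task_cap * window_hours * 60 / deployment_minutes ))
echo "$max_deployments"   # 60, matching the 2 Hours / 10 Minute Deployments cell
```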
### Task Cap 5

| Deployment Window | 10 Minute Deployments | 15 Minute Deployments | 30 Minute Deployments |
| ----------------- | --------------------- | --------------------- | --------------------- |
| 2 Hours | 60 Deployments | 40 Deployments | 20 Deployments |
| 4 Hours | 120 Deployments | 80 Deployments | 40 Deployments |
| 8 Hours | 240 Deployments | 160 Deployments | 80 Deployments |
| 16 Hours | 480 Deployments | 320 Deployments | 160 Deployments |
| 24 Hours | 720 Deployments | 480 Deployments | 240 Deployments |

### Task Cap 10

| Deployment Window | 10 Minute Deployments | 15 Minute Deployments | 30 Minute Deployments |
| ----------------- | --------------------- | --------------------- | --------------------- |
| 2 Hours | 120 Deployments | 80 Deployments | 40 Deployments |
| 4 Hours | 240 Deployments | 160 Deployments | 80 Deployments |
| 8 Hours | 480 Deployments | 320 Deployments | 160 Deployments |
| 16 Hours | 960 Deployments | 640 Deployments | 320 Deployments |
| 24 Hours | 1,440 Deployments | 960 Deployments | 480 Deployments |

### Task Cap 20

| Deployment Window | 10 Minute Deployments | 15 Minute Deployments | 30 Minute Deployments |
| ----------------- | --------------------- | --------------------- | --------------------- |
| 2 Hours | 240 Deployments | 160 Deployments | 80 Deployments |
| 4 Hours | 480 Deployments | 320 Deployments | 160 Deployments |
| 8 Hours | 960 Deployments | 640 Deployments | 320 Deployments |
| 16 Hours | 1,920 Deployments | 1,280 Deployments | 640 Deployments |
| 24 Hours | 2,880 Deployments | 1,920 Deployments | 960 Deployments |

### Task Cap 40

| Deployment Window | 10 Minute Deployments | 15 Minute Deployments | 30 Minute Deployments |
| ----------------- | --------------------- | --------------------- | --------------------- |
| 2 Hours | 480 Deployments | 320 Deployments | 160 Deployments |
| 4 Hours | 960 Deployments | 640 Deployments | 320 Deployments |
| 8 Hours | 1,920 Deployments | 1,280 Deployments | 640 Deployments |
| 16 Hours | 3,840 Deployments | 2,560 Deployments | 1,280 Deployments |
| 24 Hours | 5,760 Deployments | 3,840 Deployments | 1,920 Deployments |

### Task Cap 80

| Deployment Window | 10 Minute Deployments | 15 Minute Deployments | 30 Minute Deployments |
| ----------------- | --------------------- | --------------------- | --------------------- |
| 2 Hours | 960 Deployments | 640 Deployments | 320 Deployments |
| 4 Hours | 1,920 Deployments | 1,280 Deployments | 640 Deployments |
| 8 Hours | 3,840 Deployments | 2,560 Deployments | 1,280 Deployments |
| 16 Hours | 7,680 Deployments | 5,120 Deployments | 2,560 Deployments |
| 24 Hours | 11,520 Deployments | 7,680 Deployments | 3,840 Deployments |

### Task Cap 160

| Deployment Window | 10 Minute Deployments | 15 Minute Deployments | 30 Minute Deployments |
| ----------------- | --------------------- | --------------------- | --------------------- |
| 2 Hours | 1,920 Deployments | 1,280 Deployments | 640 Deployments |
| 4 Hours | 3,840 Deployments | 2,560 Deployments | 1,280 Deployments |
| 8 Hours | 7,680 Deployments | 5,120 Deployments | 2,560 Deployments |
| 16 Hours | 15,360 Deployments | 10,240 Deployments | 5,120 Deployments |
| 24 Hours | 23,040 Deployments | 15,360 Deployments | 7,680 Deployments |

# octopus deployment-target listening-tentacle view

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-listening-tentacle-view.md

View a Listening Tentacle deployment target in Octopus Deploy

```text
Usage:
  octopus deployment-target listening-tentacle view {<name> | <id>} [flags]

Flags:
  -w, --web   Open in web browser

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the
[samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus deployment-target listening-tentacle view 'EU'
octopus deployment-target listening-tentacle view Machines-100
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Deployment targets

Source: https://octopus.com/docs/octopus-rest-api/examples/deployment-targets.md

You can use the REST API to create and manage your [deployment targets](/docs/infrastructure/deployment-targets) in Octopus. Typical tasks can include:

# Tentacle.exe command line

Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line.md

**Tentacle.exe** is the executable that runs the Octopus Tentacle instance. It includes several helpful commands that allow you to manage the instance, some of which are built on top of the [Octopus Deploy HTTP API](/docs/octopus-rest-api).

## Commands {#tentacleCommandLine-Commands}

`tentacle` supports the following commands:

- **[agent](/docs/octopus-rest-api/tentacle.exe-command-line/agent)**: Starts the Tentacle Agent in debug mode.
- **[checkservices](/docs/octopus-rest-api/tentacle.exe-command-line/checkservices)**: Checks the Tentacle instances are running.
- **[configure](/docs/octopus-rest-api/tentacle.exe-command-line/configure)**: Sets Tentacle settings such as the port number and thumbprints.
- **[create-instance](/docs/octopus-rest-api/tentacle.exe-command-line/create-instance)**: Registers a new instance of the Tentacle service.
- **[delete-instance](/docs/octopus-rest-api/tentacle.exe-command-line/delete-instance)**: Deletes an instance of the Tentacle service.
- **[deregister-from](/docs/octopus-rest-api/tentacle.exe-command-line/deregister-from)**: Deregisters this deployment target from an Octopus Server.
- **[deregister-worker](/docs/octopus-rest-api/tentacle.exe-command-line/deregister-worker)**: Deregisters this worker from an Octopus Server. - **[extract](/docs/octopus-rest-api/tentacle.exe-command-line/extract)**: Extracts a NuGet package. - **[import-certificate](/docs/octopus-rest-api/tentacle.exe-command-line/import-certificate)**: Replace the certificate that Tentacle uses to authenticate itself. - **[list-instances](/docs/octopus-rest-api/tentacle.exe-command-line/list-instances)**: Lists all installed Tentacle instances. - **[new-certificate](/docs/octopus-rest-api/tentacle.exe-command-line/new-certificate)**: Creates and installs a new certificate for this Tentacle. - **[polling-proxy](/docs/octopus-rest-api/tentacle.exe-command-line/polling-proxy)**: Configure the HTTP proxy used by polling Tentacles to reach the Octopus Server. - **[poll-server](/docs/octopus-rest-api/tentacle.exe-command-line/poll-server)**: Configures an Octopus Server that this Tentacle will poll. - **[proxy](/docs/octopus-rest-api/tentacle.exe-command-line/proxy)**: Configure the HTTP proxy used by Octopus. - **[register-with](/docs/octopus-rest-api/tentacle.exe-command-line/register-with)**: Registers this machine as a deployment target with an Octopus Server. - **[register-worker](/docs/octopus-rest-api/tentacle.exe-command-line/register-worker)**: Registers this machine as a worker with an Octopus Server. - **[server-comms](/docs/octopus-rest-api/tentacle.exe-command-line/server-comms)**: Configure how the Tentacle communicates with an Octopus Server. - **[service](/docs/octopus-rest-api/tentacle.exe-command-line/service)**: Start, stop, install and configure the Tentacle service. - **[show-configuration](/docs/octopus-rest-api/tentacle.exe-command-line/show-configuration)**: Outputs the Tentacle configuration. - **[show-thumbprint](/docs/octopus-rest-api/tentacle.exe-command-line/show-thumbprint)**: Show the thumbprint of this Tentacle's certificate. 
- **[update-trust](/docs/octopus-rest-api/tentacle.exe-command-line/update-trust)**: Replaces the trusted Octopus Server thumbprint of any matching polling or listening registrations with a new thumbprint to trust.
- **[version](/docs/octopus-rest-api/tentacle.exe-command-line/version)**: Show the Tentacle version information.
- **[watchdog](/docs/octopus-rest-api/tentacle.exe-command-line/watchdog)**: Configure a scheduled task to monitor the Tentacle service(s).

## General usage {#Tentacle.exeCommandLine-Generalusage}

All commands take the form of:

```powershell
Tentacle <command> [<options>]
```

# Jenkins

Source: https://octopus.com/docs/packaging-applications/build-servers/jenkins.md

[Jenkins](https://www.jenkins.io/) is an extendable, open-source continuous integration server that makes build automation easy. Using Jenkins and Octopus Deploy together, you can:

- Use Jenkins to compile, test, and package your applications.
- Automatically trigger deployments in Octopus from Jenkins whenever a build completes.
- Automatically fail a build in Jenkins if the deployment in Octopus fails.
- Securely deploy your applications with Octopus Deploy across your infrastructure.
- Fully automate your continuous integration and continuous deployment processes.

Octopus Deploy will take those packages and push them to development, test, and production environments.

## Jenkins installation

If you need guidance installing Jenkins for the first time, see the [Jenkins documentation](https://www.jenkins.io/doc/book/installing/), or the blog post, [installing Jenkins from Scratch](https://octopus.com/blog/installing-jenkins-from-scratch).

## Install the Octopus Jenkins plugin \{#install-the-octopus-jenkins-plugin}

Plugins are central to expanding Jenkins' functionality, and a number of plugins may be needed depending on the projects you are building. Before you start, you'll need to ensure the following plugins are installed and enabled.
If you're building a .NET project:

- [MSBuild Plugin](https://plugins.jenkins.io/msbuild/): to compile your Visual Studio solution.

If you're building a Java project:

- [Maven Plugin](https://plugins.jenkins.io/maven-plugin/): to compile your Java project.

Once any of the above plugins are installed, you can then search for and install the [Octopus Deploy Plugin](https://plugins.jenkins.io/octopusdeploy/).

## Configure the Octopus Deploy plugin

After you have installed the Octopus Deploy plugin, navigate to **Manage Jenkins ➜ Global Tool Configuration** to supply the details for the Octopus CLI.

:::div{.success}
**Creating API keys**

Learn [how to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key) so the plugin can interact with your Octopus Server.
:::

### Octopus CLI

This is a good time to install the [Octopus CLI](/docs/octopus-rest-api/octopus-cli). The [OctopusDeploy Plugin](https://plugins.jenkins.io/octopusdeploy/) is a wrapper for the [Octopus CLI](/docs/octopus-rest-api/octopus-cli), the Octopus command line tool for creating and deploying releases. You can do either of these:

- Use the `dotnet tool install` command to install the [Octopus CLI Global Tool](/docs/octopus-rest-api/octopus-cli/install-global-tool); this works great on Linux and Windows, and installs the CLI to a path such as `/home/your-user-name/.dotnet/tools/dotnet-octo`.
- [Download Octopus CLI](https://octopus.com/downloads) and extract it to a folder on your Jenkins server, such as `C:\Tools\Octo` or `/usr/local/bin`.

Then we can let the plugin know where it is installed.

:::figure
![](/docs/img/packaging-applications/build-servers/jenkins/images/global-tools-cli-configure.png)
:::

### Configure system

Next, navigate to **Manage Jenkins ➜ Configure System**.

#### Octopus Server settings

Here you can create the link to your Octopus Server. You can add more than one if your organization uses multiple servers.
This is where you supply an API key. Select a Service Account with suitable permissions and see [how to create an API key](/docs/octopus-rest-api/how-to-create-an-api-key) for it.

:::figure
![](/docs/img/packaging-applications/build-servers/jenkins/images/octopusdeploy-servers.png)
:::

After the [Octopus Deploy Plugin](https://plugins.jenkins.io/octopusdeploy/) is configured, you can configure a Jenkins *Freestyle* or *Pipeline* Project.

## Build job

During our Jenkins job, we will:

1. Compile the code, and run unit tests.
2. Create NuGet packages with OctoPack.
3. Publish these NuGet packages to the Octopus Server.
4. Create a release in Octopus, ready to be deployed.

Jenkins uses the MSBuild plugin to compile .NET solutions, the Maven Plugin for Java solutions, or a variety of others depending on your tech/language stack.

## Add build steps

The Octopus Jenkins plugin comes with these Octopus Build Steps:

1. **Octopus Deploy: Package Application** Create a NuGet or Zip formatted package.
1. **Octopus Deploy: Push Build Information** Add information about the build, including work items and commit messages, that is then stored in Octopus Deploy.
1. **Octopus Deploy: Push Packages** Push packages to the Octopus Deploy built-in repository.
1. **Octopus Deploy: Create Release** Create a new release in Octopus Deploy, and optionally deploy it to an environment.
1. **Octopus Deploy: Deploy Release** Deploy an existing release to a new environment.

You can make use of any combination of these to achieve your deployment objective. Because they are Build Steps, you can have multiple instances of any of the types; Jenkins does not allow that with post-build actions.

![](/docs/img/packaging-applications/build-servers/jenkins/images/menu-options.png)

## Package application \{#package-application}

Octopus supports multiple [package formats](/docs/packaging-applications/#supported-formats) for deploying your software.
You can configure your Jenkins project to [package](/docs/octopus-rest-api/octopus-cli/pack) your application or other files on disk, without the need for any specification files, e.g. `.nuspec`. The two supported formats are `zip` and `nuget`. To see the full set of additional arguments that can be supplied, see the [pack documentation](/docs/octopus-rest-api/octopus-cli/pack).

[Pack syntax for Pipeline](/docs/packaging-applications/build-servers/jenkins/pipeline/#pack)

:::figure
![](/docs/img/packaging-applications/build-servers/jenkins/images/package-application.png)
:::

This action works well combined with the next action, `Push Packages`.

## Push packages \{#push-packages}

Octopus can be used as a [NuGet package repository](/docs/packaging-applications/package-repositories/built-in-repository); using this action you can push packages to Octopus. This action will push all packages that match the `Package paths` supplied.

:::div{.hint}
Note that the package paths defined here should be full paths, not including any wildcards.
:::

[Push Package syntax for Pipeline](/docs/packaging-applications/build-servers/jenkins/pipeline/#push)

:::figure
![](/docs/img/packaging-applications/build-servers/jenkins/images/push-packages.png)
:::

## Push build information \{#push-build-information}

Build information is passed to Octopus as a file using a custom format. The Jenkins plugin also supports this feature. For more information see the [Build Information documentation](/docs/packaging-applications/build-servers/build-information).
:::div{.info}
When using build information in release notes in conjunction with [built-in package repository triggers (formerly known as _Automatic Release Creation_)](https://octopus.com/docs/projects/project-triggers/built-in-package-repository-triggers), the build information **must** be pushed to Octopus **before** the packages are pushed to Octopus, as the release will be created as soon as the package configured for automatic release creation is pushed.
:::

The build information is associated with a package and includes:

- Build URL: A link to the build which produced the package.
- Commits: Details of the source commits related to the build.
- Issues: Issue references parsed from the commit messages.

This allows you to capture all related details and create clear traceability between build and deployment.

[Push Build Information syntax for Pipeline](/docs/packaging-applications/build-servers/jenkins/pipeline/#build-information)

:::figure
![](/docs/img/packaging-applications/build-servers/jenkins/images/push-build-information.png)
:::

As an example, here is what build information looks like when attached to a release:

:::figure
![Build information on release page](/docs/img/packaging-applications/build-servers/jenkins/images/build-information-release.png)
:::

## Creating a release \{#create-release}

Jenkins is compiling our code and publishing packages to Octopus Deploy. If we wish, we can also have Jenkins automatically create (and optionally, deploy) a release in Octopus along with other supporting actions.

:::div{.success}
**Octopus CLI more information**

Learn more about the [Octopus CLI](/docs/octopus-rest-api/octopus-cli) and the arguments it accepts.
:::

When this job runs, Jenkins will now not only build and publish packages, it will also create a release in Octopus Deploy. As an example here, we're relying on the `${BUILD_NUMBER}` value generated by Jenkins to use in the Release Version: `1.1.${BUILD_NUMBER}`.
[Create Release syntax for Pipeline](/docs/packaging-applications/build-servers/jenkins/pipeline/#create-release)

:::figure
![](/docs/img/packaging-applications/build-servers/jenkins/images/create-release.png)
:::

## Deploying releases \{#deploying-releases}

You might like to configure Jenkins to not only create a release, but deploy it to a *test environment*. This can be done by ticking the `Deploy this release after it is created?` option. Alternatively, you can use the `Deploy Release` action if you need to specify more deployment criteria, for example the channel or other required packages.

:::figure
![](/docs/img/packaging-applications/build-servers/jenkins/images/deploy-release.png)
:::

[Deploy Release syntax for Pipeline](/docs/packaging-applications/build-servers/jenkins/pipeline/#deploy-release)

A successful run looks like this:

:::figure
![](/docs/img/packaging-applications/build-servers/jenkins/images/random-quotes-successful-run.png)
:::

:::div{.success}
**Contributing to the plugin**

We welcome contributions: issues, bug fixes, and enhancements. If you are starting to work on something more detailed, please contact our [support team](https://octopus.com/support) to ensure it aligns with what we have going on, and that we are not doubling up efforts. Have a look at the [Octopus-Jenkins-Plugin repository](https://github.com/OctopusDeploy/octopus-jenkins-plugin) on GitHub. We also have the following [developer focused guidelines](https://github.com/OctopusDeploy/octopus-jenkins-plugin/blob/master/developer-guide/) to get you started working on the plugin.
:::

## Learn more

- Generate an Octopus guide for [Jenkins and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?buildServer=Jenkins)
- [Jira blog posts](https://octopus.com/blog/tag/jira/1)

# Artifactory Generic Feeds

Source: https://octopus.com/docs/packaging-applications/package-repositories/artifactory-generic-feeds.md

:::div{.warning}
From version **Octopus 2024.1.4058** we support Artifactory Generic Repositories. This functionality is behind the `ArtifactoryGenericFeedFeatureToggle` feature toggle; to request this functionality early, please contact [support](https://octopus.com/support).
:::

:::div{.warning}
To use the Artifactory Generic Feeds feature you will need a PRO or higher license of Artifactory.
:::

If you're using an Artifactory Generic Repository, you can create a feed in Octopus and use artifacts as part of your deployments. To create a feed, go to **Deploy ➜ Manage ➜ External Feeds**. Select the **Add feed** button and select the _Artifactory Generic Repository_ feed type. You will then need to provide a feed name, the Artifactory repository name, an [access token](https://oc.to/ArtifactoryAccessToken) and the repository [Artifact Path regex](https://jfrog.com/help/r/jfrog-artifactory-documentation/layout-configuration).

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feed-creation.png)
:::

Artifactory generic feeds accommodate files of any type without requiring a specific file name structure. To handle this, Artifactory supports [custom layouts](https://jfrog.com/help/r/jfrog-artifactory-documentation/layout-configuration). Custom layouts produce a regex expression that the file path and name must match, enabling Artifactory to extract the file type, version and module. This customizable package structure does not match Octopus's expected [package versioning conventions](/docs/packaging-applications/create-packages/versioning) used in other feeds.
To handle this, we depend on the Artifact Path Pattern regex expression available from Artifactory to be set on the feed in Octopus. To find the _Artifact Path Pattern_, go to **Administration ➜ Layouts ➜ Regular Expression View ➜ Resolve** and copy the Artifact Path Regex expression. A new repository will use the default `simple-layout` generated by Artifactory; this layout corresponds to the path pattern `[orgPath]/[module]/[module]-[baseRev].[ext]`.

:::div{.warning}
Important: To ensure that Octopus can correctly interact with Artifactory generic feeds, the custom layout pattern in Artifactory must include specific naming components. The layout pattern must specify the artifact name format as `[module]-[baseRev].[ext]`. It's crucial that the `[module]` and `[baseRev]` components are present, and that `[baseRev]` conforms to Semantic Versioning. For detailed guidance on configuring a custom layout that supports simple versioning, please refer to the following tutorials:

* [Creating a Simple Versioning Custom Layout in Artifactory](https://jfrog.com/help/r/jfrog-artifactory-documentation/configure-repository-layouts)
* [Modules and Path Patterns used by Repository Layouts](https://jfrog.com/help/r/jfrog-artifactory-documentation/modules-and-path-patterns-used-by-repository-layouts)
:::

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feeds-custom-layout.png)
:::

The Octopus integration with Artifactory Generic Repositories depends on the artifacts matching the repository layout, specifically on the _module_ and _baseRev_ properties. You can test whether an artifact matches the layout regex by using the _Test Artifact Path Resolution_ tool in Artifactory. When artifacts match the layout pattern, the [listing versions for a specific package](https://oc.to/ArtifactVersionSearch) endpoint will return a list of all available versions.
This also provides the package information when viewing an artifact's details in the Artifactory UI. If the package information properties (Dependency Declaration) are not visible in the Artifactory UI, Octopus will not be able to list versions for, or download, these artifacts.

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feed-package-information.png)
:::

The regex layout in Artifactory is used to [list the versions of an artifact](https://oc.to/ArtifactVersionSearch). Searching and selecting a package uses the Artifactory Query Language to search within the repository; this does not depend on the layout.

:::div{.warning}
If a package has been found and selected but fetching versions fails when creating deployments, this is likely due to the layout not matching the artifact within Artifactory.
:::

On the test page, you can search for packages; this will return the packageId expected by Octopus along with the artifact details. The expected packageId is `path/module`, where the path is the folder structure to the artifact returned from the AQL query `items.find(...)` and the module is determined by the regex expression set on the feed within Octopus.

:::div{.warning}
The package search for the feed is case-sensitive, so you must match the package's case in the Package Name field exactly to find the package. A package name can be partially searched for, or the full package name and version can be searched for. For example, for a package called 'FileTransferService-10.0.zip', you can search for 'File', 'FileTransferService', or 'FileTransferService-10.0.zip'.
:::

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feed-test.png)
:::

### Known limitations

- Due to a limitation in the now-deprecated Octo CLI, our TeamCity plugin does not support creating releases for projects that use Artifactory Generic Feeds.

### Example Repository Layout

We see quite a few customers struggle with getting a generic feed layout to work in Octopus for a package ID and version search. To help with this, a basic example is included below.

If you have a repository set up with the default `simple-layout` generated by Artifactory, the layout translates to `[orgPath]/[module]/[module]-[baseRev].[ext]`.

Using the picture below, provided by Artifactory, for an Artifact path pattern:

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feed-path-pattern.png)
:::

We can then break that down and add a repository file layout structure to it:

- `[orgPath]` = OrgPath
- `[module]` = FileTransferService
- `[module]-[baseRev]` = FileTransferService-10.0
- `.[ext]` = .zip
For this to work in an Octopus search, your Artifactory generic feed has to be designed so that:

- The module matches the package name:

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feed-file-layout1.png)
:::

- The filename also matches the module:

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feed-file-layout2.png)
:::

As mentioned previously in this documentation, the file has to have a dependency declaration for it to show in Octopus:

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feed-package-information.png)
:::

Once you have set this up, you should be able to search for the package in Octopus:

:::div{.warning}
The package search for the feed is case-sensitive, so you must match the package's case in the Package Name field exactly to find the package. A package name can be partially searched for, or the full package name and version can be searched for. For example, for a package called 'FileTransferService-10.0.zip', you can search for 'File', 'FileTransferService', or 'FileTransferService-10.0.zip'.
:::

:::figure
![](/docs/img/packaging-applications/package-repositories/images/artifactory-generic-feed-package-search-example.png)
:::

This example is based on the default `simple-layout` regex in Artifactory, and so is not a definitive example. Other layouts will work as long as you ensure your feed is designed to match the regex you are using. We recommend using the `simple-layout` as it is the easiest one to set up and use. Please contact support@octopus.com if you are still struggling to see your packages in a search, and we will do our best to help.
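The layout resolution above can be sketched with a simplified regular expression. This is not the exact regex Artifactory generates for `simple-layout` (that one uses lazy quantifiers and backreferences), just a POSIX ERE approximation showing how `module` and `baseRev` are pulled out of an artifact path; it only handles modules without a hyphen:

```bash
path="OrgPath/FileTransferService/FileTransferService-10.0.zip"

# Approximates [orgPath]/[module]/[module]-[baseRev].[ext]:
# group 1 = orgPath, 3 = module (from the filename), 4 = baseRev, 5 = ext
if [[ $path =~ ^(.+)/([^/]+)/([^/]+)-([0-9][^/]*)\.([^./]+)$ ]]; then
  module="${BASH_REMATCH[3]}"
  base_rev="${BASH_REMATCH[4]}"
  ext="${BASH_REMATCH[5]}"
  echo "module=$module baseRev=$base_rev ext=$ext"
fi
```

Here `baseRev` resolves to `10.0` and `module` to `FileTransferService`, matching the breakdown above.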
# Guides

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides.md

Octopus can consume package feeds from the [built-in repository](/docs/packaging-applications/package-repositories/built-in-repository), and supports the following external repositories:

- [Docker feeds](/docs/packaging-applications/package-repositories/docker-registries).
- [GitHub feeds](/docs/packaging-applications/package-repositories/github-feeds).
- [Maven feeds](/docs/packaging-applications/package-repositories/maven-feeds).
- [NuGet feeds](/docs/packaging-applications/package-repositories/nuget-feeds).
- Helm feeds.
- AWS ECR feeds.

This section provides instructions on how to set up a number of these external feeds from third parties for use within Octopus.

- [Configuring Container registries as external feeds in Octopus](/docs/packaging-applications/package-repositories/guides/container-registries)
- [Configuring NuGet repositories as external feeds in Octopus](/docs/packaging-applications/package-repositories/guides/nuget-repositories)
- [Configuring Maven repositories as external feeds in Octopus](/docs/packaging-applications/package-repositories/guides/maven-repositories)
- [Cloudsmith Multi-format repositories](/docs/packaging-applications/package-repositories/guides/cloudsmith-feed)

# GitLab container registry

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/container-registries/gitlab-container-registry.md

GitLab creates a Container Registry for each Project or Group. By default, container registries are disabled on self-hosted installations of GitLab. To use the container registry, you must first enable it and assign a port for the registry to listen on.

## Adding a GitLab container registry as an Octopus External Feed

Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and selecting the `Docker Container Registry` Feed type.
Give the feed a name and, in the URL field, enter the HTTP/HTTPS URL of the GitLab server with the port the container registry listens on, in the format: `https://your.gitlab.url:[GitLab container registry port]`

:::figure
![GitLab Container Registry Feed](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/gitlab-container-feed.png)
:::

Optionally add Credentials if they are required.

# ProGet NuGet repository

Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/nuget-repositories/proget-nuget-feed.md

ProGet from Inedo is a package repository that supports a number of different feed types. This guide provides instructions on how to create a NuGet feed in ProGet and connect it to Octopus Deploy as an External Feed.

## Configuring a ProGet NuGet feed

From the ProGet web portal, click on **Feeds ➜ Create New Feed**.

:::figure
![Create New Feed](/docs/img/packaging-applications/package-repositories/images/proget-create-feed.png)
:::

Select the **NuGet (.NET) Packages** option from the `Developer Libraries` category.

:::figure
![NuGet Feed](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/proget-create-nuget-feed.png)
:::

Select **No Connectors (private container packages only)** from the wizard.

:::figure
![No Connectors](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/proget-connect-proget-feed.png)
:::

Enter a name for your Feed, e.g. `ProGet-NuGet`, then click **Create Feed**.

:::figure
![Feed Name](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/proget-create-feed-name.png)
:::

The next screen allows you to set optional features for your feed; configure these features or click **Close**. Once the feed has been created, ProGet will display the `API endpoint URL` used to push packages.
In this example it's `https://proget.octopusdemos.app/nuget/ProGet-NuGet/v3/index.json`

:::figure
![API endpoint URL](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/proget-nuget-api-endpoint.png)
:::

## Adding a ProGet NuGet feed as an Octopus External Feed

Create a new Octopus Feed by navigating to **Deploy ➜ Manage ➜ External Feeds** and selecting the `NuGet Feed` Feed type. Give the feed a name and, in the URL field, enter the HTTP/HTTPS URL of the ProGet server: `https://your.proget.url/nuget/feedname/v3/index.json`

:::figure
![ProGet NuGet Feed](/docs/img/packaging-applications/package-repositories/guides/nuget-repositories/images/proget-external-feed.png)
:::

Optionally add Credentials if they are required.

# AWS S3 Bucket feeds

Source: https://octopus.com/docs/packaging-applications/package-repositories/s3-feeds.md

If you are deploying packages located in an S3 bucket, you can register them with Octopus and use them as part of your deployments. Go to **Deploy ➜ Manage ➜ External Feeds**. You can add S3 feeds by clicking the **Add feed** button. You will then need to select whether you want to explicitly specify the key and secret used to connect to your AWS account, or to use the account implicitly defined on your Octopus worker (for example, in environment variables). Provide a name for the feed, then click **Save and test**.

![](/docs/img/packaging-applications/package-repositories/images/s3-feed.png)

The AWS S3 feed will try to connect to the bucket specified as part of the package name. For example, `test-bucket/test-package` will search for the package `test-package` in the `test-bucket` bucket. The account provided as part of the feed configuration must have access to the bucket. The AWS S3 Bucket feed follows the same [package versioning conventions](/docs/packaging-applications/create-packages/versioning) as other feeds.
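As an illustration of those conventions, a file such as `test-package.1.0.0.zip` is treated as a package ID plus a version. The bash sketch below splits a name at the first `.<digit>` boundary; this is a simplification of Octopus's actual parsing, and `parse_package` is a hypothetical helper:

```shell
# Sketch: split an Octopus-style package file name into ID and version.
# Simplified assumption: the version starts at the first ".<digit>" and
# the package ID itself contains no ".<digit>" sequence.
parse_package() {
  name="${1%.*}"                 # drop the extension, e.g. ".zip"
  id="${name%%.[0-9]*}"          # everything before the first ".<digit>"
  version="${name#"$id".}"       # everything after "<id>."
  echo "$id $version"
}

parse_package "test-package.1.0.0.zip"
```

If the version part of the file name doesn't parse as a valid version, the package won't be matched by the feed search.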
On the test page, you can check whether the feed is working by searching for packages: :::figure ![](/docs/img/packaging-applications/package-repositories/images/s3-feed-test.png) ::: ## Troubleshooting AWS S3 Bucket feeds - If you receive an error `Access Denied Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown.` then either: - The bucket you are trying to access is not in your AWS account (note that bucket names are globally unique). - Your AWS account does not have sufficient permissions to access the bucket. - Octopus will connect to the bucket via its regional endpoint. If you have a lot of packages in your bucket, consider moving it to the same region where your Octopus Server is located. # Custom step templates Source: https://octopus.com/docs/projects/custom-step-templates.md Sometimes there isn't a [built-in step template](/docs/projects/built-in-step-templates/) or a [community step template](/docs/projects/community-step-templates) available that does what you need. Or perhaps several of your projects have similar or identical steps. You can create your own custom step templates to reuse across your projects. You can also share them with the community. Custom step templates can be based on a built-in or installed community step template. These custom step templates can be reused in projects and managed in the step template library. ## Creating custom step templates To create your own step template, perform the following. 1. Navigate to the **Deploy ➜ Step templates** area and click **Create Step Template**. 2. Select a built-in step to base your custom step template on. 3. Populate the step template. There are three parts to any step template: 1. Step details 2. Additional parameters 3. Settings ## Step The Step tab is where you fill out the details of what the step will do. This tab gives you exactly the same fields as you would see if you added the step type directly to your project, so it will be the most familiar. 
Any details that need to be specified at the project level can be handled using parameters. Any parameters specified in the Parameters tab will be exposed to you as [variables](/docs/projects/variables) and can be used in the same way.

## Parameters

The Parameters tab allows you to specify fields that will be filled out by the users of this step.

:::figure
![Add new step template parameter](/docs/img/projects/images/step-templates-new-parameter.png)
:::

You must give the parameter a variable name, and you can optionally add a label, help text, and a default value. You can choose the way the field will appear to a user with the **Control type** field. There are a number of options available; however, keep in mind that the end result will be a variable with a string value. Any variables you configure as parameters will be available as variables that can be used in the Step tab of the step template.

## Settings

The Settings tab allows you to give your step a name and an optional description.

## Usage

After saving your step, you'll notice another page called Usage. This page shows where the step is being used and whether the version being used is current or a previous version. A warning icon will show next to the Usage link if any projects are out-of-date. You can filter database-backed usages by project, process type, and whether they are on the latest version of the step template or not.

:::figure
![Step templates usage](/docs/img/projects/images/step-templates-usage.png)
:::

If you have [version-controlled](/docs/projects/version-control) projects that use step templates, you will be able to see a tab with version-controlled usages from up to twenty recent releases. You can filter this list to search for usage in a specific branch or use the advanced filters.
:::div{.hint} Note: The list of version-controlled usages will only include processes that have been released since converting to version control, and usages will be detected only if a version-controlled process used the step template at the time the release was created. ::: ## Custom logo Custom step templates inherit their logo from the template that was used to create them. This means that most of them will share the same logo. Fortunately this can be easily changed and each custom template can have its own unique logo. To do that, navigate to the Settings tab and upload a custom logo from there. ## Export your custom step template If you want to transport, backup, or share your custom step templates with the community, you can export a template by clicking the **Export** link. :::figure ![Export step templates](/docs/img/projects/images/step-templates-export.png) ::: Now you can take that exported template document and commit it to source control, or share it on the [Community Library](https://oc.to/community-library). :::div{.success} Take a look at the [contributing guide](https://github.com/OctopusDeploy/Library/blob/master/.github/CONTRIBUTING.md) for the Community Library and submit your step template as a [pull request](https://github.com/OctopusDeploy/Library/pulls). ::: ## Linking custom step templates to community step templates Once a day Octopus retrieves the latest step templates from the [Community Library](https://oc.to/community-library). At the end of that process it also tries to link the community step templates to any existing custom templates that have been imported manually in the past. Once the link is established, the custom template can receive updates directly from the [Community Library](https://oc.to/community-library). If all the properties **except the version property** match, the custom step template and the community step template will be linked. 
If the linking process isn't linking a template that you believe should be linked, then you may not have the latest version of the template. The easiest way to fix this problem is to manually update the template with the data from the [Community Library](https://oc.to/community-library).

## Running script-based custom step templates

You can run script-based custom step templates on a group of machines. This can be very handy for testing script-based step templates before you start using them in your projects, as well as for performing regular admin or operations functions. This should be familiar to people who have used the [script console](/docs/administration/managing-infrastructure/performance/enable-web-request-logging) in the past.

:::div{.hint}
It's important to note that you can only run script-based custom step templates. It's not currently possible to execute step templates based on other step types.
:::

To run a script-based step template, perform the following.

1. Navigate to the **Deploy ➜ Step templates** area and click the **Run** button next to the script-based custom step template or, alternatively, select a script template and click the **Run** button from the template editor page: ![Run step template](/docs/img/projects/images/step-templates-run.png)
2. Select a group of targets to run the step on. This can be done by target name or by environments and tags.
3. Enter any required parameters.
4. Click the **Run now** button.

This will execute the step as a new task. The full script can be found under the Template Parameters tab: ![Task parameters](/docs/img/projects/images/step-templates-run-task-parameters.png)

To re-run the script against different deployment targets or modify the input parameters, simply click the **Modify and re-run** button in the overflow menu (`...`).

## Common step properties

All steps have a name, which is used to identify the step.

:::div{.warning}
Be careful when changing names.
Octopus commonly uses names as a convenient identity or handle to things, and the steps and actions in a deployment process are special in that way. For example, you can use [output variables](/docs/projects/variables/output-variables) to chain steps together, and you use the name as the indexer for the output variable. E.g. `#{Octopus.Action[StepA].Output.TestResult}` ::: ## Removing step templates For projects using Config as Code, it's up to you to take care to avoid deleting any step templates required by your deployments or runbooks. See our [core design decisions](/docs/projects/version-control/unsupported-config-as-code-scenarios#core-design-decision) for more information. ## Learn more - [Blog: Creating an Octopus Deploy step template](https://octopus.com/blog/creating-an-octopus-deploy-step-template) # .NET XML configuration variables feature Source: https://octopus.com/docs/projects/steps/configuration-features/xml-configuration-variables-feature.md The .NET XML configuration variables feature is one of the [.NET configuration features](/docs/projects/steps/configuration-features/) you can enable as you define the [steps](/docs/projects/steps/) in your [deployment process](/docs/projects/deployment-process). This feature can be enabled for package deploy steps. :::figure ![.NET XML configuration variables screenshot](/docs/img/projects/steps/configuration-features/images/dotnet-xml-configuration-variables-feature.png) ::: Octopus will extract your package and parse your `*.config` files looking for any `appSettings`, `connectionStrings`, and `applicationSettings` elements where the name matches one of your [variables](/docs/projects/variables). :::div{.hint} You can perform simple convention-based replacements in .NET XML configuration files using this feature. We also have a feature tailored to [JSON, YAML, XML, and Properties configuration files](/docs/projects/steps/configuration-features/structured-configuration-variables-feature). 
If you are looking for something more flexible, we have the [Substitute Variables in Templates feature](/docs/projects/steps/configuration-features/substitute-variables-in-templates), enabling you to perform complex transformations on any kind of file.
:::

## How to use this feature

The following example shows you how to use this feature to provide your application with different configuration settings for each environment you're deploying to. In this example, we're deploying to a **Test** and a **Production** environment.

Suppose you have this `web.config` or `MyApp.exe.config` file in your package, configured for your local development environment (the development values shown here are illustrative placeholders):

```xml
<configuration>
  <appSettings>
    <add key="AWSAccessKey" value="DEVELOPMENT-KEY" />
    <add key="AWSSecretKey" value="DEVELOPMENT-SECRET" />
    <add key="WelcomeMessage" value="Hello developer!" />
  </appSettings>
  <connectionStrings>
    <add name="DBConnectionString" connectionString="Server=(local);Database=Dev-Database;Integrated Security=SSPI" />
  </connectionStrings>
</configuration>
```

1. Create the variables in Octopus. From the [project](/docs/projects) overview page, click **Variables**:
   - Enter the name for the variable, for instance, `AWSAccessKey`. **This name must match the key in your configuration file.**
   - Enter the value for the variable, for instance, `ABCDEFG`.
   - Scope the variable to the environment, for instance, `Test`.
   - Repeat the process for the **Production** environment, to give you a different value for the `AWSAccessKey` variable for each environment.
2. Repeat this for each element you want to replace in your configuration file.
3. Click **SAVE**.
In this example, you would have variables similar to the following:

| Variable Name | Value | Sensitive | Scope |
| --- | --- | --- | --- |
| `AWSAccessKey` | `ABCDEFG` | `No` | `Test` |
| `AWSAccessKey` | `XXXXXXX` | `No` | `Production` |
| `AWSSecretKey` | `1111111` | `Yes` | `Test` |
| `AWSSecretKey` | `2222222` | `Yes` | `Production` |
| `DBConnectionString` | `Server=test-server.your-company.com;Database=Test-Database;Integrated Security=SSPI` | `No` | `Test` |
| `DBConnectionString` | `Server=prod-server.your-company.com;Database=Prod-Database;Integrated Security=SSPI` | `No` | `Production` |
| `WelcomeMessage` | `Hello tester!` | `No` | `Test` |
| `WelcomeMessage` | `Hello customer!` | `No` | `Production` |

:::div{.warning}
Variables marked sensitive (`AWSSecretKey` in this example) are encrypted in the Octopus database. During deployment they are encrypted in transit, but are eventually decrypted and written in clear-text to the configuration files so your application can use the value.
:::

4. Deploy your project to the `Test` environment, and Octopus will update the configuration file to:

```xml
<configuration>
  <appSettings>
    <add key="AWSAccessKey" value="ABCDEFG" />
    <add key="AWSSecretKey" value="1111111" />
    <add key="WelcomeMessage" value="Hello tester!" />
  </appSettings>
  <connectionStrings>
    <add name="DBConnectionString" connectionString="Server=test-server.your-company.com;Database=Test-Database;Integrated Security=SSPI" />
  </connectionStrings>
</configuration>
```

5. Deploy your project to the `Production` environment, and Octopus will update the configuration file to:

```xml
<configuration>
  <appSettings>
    <add key="AWSAccessKey" value="XXXXXXX" />
    <add key="AWSSecretKey" value="2222222" />
    <add key="WelcomeMessage" value="Hello customer!" />
  </appSettings>
  <connectionStrings>
    <add name="DBConnectionString" connectionString="Server=prod-server.your-company.com;Database=Prod-Database;Integrated Security=SSPI" />
  </connectionStrings>
</configuration>
```

Values are matched based on the `key` attribute for `appSettings`, and the `name` element for `applicationSettings` and `connectionStrings`.

## Replacing variables outside appSettings, applicationSettings and connectionStrings

There may be other variables you would like Octopus to replace in your configuration files that are outside the `appSettings`, `connectionStrings`, and `applicationSettings` areas.
For example, changing the `loginUrl` for forms authentication in an ASP.NET application (`#{LoginURL}` here is an illustrative substitution variable):

```xml
<authentication mode="Forms">
  <forms loginUrl="#{LoginURL}" timeout="2880" />
</authentication>
```

Learn how to do this [with a fully worked example](/docs/projects/steps/configuration-features/configuration-transforms/environment-specific-transforms-with-sensitive-values), which describes how Octopus can take care of your deployment environments without impacting how you configure your application for your local development environment. This example uses the [.NET XML Configuration Transforms feature](/docs/projects/steps/configuration-features/configuration-transforms/) and the [Substitute Variables in Templates feature](/docs/projects/steps/configuration-features/substitute-variables-in-templates) together.

# Certificate variables

Source: https://octopus.com/docs/projects/variables/certificate-variables.md

In the variable editor, selecting *Certificate* as the [variable](/docs/projects/variables) type allows you to create a variable whose value is a certificate managed by Octopus.

:::figure
![](/docs/img/projects/variables/images/certificate-variable-select.png)
:::

Certificate variables can be [scoped](/docs/projects/variables/#scoping-variables), similar to regular text variables.

:::figure
![](/docs/img/projects/variables/images/certificate-variables-scoped.png)
:::

## Expanded properties

At deploy-time, certificate variables are expanded. For example, a variable _MyCertificate_ becomes:

| Variable | Description | Example value |
| --- | --- | --- |
| `MyCertificate` | The certificate ID | Certificates-1 |
| `MyCertificate.Type` | The variable type | Certificate |
| `MyCertificate.Name` | The user-provided name | My Development Certificate |
| `MyCertificate.Thumbprint` | The certificate thumbprint | A163E39F59560E6FE33A0299D19124B242D9B37E |
| `MyCertificate.RawOriginal` | The base64 encoded original file, exactly as it was uploaded. | |
| `MyCertificate.Password` | The password specified when the file was uploaded. | |
| `MyCertificate.Pfx` | The base64 encoded certificate in [PKCS#12](https://datatracker.ietf.org/doc/html/rfc7292#page-9) format, including the private-key if present. If the originally uploaded certificate was password-protected (i.e. `MyCertificate.Password` is not empty), then this value will also be in password-protected PFX (PKCS#12) format. | |
| `MyCertificate.Certificate` | The base64 encoded DER ASN.1 certificate. | |
| `MyCertificate.PrivateKey` | The base64 encoded DER ASN.1 private key. This will be stored and transmitted as a [sensitive variable](/docs/projects/variables/sensitive-variables). | |
| `MyCertificate.CertificatePem` | The PEM representation of the certificate (i.e. the PublicKey with header/footer). | |
| `MyCertificate.PrivateKeyPem` | The PEM representation of the private key (i.e. the PrivateKey with header/footer). | |
| `MyCertificate.ChainPem` | The PEM representation of any chain certificates (intermediate or certificate-authority). This variable does not include the primary certificate. | |
| `MyCertificate.Subject` | The X.500 distinguished name of the subject | |
| `MyCertificate.SubjectCommonName` | The un-attributed subject common name | |
| `MyCertificate.Issuer` | The X.500 distinguished name of the issuer | |
| `MyCertificate.NotBefore` | NotBefore date | 2016-06-15T13:45:30.0000000-07:00 |
| `MyCertificate.NotAfter` | NotAfter date | 2019-06-15T13:45:30.0000000-07:00 |

### Example usage

Given the certificate variable `MyCertificate`, you can access the certificate thumbprint in a script like this:
PowerShell

```powershell
Write-Host $OctopusParameters["MyCertificate.Thumbprint"]
```
Bash

```bash
thumbprint=$(get_octopusvariable "MyCertificate.Thumbprint")
echo "$thumbprint"
```
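The base64-encoded properties, such as `MyCertificate.Pfx` or `MyCertificate.RawOriginal`, can be decoded back to a binary file before use. Below is a bash sketch, with a placeholder value standing in for `get_octopusvariable "MyCertificate.Pfx"` so it can run outside a deployment:

```shell
# Stand-in for: PFX_B64=$(get_octopusvariable "MyCertificate.Pfx")
PFX_B64=$(printf 'not-a-real-pfx' | base64)   # placeholder, not a real PFX

# Decode the base64 value back to a binary .pfx file
printf '%s' "$PFX_B64" | base64 -d > my_cert.pfx
wc -c < my_cert.pfx
```

Inside an Octopus step you would replace the placeholder with the real `get_octopusvariable` call; the decode step is unchanged.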
It's possible to write the PEM representation of the certificate to a file for use directly with a web server (e.g. Apache) or a reverse proxy like NGINX. In bash, the script looks like this:

```bash
CERT=$(get_octopusvariable "MyCertificate.CertificatePem")
echo "$CERT" > my_cert.crt
```

If your certificate also contains any chain certificates (e.g. intermediate or certificate authority), they can be written to a file that contains the primary certificate too. The following example shows how to do so in bash:

```bash
CERT=$(get_octopusvariable "MyCertificate.CertificatePem")
CHAIN=$(get_octopusvariable "MyCertificate.ChainPem")
COMBINED_CHAIN="$CERT\n$CHAIN"
echo -e "$COMBINED_CHAIN" > my_combined.crt
```

If your certificate also has a private key that you need to export, you can use the `PrivateKeyPem` property in bash:

```bash
KEY=$(get_octopusvariable "MyCertificate.PrivateKeyPem")
echo "$KEY" > ssl.key
```

## Learn more

- [Variable blog posts](https://octopus.com/blog/tag/variables/1)

# Worker Pool variables

Source: https://octopus.com/docs/projects/variables/worker-pool-variables.md

Worker pool variables are [variables](/docs/projects/variables/) which can be used to select where a deployment or a [runbook](/docs/runbooks/) is executed. Steps that use workers can specify a worker pool directly on the step or have the step depend on a worker pool variable. Before you can use worker pool variables, you must set up your [worker](/docs/infrastructure/workers/) and [worker pool](/docs/infrastructure/workers/worker-pools) infrastructure.

In Octopus, you can [scope](/docs/projects/variables/getting-started/#scoping-variables) worker pools to:

- [Environments](/docs/infrastructure/environments)
- [Processes](/docs/projects/deployment-process)
- [Steps](/docs/projects/steps)
- [Channels](/docs/releases/channels)

## Add and create worker pool variables

1.
Enter the variable name, select **Open Editor**, then select the **Change Type** drop-down and choose **Worker Pool**.

:::figure
![Add worker pool variable](/docs/img/projects/variables/images/workerpoolvariable-add.png)
:::

2. The **Add Variable** window lists all the available worker pools. Select the worker pool and then define the scope of the worker pool.

:::figure
![Add worker pool variable type](/docs/img/projects/variables/images/workerpoolvariable-changetype.png)
:::

3. If required, add multiple values, binding each to the required scope.

Worker pool variables cannot be scoped to target tags or targets, as the pool is resolved during the planning phase of the deployment.

## Step Configuration

:::div{.hint}
Worker pool variables need to be configured on **all steps** in your deployment process that require them.
:::

By default, deployment steps are not configured to run on a worker pool set by a variable, so you will need to change each relevant deployment step to use the required variable.

1. Open the step and configure it to run on a worker.
2. Select **Runs on a worker from a pool selected via a variable**.
3. Pick the worker pool variable.

:::figure
![Select the worker pool variable](/docs/img/projects/variables/images/workerpoolvariable-selection.png)
:::

4. Save the step.

## Worker pool variable examples

Worker pool variables have multiple use cases to consider during set up. The benefit of worker pool variables is that you can use them separately or combine the examples below.

### Environment

The most common example is to use separate environment-specific worker pools for development, test, and production. Often these environments sit in different network segments, and production is frequently in the cloud or in a DMZ, which helps with security.
:::figure
![Environment-specific worker pool variables](/docs/img/projects/variables/images/workerpoolvariable-environments.png)
:::

### Performance and role separation

Worker pool variables enable different worker pools for different steps. For example, you could use one worker pool for application deployments and a different worker pool for database deployments. Running deployment tasks in parallel on different worker pools can improve the concurrency and performance of your deployment process.

Licensing requirements of software installed on workers may mean the software can't be justified on all workers; you may choose to install it on a small subset of workers instead.

:::figure
![Separation of roles for worker pool variables](/docs/img/projects/variables/images/workerpoolvariable-roleseparation.png)
:::

### Network and security

[Network isolation](https://en.wikipedia.org/wiki/Network_segmentation) and a [DMZ or perimeter network](https://en.wikipedia.org/wiki/DMZ_(computing)) are common at most companies, and are considered best practice for controlling the flow of traffic on your network and keeping systems separated. Using worker pool variables lets you control where your deployments or scripts run, ensuring they can't access networks they aren't permitted to access.

:::figure
![Worker pool variable network isolation](/docs/img/projects/variables/images/workerpoolvariable-networkisolation.png)
:::

### Multi-cloud and multi-region workers

[Multi-cloud](https://en.wikipedia.org/wiki/Multicloud) and multi-region strategies are commonplace.
It's common to have workloads spread over multiple clouds and locations such as: - [Azure](https://azure.microsoft.com/en-us/) - [AWS](https://aws.amazon.com/) - [GCP](https://cloud.google.com/) - On-Premises - Private Cloud :::figure ![multi-cloud worker pool variable](/docs/img/projects/variables/images/workerpoolvariable-multicloud.png) ::: ## Older versions * Worker pool variables are available from Octopus Deploy **2020.1.0** onwards. ## Learn more - [Variable blog posts](https://octopus.com/blog/tag/variables/1) - [Worker blog posts](https://octopus.com/blog/tag/workers/1) # Moving version control Source: https://octopus.com/docs/projects/version-control/moving-version-control.md Version-control is configured per project and is accessed via the **Settings ➜ Version Control** link in the project navigation menu. This page will walk you through moving an existing config as code repository to a new location. ## Moving configuration as code files Switching on version control for your project is a one-way change. You can't move the project back into the Octopus database once it's in a repository. However, you are free to move the configuration to a new folder or repository. You may need to move files from the root of the `.octopus` folder into a sub-folder because you want to divide the application into smaller components. Alternatively, you might be changing your version control strategy and want to move the configuration from the application repository to a stand-alone deployment repository (or vice-versa). Both of these scenarios are covered below. Moving the configuration location will cause a break in the version control history. When you review the history of the files after the move, you will only see the history since the moving date. For older changes, you would need to search for the deleted files to see previous changes. You will also need to pause changes to the deployment process during the move. 
You will have two copies of the deployment configuration for a short time. The migration process takes a few minutes to complete. The basic process for a move is: - Create a copy of the configuration in the new location - Update the version control settings - Remove the old configuration files ## Moving config files into a folder Before starting the move, make sure you have the latest version of the config files. Create the new folder under the `.octopus` directory and copy the `*.ocl` files into the new folder. Then commit and push your changes. You can then update the version control settings by following the steps below: - Open the project in Octopus Deploy - Open **Settings ➜ Version Control** - Expand the **Git File Storage Directory** setting and update the folder name - Click **SAVE** to store the changes - Check your process in **Deployments ➜ Process** You can now delete the files from the old location. :::div{.hint} If you receive the error `Branch has not been initialized` it is likely you haven't copied the configuration files to the new location, didn't publish your changes, or have mistyped the directory name. ::: ## Moving config files to a new repository You can move your configuration to a brand new or existing repository. Before starting the move, make sure you have the latest version of the config files. Create an `.octopus` folder in the root of your repository if it doesn't already exist, and decide whether you will add a sub-folder for the configuration files. Copy the latest version of the `*.ocl` files into the new location. Then commit and push your changes. 
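The copy step described above can be sketched in bash. This is a minimal illustration using throwaway directories and a stand-in file name; in practice you would copy between your real clones:

```shell
set -e
# Throwaway directories standing in for the old and new clones, so the
# sketch is safe to run anywhere; paths and file names are examples.
old=$(mktemp -d)
new=$(mktemp -d)
mkdir -p "$old/.octopus"
: > "$old/.octopus/deployment_process.ocl"    # stand-in config file

# The actual move: create .octopus in the new repository and copy the
# OCL files across
mkdir -p "$new/.octopus"
cp "$old"/.octopus/*.ocl "$new/.octopus/"

ls "$new/.octopus"
# then, in the new repository: git add .octopus && git commit && git push
```

Only after the files are committed and pushed to the new location should you update the version control settings in Octopus, as the following steps describe.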
- Open the project in Octopus Deploy
- Open **Settings ➜ Version Control**
- Expand **Git Repository** and enter the new **URL**
- Adjust the **Default Branch Name** if the new repository has a different main branch
- Expand **Authentication**
  - Add a **Username**
  - Enter a personal access token in the **Password** field
- Click **TEST** to check your connection to the repository
- Click **SAVE** to store the changes
- Check your process in **Deployments ➜ Process**

You can now delete the files from the old location.

:::div{.hint}
When you use the **TEST** button to check your connection, the most common issues will be an incorrect username or using your password instead of a personal access token.
:::

# Backup MySQL database

Source: https://octopus.com/docs/runbooks/runbook-examples/databases/backup-mysql-database.md

There are many different ways to back up a MySQL database. In this case, we will use the mysqldump utility provided by MySQL to dump data and table structures from a specific database. It requires the deployment target to have the MySQL installation binaries and local access to the MySQL instance where the database is hosted. In the following example, we'll use the [MySQL - Backup Database](https://library.octopus.com/step-templates/4fa6d062-d4da-4a02-849e-dec804554453/actiontemplate-mysql-backup-database) community step template.

## Create the Runbook

1. To create a runbook, navigate to **Project ➜ Operations ➜ Runbooks ➜ Add Runbook**.
2. Give the Runbook a name and click **SAVE**.
3. Click **DEFINE YOUR RUNBOOK PROCESS**, then click **ADD STEP**.
4. Add a new step template from the community library called **MySQL - backup database**.
5. Fill out all the parameters in the step.
It's best practice to use [variables](/docs/projects/variables) rather than entering the values directly in the step parameters:

| Parameter | Description | Example |
| --- | --- | --- |
| Server | Name or IP of the MySQL server | MySQL1 |
| Username | Username with rights to the database | root |
| Password | Password for the user account | MyGreatPassword! |
| Database Name | Name of the database to back up | MyDatabase |
| Port | Port number for the MySQL server | 3306 |
| Use SSL | Whether to use the SSL protocol | Checked for True, unchecked for False |
| MySQL Path | Path to binaries | C:\Program Files\MySQL\MySQL Server 5.6\bin |
| Backup Directory | Location to store backup file | C:\backups\ |

This will check that the database exists and back it up on the MySQL instance.

# GCP

Source: https://octopus.com/docs/runbooks/runbook-examples/gcp.md

Octopus is great for helping you perform repeatable and controlled deployments of your applications into [Google Cloud](https://cloud.google.com/gcp) (GCP), but you can also use it to manage the infrastructure you host there. Runbooks can be used to help automate this without having to create new deployment releases. Typical routines could be:

- Creating a Compute Engine VM instance.
- Creating a Kubernetes (k8s) cluster.
- Creating IP addresses.
- Creating and managing load-balancers and target-pools.

# Hardening Windows

Source: https://octopus.com/docs/runbooks/runbook-examples/routine/hardening-windows.md

Highly regulated industries such as finance need to make sure their Operating Systems (OS) are secure and protected from data breaches. This often requires strict controls to be implemented to reduce the attack surface. In Windows Active Directory environments, this type of hardening can be performed by implementing Group Policy Objects (GPO). In cases where GPO isn't an option, such as non-domain joined cloud servers, you could use a runbook to harden your Windows installation.
:::div{.warning}
Every installation is different and the examples provided here are only intended to demonstrate functionality. Ensure you are complying with your company's security policies when you configure any infrastructure and that your specific implementation matches your needs.
:::

## Create the runbook

To create a runbook to harden your Windows installation:

1. From your project's overview page, navigate to **Operations ➜ Runbooks**, and click **ADD RUNBOOK**.
1. Give the runbook a Name and click **SAVE**.
1. Click **DEFINE YOUR RUNBOOK PROCESS**, and then click **ADD STEP**.
1. Click **Script**, and then select the **Run a Script** step.
1. Give the step a name.
1. Choose the **Execution Location** on which to run this step.
1. In the **Inline source code** section, select **PowerShell** and add the following code:

:::div{.warning}
The following script is an example of what can be done; please review the script carefully before using it.
:::

```powershell
Function Test-RegistryValue {
    param(
        [Alias("PSPath")]
        [Parameter(Position = 0, Mandatory = $true, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)]
        [String]$Path
        ,
        [Parameter(Position = 1, Mandatory = $true)]
        [String]$Name
    )

    process {
        Write-Verbose "Path:$($Path) Name:$($Name)"
        if (Test-Path $Path) {
            Write-Verbose "Path is here"
            $Key = Get-Item -LiteralPath $Path
            Write-Verbose "Key: $($Key)"
            if ($null -ne $Key.GetValue($Name, $null)) {
                $true
            }
            else {
                $false
            }
        }
        else {
            Write-Verbose "NOT HERE"
            $false
        }
    }
}

$directory = "DIRECTORY"

$newItems = @(
    [pscustomobject]@{name='FEATURE_ENABLE_PRINT_INFO_DISCLOSURE_FIX';path='hklm:\SOFTWARE\WOW6432Node\Microsoft\Internet Explorer\Main\FeatureControl\';value=$directory}
    [pscustomobject]@{name="iexplore.exe";path='hklm:\SOFTWARE\WOW6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_ENABLE_PRINT_INFO_DISCLOSURE_FIX\';value='00000001'}
    [pscustomobject]@{name="FEATURE_ENABLE_PRINT_INFO_DISCLOSURE_FIX";path='hklm:\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\';value=$directory}
    [pscustomobject]@{name="iexplore.exe";path='hklm:\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_ENABLE_PRINT_INFO_DISCLOSURE_FIX\';value='00000001'}
    [pscustomobject]@{name="FEATURE_ALLOW_USER32_EXCEPTION_HANDLER_HARDENING";path='hklm:\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\';value=$directory}
    [pscustomobject]@{name="iexplore.exe";path='hklm:\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_ALLOW_USER32_EXCEPTION_HANDLER_HARDENING\';value='00000001'}
    [pscustomobject]@{name="FEATURE_ALLOW_USER32_EXCEPTION_HANDLER_HARDENING";path='hklm:\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\';value=$directory}
    [pscustomobject]@{name="iexplore.exe";path='hklm:\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_ALLOW_USER32_EXCEPTION_HANDLER_HARDENING\';value='00000001'}
    [pscustomobject]@{name="Virtualization";path='hklm:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\';value=$directory}
)

Write-Verbose "Microsoft Internet Explorer Cumulative Security Update (MS15-124)"
### Impact: A remote, unauthenticated attacker could exploit these vulnerabilities to conduct cross-site scripting
### attacks, elevate their privileges, execute arbitrary code or cause a denial of service condition on the targeted system.
foreach ($newItem in $newItems) {
    $keyExists = Test-RegistryValue $newItem.path $newItem.name
    If ($keyExists -eq $false) {
        Write-Verbose "New item: $($newItem.path)$($newItem.name) value:$($newItem.value)"
        try {
            if ($newItem.value -eq $directory) {
                New-Item -Name $newItem.name -Path $newItem.path -type Directory
            }
            else {
                New-Item -Name $newItem.name -Path $newItem.path -Value $newItem.value
            }
        }
        catch {
            Write-Verbose "Error writing item, most likely it already exists."
        }
    }
}

Write-Verbose "Enabled Cached Logon Credential"
### Impact: Unauthorized users can gain access to this cached information, thereby obtaining sensitive logon information.
Set-ItemProperty -Path 'hklm:\Software\Microsoft\Windows Nt\CurrentVersion\Winlogon' -Name "CachedLogonsCount" -Value "0"

Write-Verbose "Windows Update For Credentials Protection and Management (Microsoft Security Advisory 2871997)"
### Impact: If this vulnerability is successfully exploited, attackers can steal credentials of the system.
try {
    New-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\Session Manager\' -Name "CWDIllegalInDllSearch" -Value "00000001" -PropertyType "DWord"
}
catch {
    Write-Verbose "Error writing CWDIllegalInDllSearch, probably already exists"
}

Write-Verbose "Microsoft Windows Security Update Registry Key Configuration Missing (ADV180012) (Spectre/Meltdown Variant 4)"
### Impact: An attacker who has successfully exploited this vulnerability may be able to read privileged data across
### trust boundaries. Vulnerable code patterns in the operating system (OS) or in applications could allow an attacker
### to exploit this vulnerability. In the case of Just-in-Time (JIT) compilers, such as JavaScript JIT employed by
### modern web browsers, it may be possible for an attacker to supply JavaScript that produces native code that could
### give rise to an instance of CVE-2018-3639.
Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management' -Name "FeatureSettingsOverride" -Value "00000008"
Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management' -Name "FeatureSettingsOverrideMask" -Value "00000003"

Write-Verbose "Allowed Null Session"
### Impact: Unauthorized users can establish a null session and obtain sensitive information, such as usernames and/or
### the share list, which could be used in further attacks against the host.
Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\LSA' -Name "RestrictAnonymous" -Value "00000001"
Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\LSA' -Name "everyoneincludesanonymous" -Value "00000000"
Set-ItemProperty -Path 'hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer' -Name "ForceActiveDesktopOn" -Value "00000000"
Set-ItemProperty -Path 'hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer' -Name "NoActiveDesktopChanges" -Value "00000001"
Set-ItemProperty -Path 'hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer' -Name "NoActiveDesktop" -Value "00000001"
Set-ItemProperty -Path 'hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer' -Name "ShowSuperHidden" -Value "00000001"

Write-Verbose "Microsoft Windows Explorer AutoPlay Not Disabled"
### Impact: Exploiting this vulnerability can cause malicious applications to be executed unintentionally at escalated privilege.
try {
    New-ItemProperty -Path 'hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer' -Name "NoDriveTypeAutoRun" -Value "00000255" -PropertyType "DWord"
}
catch {
    Write-Verbose "Error writing NoDriveTypeAutoRun, probably already exists"
}
Set-ItemProperty -Path 'hklm:\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer' -Name "NoDriveTypeAutoRun" -Value "00000001"

Write-Verbose "Windows Registry Setting To Globally Prevent Socket Hijacking Missing"
### Impact: If this registry setting is missing, in the absence of a SO_EXCLUSIVEADDRUSE check on a listening privileged
### socket, local unprivileged users can easily hijack the socket and intercept all data meant for the privileged process.
Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Services\AFD\Parameters' -Name "ForceActiveDesktopOn" -Value "00000001"

Write-Verbose "Disable TLS 1.0"
### Impact: An attacker can exploit cryptographic flaws to conduct man-in-the-middle type attacks or to decrypt communications.
#Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client' -Name "Enabled" -Value "00000000"
#Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client' -Name "DisabledByDefault" -Value "00000001"

Write-Verbose "Disable TLS 1.1"
try {
    Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Name "DisabledByDefault" -Value "1" -Type DWord
}
catch {
    Write-Verbose "Error updating TLS 1.1 DisabledByDefault, key probably doesn't exist"
}

Write-Verbose "Disable SSL v3"
### Impact: SSL 3.0 is an obsolete and insecure protocol.
### Encryption in SSL 3.0 uses either the RC4 stream cipher, or a block cipher in CBC mode.
### RC4 is known to have biases, and the block cipher in CBC mode is vulnerable to the POODLE attack.
try {
    Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client' -Name "DisabledByDefault" -Value "00000001"
}
catch {
    Write-Verbose "Error updating SSL 3.0 Client, key probably doesn't exist"
}
try {
    Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server' -Name "Enabled" -Value "00000000"
}
catch {
    Write-Verbose "Error updating SSL 3.0 Server, key probably doesn't exist"
}

Write-Verbose "Disable RC4 Protocols"
try {
    Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128' -Name "Enabled" -Value "00000000"
}
catch {
    Write-Verbose "Error updating Ciphers\RC4 128/128, key probably doesn't exist"
}
try {
    Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128' -Name "Enabled" -Value "00000000"
}
catch {
    Write-Verbose "Error updating Ciphers\RC4 40/128, key probably doesn't exist"
}
try {
    Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128' -Name "Enabled" -Value "00000000"
}
catch {
    Write-Verbose "Error updating Ciphers\RC4 56/128, key probably doesn't exist"
}
try {
    Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168' -Name "Enabled" -Value "00000000"
}
catch {
    Write-Verbose "Error updating Ciphers\Triple DES 168, key probably doesn't exist"
}
try {
    Set-ItemProperty -Path 'hklm:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168/168' -Name "Enabled" -Value "00000000"
}
catch {
    Write-Verbose "Error updating Ciphers\Triple DES 168/168, key probably doesn't exist"
}

Write-Verbose "Microsoft Windows FragmentSmack Denial of Service Vulnerability (ADV180022)"
### Impact: A system under attack would become unresponsive with 100% CPU utilization but would recover as soon as the
### attack terminated.
Set-NetIPv4Protocol -ReassemblyLimit 0
Set-NetIPv6Protocol -ReassemblyLimit 0

Write-Verbose "MS15-011 Hardening UNC Paths Breaks GPO Access - Microsoft Group Policy Remote Code Execution Vulnerability (MS15-011)"
### Impact: The vulnerability could allow remote code execution if an attacker convinces a user with a
### domain-configured system to connect to an attacker-controlled network.
try {
    Set-ItemProperty -Path 'hklm:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths' -Name "\\*\netlogon" -Value "RequireMutualAuthentication=1, RequireIntegrity=1, RequirePrivacy=1"
}
catch {
    Write-Verbose "Error updating NetworkProvider\HardenedPaths - netlogon key probably doesn't exist"
}
try {
    Set-ItemProperty -Path 'hklm:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths' -Name "\\*\sysvol" -Value "RequireMutualAuthentication=1, RequireIntegrity=1, RequirePrivacy=1"
}
catch {
    Write-Verbose "Error updating NetworkProvider\HardenedPaths - sysvol key probably doesn't exist"
}

Write-Verbose "Windows Update for Credentials Protection and Management (Microsoft Security Advisory 2871997)"
### Impact: If this vulnerability is successfully exploited, attackers can steal credentials of the system.
try {
    Set-ItemProperty -Path 'hklm:\System\CurrentControlSet\Control\SecurityProviders\WDigest' -Name "UseLogonCredential" -Value "0"
}
catch {
    Write-Verbose "Error updating SecurityProviders\WDigest, key probably doesn't exist"
}

Write-Verbose "Enabling strong cryptography for .NET V4..."
#x64
try {
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
}
catch {
    Write-Verbose "Error updating Wow6432Node SchUseStrongCrypto, key probably doesn't exist"
}
#x86
try {
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
}
catch {
    Write-Verbose "Error updating SchUseStrongCrypto, key probably doesn't exist"
}
```

Now you can have peace of mind that the OS has been hardened.

## Samples

We have an [Octopus Admin](https://oc.to/OctopusAdminSamplesSpace) Space on our Samples instance of Octopus. You can sign in as `Guest` to take a look at these examples and more Runbooks in the `Deployment Target Management` project.

# Users and teams

Source: https://octopus.com/docs/security/users-and-teams.md

Octopus Deploy provides the most value when it is used by your whole team. Developers and testers might be allowed to deploy specific projects to pre-production environments, but not production environments. Stakeholders might be permitted to view certain projects, but not modify or deploy them. To support these scenarios, Octopus supports a permissions system based around the concept of **Teams**.

[Getting Started - Users, Roles & Teams](https://www.youtube.com/watch?v=f_JPU7sAE8M)

You can manage users and teams in the Octopus Web Portal:

- For users, navigate to **Configuration ➜ Users**.
- For teams, navigate to **Configuration ➜ Teams**.

:::figure
![](/docs/img/security/users-and-teams/images/teams.png)
:::

## User and service accounts

**User accounts** are allowed to use both the Octopus Web Portal and the Octopus API, and can authenticate with a username and password, [Active Directory credentials](/docs/security/authentication/active-directory/), or an [Octopus API key](/docs/octopus-rest-api/how-to-create-an-api-key).
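API keys are presented to the Octopus REST API in the `X-Octopus-ApiKey` request header. As a minimal sketch (the server URL and key below are placeholders, and the `IsService` property is shown on the assumption that your server version returns it for `/api/users/me`):

```powershell
# Minimal sketch of API-key authentication against the Octopus REST API.
# The URL and key are placeholders - substitute your own values.
$octopusUrl = "https://your-octopus-server"
$apiKey     = "API-XXXXXXXXXXXXXXXXXXXXXXXXXX"

# Octopus reads the key from the X-Octopus-ApiKey header;
# /api/users/me returns the account the supplied key belongs to.
$headers = @{ "X-Octopus-ApiKey" = $apiKey }
$me = Invoke-RestMethod -Uri "$octopusUrl/api/users/me" -Headers $headers

Write-Output "Authenticated as $($me.DisplayName) (service account: $($me.IsService))"
```

The same header works for service accounts, which is why an API key is the only credential they need.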
[Service accounts](/docs/security/users-and-teams/service-accounts/) are **API-only accounts** that should be used for automated services that integrate with Octopus Deploy, and can only authenticate with an [Octopus API key](/docs/octopus-rest-api/how-to-create-an-api-key/). For more information, refer to [Service accounts](/docs/security/users-and-teams/service-accounts).

:::div{.success}
You should create a different User account for each person who will use Octopus Deploy. You should create a different [Service account](/docs/security/users-and-teams/service-accounts) for each service that will integrate with Octopus Deploy.
:::

### User API Key Management

There are some things to be aware of when deleting or disabling an Octopus User:

- If a User account is deleted, any associated API keys will also be deleted and stop functioning.
- API keys cannot be transferred between Users, so if an in-use key has its associated User account deleted, that API key will no longer function, and a new API key from an active User will need to be created and used.
- Additionally, any scripts that reference a deleted API key need to be updated to use a new API key.
- A disabled user's API keys will not function. Any attempt to use them will return a `401 Unauthorized` error until the User is re-enabled.

## Inviting users {#inviting-users}

:::div{.warning}
This feature is being deprecated for Cloud users. Instead, you can follow these instructions to invite users to your [Octopus Cloud instance](/docs/getting-started/managing-octopus-subscriptions/#user-access).
:::

To streamline the process of adding multiple users, you can use the **User invites** feature to generate one or more unique registration codes bound to one or more existing teams. These links can then be issued to users so that they can register their own details and be given automatic permissions for the team(s) the codes are bound to.
:::figure
![](/docs/img/security/users-and-teams/images/user-invites.png)
:::

:::div{.hint}
Prior to version 4.0, this feature was accessed via the **Invite users** button on the **Users** page.
:::

In the example above, we are generating codes for the **Octopus Administrators** team, so anyone who uses one of the codes will automatically join that team when they have completed registration.

:::div{.warning}
Invite codes are only valid for 48 hours after being generated, so make sure you issue them quickly before they expire.
:::

## Creating teams {#create-teams}

Creating teams lets you assign the same roles to groups of users. Users can be added to or removed from multiple teams, making it easier to manage permissions for specific users and teams. You can create new teams by using the **Add Team** button.

For example, we can create a team that gives Anne and Bob access to view projects and deploy them to pre-production environments by assigning the role **Project deployer** to the team. We limit which projects and environments these permissions apply to by adding specific projects and environments to the team.

:::figure
![](/docs/img/security/users-and-teams/images/devdeployerteam.png)
:::

### Restricting project and project group access

When specifying both `Project Groups` and `Projects` filters, please be aware that these filters complement each other.
To better illustrate these filters in action, let's consider the following project structure:

| Project Groups | Projects |
| -------------- | ---------------------------- |
| GroupA | Project1, Project2, Project3 |
| GroupB | Project4, Project5 |

The following table illustrates the combination of possible permissions when specifying both `Project Groups` and `Projects` filters:

| `Project Groups` | `Projects` | Result |
| ---------------- | ---------- | ---------------------------------------- |
| `Empty` | `Empty` | Project1, Project2, Project3, Project4 and Project5 |
| `Empty` | Project1 | Project1 |
| GroupB | `Empty` | Project4 and Project5 |
| GroupA | Project5 | Project1, Project2, Project3 and Project5 |
| GroupB | Project4 | Project4 and Project5 |

## Roles

Team members can be assigned the following roles:

- **Project viewer**: Project viewers have read-only access to a project. They can see the project in their dashboard, view releases and deployments.
- **Project contributor**: Project viewer, plus: editing and viewing variables and deployment steps.
- **Project lead**: Project contributor, plus: create releases (but not deploy them).
- **Project deployer**: Project contributor, plus: deploying releases (but not creating releases).
- **Environment viewer**: View environments and their machines, but not edit them.
- **Environment manager**: View and edit environments and their machines.

Note that project leads can create releases but not deploy them, while project deployers can deploy releases but not create them - this allows you to assign these permissions independently. If you need members to be able to both create and deploy releases, you can add both roles. The roles assigned by a team can be scoped by project or environment.

:::div{.hint}
You can learn more about User Roles in our [documentation](/docs/security/users-and-teams/user-roles).
:::

## System teams

Octopus Deploy comes with several built-in teams.
The **Everyone** team always contains all users, but you can assign different roles to members of this Team (for example, you might allow everyone to view all projects and environments, but not edit anything). Out of the box, **Everyone** members can do nothing.

The second team is **Octopus Administrators.** Members of this team have permission to configure system-level concerns of Octopus. You can add or remove members from this team. We recommend only adding a few key users to this team.

The third team is **Octopus Managers.** Members of this team can manage a smaller subset of system-level functions. You can also add or remove members from this team. We recommend adding users to this team who should be able to manage teams and other top-level things in Octopus, but not be able to change how Octopus is hosted.

The fourth team is **Space Managers.** Members of this team can do everything in a given space. Out of the box, the initial user added to **Octopus Administrators** is also added as a **Space Manager**. If you do not need granular access for what users need to do with Octopus relating to Projects, Tenants, and Environments, you can add them as **Space Managers**.

# Setting the Concurrency Tag

Source: https://octopus.com/docs/tenants/guides/tenants-sharing-machine-targets/setting-the-concurrency-tag.md

The `Octopus.Task.ConcurrencyTag` system variable gives us finer control over how tasks run concurrently in Octopus. Like the variable that allows you to bypass the deployment mutex, this variable should be handled with care.

Octopus uses this variable to determine which tasks can run concurrently. For non-tenanted deployments, it has the value `#{Octopus.Project.Id}/#{Octopus.Environment.Id}`. Tenanted deployments use the value `#{Octopus.Deployment.Tenant.Id}/#{Octopus.Project.Id}/#{Octopus.Environment.Id}`.
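To illustrate how those templates group tasks, here is a sketch (illustration only; the IDs are made up) of the expanded tag values Octopus compares - tasks with equal tags queue behind each other, while tasks with different tags can run at the same time:

```powershell
# Illustration only: expand the default ConcurrencyTag templates for a few
# deployments. Equal tags mean sequential execution; different tags may run
# concurrently.
function Get-ConcurrencyTag {
    param($TenantId, $ProjectId, $EnvironmentId)
    if ($TenantId) { return "$TenantId/$ProjectId/$EnvironmentId" }  # tenanted default
    return "$ProjectId/$EnvironmentId"                               # non-tenanted default
}

# Two tenants, same project and environment: different tags, so the two
# deployments can run concurrently.
Get-ConcurrencyTag -TenantId "Tenants-1" -ProjectId "Projects-1" -EnvironmentId "Environments-1"
Get-ConcurrencyTag -TenantId "Tenants-2" -ProjectId "Projects-1" -EnvironmentId "Environments-1"

# Dropping the tenant segment collapses both deployments onto one tag,
# forcing them to run one at a time.
Get-ConcurrencyTag -TenantId $null -ProjectId "Projects-1" -EnvironmentId "Environments-1"
```

Scoping different tag values is simply a way of controlling which deployments end up with equal tags.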
If we change the value for a tenanted deployment to `#{Octopus.Project.Id}/#{Octopus.Environment.Id}`, the tenanted deployment tasks will run sequentially instead of concurrently.

In this scenario, we want to run one task per hosting group concurrently. We can do that by scoping different values to the Hosting Group tenant tags.

:::figure
![](/docs/img/tenants/guides/tenants-sharing-machine-targets/variable.png)
:::

Now the deployments for each tenant in the same hosting group will run sequentially. Let's see how that changes the deployment tasks.

# Tenant lifecycles

Source: https://octopus.com/docs/tenants/tenant-lifecycles.md

You can control which [releases](/docs/releases/) will be deployed to certain tenants using [channels](/docs/releases/channels).

:::figure
![](/docs/img/tenants/images/channel-restrict-by-tenant.png)
:::

This page discusses some scenarios for controlling release promotion for tenants:

- Implementing an early access program (EAP)
- Restricting test releases to the test team
- Pinning tenants to a release

## Implementing an early access program {#early-access-program}

Quite often, you want to involve certain customers in testing early releases of major upgrades. By using a combination of [channels](/docs/releases/channels/) and [tenant tags](/docs/tenants/tenant-tags) you can implement an opt-in early access program using tenants, making sure the beta releases are only deployed to the correct tenants and environments.

### Step 1: Create the lifecycle {#eap-step-1-lifecycle}

First, we will create a new [lifecycle](/docs/releases/lifecycles).

:::figure
![](/docs/img/tenants/images/multi-tenant-limited-lifecycle.png)
:::

:::div{.hint}
Learn more about [defining a limited lifecycle for your test channel](/docs/releases/channels).
:::

### Step 2: Configure the tenant tags {#eap-step-2-configure-tenant-tag}

Add a new tag called **2.x Beta** to a new or existing tenant tag set.
:::figure
![](/docs/img/tenants/images/multi-tenant-beta-tenant-tags.png)
:::

### Step 3: Select the tenants participating in the beta program {#eap-step-3-choose-tenants}

Add the **2.x Beta** tag to one or more tenants who are included in the beta program.

:::figure
![](/docs/img/tenants/images/multi-tenant-beta-tester.png)
:::

### Step 4: Configure a channel for the beta program {#eap-step-4-configure-channel}

Create a channel called **2.x Beta** and restrict its use to tenants tagged with **2.x Beta**.

:::figure
![](/docs/img/tenants/images/multi-tenant-beta-channel.png)
:::

### Step 5: Create a beta release {#eap-step-5-create-release}

Create a new release of the project choosing the **2.x Beta** channel for the release, and give it a [SemVer](http://semver.org/) version number like **2.0.0-beta.1**.

:::figure
![](/docs/img/tenants/images/multi-tenant-create-beta-release.png)
:::

### Step 6: Deploy {#eap-step-6-deploy}

Now when you are deploying **2.0.0-beta.1**, you will be able to select tenants participating in the Beta program and prevent selecting tenants who are not participating.

:::figure
![](/docs/img/tenants/images/multi-tenant-deploy-beta-tenants.png)
:::

## Restricting test releases {#restricting-test-releases}

You may decide to use channels as a safety measure, to restrict test releases to a limited set of test tenants. By using a combination of [channels](/docs/releases/channels/) and [tenant tags](/docs/tenants/tenant-tags) you can make sure test releases are only deployed to the correct tenants and environments.

### Step 1: Create the lifecycle {#test-step-1-lifecycle}

First, we will create a new [lifecycle](/docs/releases/lifecycles).

:::figure
![](/docs/img/tenants/images/multi-tenant-limited-lifecycle.png)
:::

:::div{.hint}
Learn more about [defining a limited lifecycle for your test channel](/docs/releases/channels).
:::

### Step 2: Configure the tenant tags {#test-step-2-configure-tenant-tag}

Add a new tag called **Tester** to a new or existing tenant tag set.

:::figure
![](/docs/img/tenants/images/multi-tenant-tester-tenant-tags.png)
:::

### Step 3: Select the tenants participating in the test program {#test-step-3-choose-tenants}

Add the **Tester** tag to one or more tenants who are included in the test program.

:::figure
![](/docs/img/tenants/images/multi-tenant-tester.png)
:::

### Step 4: Configure a channel for the test program {#test-step-4-configure-channel}

Create a channel called **1.x Test** and restrict its use to tenants tagged with **Tester**.

:::figure
![](/docs/img/tenants/images/multi-tenant-test-channel.png)
:::

### Step 5: Create a test release {#test-step-5-create-release}

Now create a release in the new **1.x Test** channel, giving it a [SemVer](http://semver.org/) pre-release version like **1.0.1-alpha.19** indicating this is a pre-release of **1.0.1** for testing purposes.

:::figure
![](/docs/img/tenants/images/multi-tenant-create-test-release.png)
:::

### Step 6: Deploy {#test-step-6-deploy}

When you deploy this release, you will be able to choose from the limited set of tenants tagged with the `Tester` tag and deploy into the test environments, but no further.

:::figure
![](/docs/img/tenants/images/multi-tenant-deploy-test-tenants.png)
:::

## Pinning tenants to a release {#pinning-tenants}

Often, you will want to disable/prevent deployments to a tenant during a period when the customer wants guarantees of stability. You can prevent deployments to tenants using a combination of [channels](/docs/releases/channels/) and [tenant tags](/docs/tenants/tenant-tags).

### Step 1: Create the upgrade ring/pinned tag {#pinned-step-1-configure-tenant-tag}

Add a new tag called **Pinned** to a new or existing tenant tag set with a color that stands out.
:::figure
![](/docs/img/tenants/images/multi-tenant-upgrade-ring-pinned.png)
:::

### Step 2: Configure the channels to prevent deployments to pinned tenants

Now we will configure the project channels to make sure we never deploy any releases to pinned tenants. We will do this using a similar method to the [EAP Beta program](#early-access-program), but in this case, we are making sure none of the channels allow deployments to tenants tagged as pinned.

1. Find the channel in your project that represents normal releases - this is called **1.x Normal** in this example.
1. Restrict deployments of releases in this channel to the following tenant tags:
   - **Early adopter**
   - **Stable**
   - **Tester**
1. Ensure the **Pinned** tenant tag is not selected on any channel.

:::figure
![](/docs/img/tenants/images/multi-tenant-pinned-tenants.png)
:::

### Step 3: Prevent deployments to a tenant by tagging them as upgrade ring/pinned

Find a tenant you want to pin and apply the **Pinned** tag, removing any other tags. This will prevent you from deploying any releases to this tenant.

![](/docs/img/tenants/images/multi-tenant-pinned-tenant-upgrade-ring.png)

# octopus deployment-target polling-tentacle

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-polling-tentacle.md

Manage Polling Tentacle deployment targets in Octopus Deploy

```text
Usage:
  octopus deployment-target polling-tentacle [command]

Available Commands:
  help        Help about any command
  list        List Polling Tentacle deployment targets
  view        View a Polling Tentacle deployment target

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations

Use "octopus deployment-target polling-tentacle [command] --help" for more information about a command.
```

## Examples

:::div{.success}
**Octopus Samples instance**

Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus deployment-target polling-tentacle list
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Polling Tentacles over WebSockets

Source: https://octopus.com/docs/infrastructure/deployment-targets/tentacle/windows/polling-tentacles-web-sockets.md

:::div{.warning}
Connecting Polling Tentacles to an [Octopus Cloud](/docs/octopus-cloud) instance over WebSockets is not currently supported.
:::

[(TCP) Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) can be set up to operate over HTTPS (Secure WebSockets) instead of raw TCP sockets. The advantage is that the port can be shared with another website (e.g. IIS or Octopus itself). The downside is the setup is a little more complicated and network communications are slightly slower. If there is an available port, we recommend using [TCP Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles).

If only ports 443 and 80 are available, it is possible to run the Octopus Web UI just on 443 (HTTPS) and a TCP Polling Tentacle on port 80. Even though it is using port 80, which is by convention HTTP, the Tentacle communications will still use TLS and be secure.

## Server setup

The following prerequisites must be met to use this feature:

- Octopus Server must be self-hosted, and not an [Octopus Cloud](/docs/octopus-cloud) instance.
- Both the Octopus Server and Tentacle must be running Windows 2012 or later.
- The server expects an SSL/TLS connection, so SSL offloading is not supported.
- The other application using the port must be using the standard Windows networking library ([HTTP.sys](https://docs.microsoft.com/en-us/iis/get-started/introduction-to-iis/introduction-to-iis-architecture#hypertext-transfer-protocol-stack-httpsys)). This includes IIS, .NET apps and Octopus itself. However, it does not include any applications that use non-HTTP.sys TCP/IP or HTTP stacks. Check your product's documentation for more information.
- The other application must be using HTTPS on that port.

### Listen address

The first step is to select a URL listen prefix. HTTP.sys handles the initial TLS handshake and then routes the request based on the HTTP headers. This means that the request can be routed based on IP, hostname and path. See the [UrlPrefix documentation](https://msdn.microsoft.com/en-us/library/windows/desktop/aa364698(v=vs.85).aspx) for the syntax and how routes are matched.

In most cases, we recommend using `+` as the host name and a unique string for the path. This will ensure that the address takes the highest precedence. For example, to listen on port 443: `https://+:443/OctopusComms`. The path should not be used by the other applications listening on the port.

An SSL certificate must be configured for the chosen address and port (the path is ignored). If an existing application (e.g. the Octopus Web UI) is already using that address and port, no extra configuration is required. If not, see the [Certificate section below](#certificate).

Once selected, the Octopus Server can be configured to listen on that prefix using the following commands:

```
.\Octopus.Server.exe service --instance OctopusServer --stop
.\Octopus.Server.exe configure --instance OctopusServer --commsListenWebSocket https://+:443/OctopusComms
.\Octopus.Server.exe service --instance OctopusServer --start
```

### Testing

To confirm that the server is successfully configured, open the listen address in your browser. If you are using `+` for the host, replace that with `localhost`.
For example `https://localhost:443/OctopusComms`. You should get a page titled `Octopus Server configured successfully`. If you get a connection refused or reset error, check the address and port, and ensure a certificate is [configured](#certificate) for that address. If you instead reach the other application listening on that port, ensure that your listen address has a [higher precedence](https://msdn.microsoft.com/en-us/library/windows/desktop/aa364698(v=vs.85).aspx) and that the server successfully bound to that address in the [server log file](/docs/support/log-files). If you encounter a certificate warning, ignore it and continue. This warning is due to the certificate not having a valid chain of trust back to a trusted certificate authority. Octopus [trusts certificates directly](https://octopus.com/blog/why-self-signed-certificates). ## Tentacle setup The setup of a WebSocket Tentacle is the same as a TCP Polling Tentacle, except for the thumbprint and the command line option used to specify the communications address. ### Registering When issuing the `register-with` command during Tentacle setup, omit the `--server-comms-port` parameter and specify the `--server-web-socket` parameter instead. The address to use is the listen prefix, with `+` replaced by the hostname and `https` replaced with `wss` (e.g. `wss://example.com:443/OctopusComms`). For example: ```powershell .\Tentacle.exe register-with --instance MyInstance --server "https://example.com/" --server-web-socket "wss://example.com:443/OctopusComms" --comms-style TentacleActive --apikey "API-YOUR-KEY" --environment "Test" --role "Web" ``` ### Changing an existing Tentacle To change an existing Tentacle to poll using WebSockets, run the following commands: ```powershell .\Tentacle.exe service --instance MyInstance --stop .\Tentacle.exe configure --reset-trust .\Tentacle.exe register-with --instance MyInstance --server "https://example.com/" --server-web-socket "wss://example.com:443/OctopusComms" --comms-style TentacleActive --apikey "API-YOUR-KEY" --environment "Test" --role "Web" .\Tentacle.exe service --instance MyInstance --start ``` ### High Availability When issuing the `poll-server` command to add additional nodes to poll, omit the `--server-comms-port` parameter and specify the `--server-web-socket` parameter instead. For example: ```powershell .\Tentacle.exe poll-server --instance MyInstance --server "https://example.com/" --server-web-socket "wss://example.com:443/OctopusComms" --apikey "API-YOUR-KEY" ``` ## Certificate Windows will need to be configured with an SSL certificate on the selected address and port. Usually this is done by the other application sharing the port. The certificate does _not_ need to have a valid chain of trust to a certificate authority, so [self-signed certificates](https://octopus.com/blog/why-self-signed-certificates) can be used. The certificate also does not need to match the hostname. It does need to be installed into the Personal certificate store of the Machine account. The easiest way to get the SSL certificate set up is to configure [Octopus to use HTTPS](/docs/security/exposing-octopus/expose-the-octopus-web-portal-over-https) on that address and port. If you need to generate a self-signed certificate, this can be done by issuing the following PowerShell command. Take note of the thumbprint generated. ```powershell New-SelfSignedCertificate -Subject "CN=Example Website" -CertStoreLocation "Cert:\localMachine\My" -KeyExportPolicy Exportable ``` If your chosen certificate has not yet been associated with the selected address and port, use the `netsh` tool to install it. For example: ```powershell netsh http add sslcert ipport=0.0.0.0:443 certhash=966857B08601B9ACA9A9F10E7D469AC521E2CD4B appid='{00112233-4455-6677-8899-AABBCCDDEEFF}' ``` For more detailed instructions, see Microsoft's [certificate HowTo](https://msdn.microsoft.com/en-us/library/ms733791(v=vs.110).aspx). ## Thumbprints Unlike other Tentacle configurations, the Tentacle must be configured to trust the thumbprint of the SSL certificate, not the thumbprint Octopus uses for other methods of Tentacle communication. This is because HTTP.sys performs the certificate exchange (not the Octopus Server) and then delegates the connection.
Both the Tentacle and server still verify that the certificate thumbprint matches the trusted thumbprint. # octopus deployment-target polling-tentacle list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-polling-tentacle-list.md List Polling Tentacle deployment targets in Octopus Deploy ```text Usage: octopus deployment-target polling-tentacle list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target polling-tentacle list octopus deployment-target polling-tentacle ls ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target polling-tentacle view Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-polling-tentacle-view.md View a Polling Tentacle deployment target in Octopus Deploy ```text Usage: octopus deployment-target polling-tentacle view { | } [flags] Flags: -w, --web Open in web browser Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy.
If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target polling-tentacle view 'EU' octopus deployment-target polling-tentacle view Machines-100 ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target ssh Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-ssh.md Manage SSH deployment targets in Octopus Deploy ```text Usage: octopus deployment-target ssh [command] Available Commands: create Create a SSH deployment target help Help about any command list List SSH deployment targets view View a SSH deployment target Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations Use "octopus deployment-target ssh [command] --help" for more information about a command. ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target ssh create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Inbound Azure Private Links Source: https://octopus.com/docs/octopus-cloud/inbound-private-links.md Inbound Azure Private Links provide private connectivity from your virtual network to your Octopus Cloud instance. They simplify network architecture and secure the connection between endpoints in Azure by eliminating data exposure to the public Internet.
:::figure ![A diagram illustrating your Azure network connected to Octopus Cloud using Inbound Azure Private Link](/docs/img/octopus-cloud/images/inbound-private-link-network-diagram.png) ::: :::div{.hint} [Azure Private Link](https://azure.microsoft.com/en-us/products/private-link) is not a service provided by Octopus Deploy. It is a Microsoft service that Octopus Deploy enables for use with your Octopus Cloud instance. Customers maintain configuration within their own network in order to use Azure Private Links. Octopus Deploy is not responsible for customer configuration. For issues with configuration, please contact Microsoft Support. ::: ## How to access this feature Inbound Azure Private Links are currently in Preview, available to a select group of customers. If you would like to access this feature, please reach out to [our support team](https://octopus.com/support) so we can discuss how best to meet your private networking requirements. We are working through a waitlist and will be in touch when we are ready to onboard you. ## Configuring an Azure Private Endpoint Once you have the feature enabled for your account, you can start using your private link by setting up your Azure Private Endpoint. To do this, you'll need the following: 1. The alias previously provided by us when configuring the feature for your account. 2. The DNS prefix that your Octopus Cloud instance uses. 3. A virtual network, subnet, and resource group for the Azure Private Endpoint to reside in. With the above prerequisites in place, create your Private Endpoint as follows: 1. In the Azure Portal, navigate to your target resource group. 2. Click "Create" and search for "Private Endpoint", then click the result from Microsoft. 3. Ensure the pre-filled Subscription and resource group are correct. 4. Give the new Private Endpoint a name and either accept or customize the generated Network Interface Name. Click "Next".
:::figure ![An example of how to fill in the basics tab while creating a private endpoint in the Azure Portal](/docs/img/octopus-cloud/images/create-private-endpoint-basics.png) ::: 5. Select "Connect to an Azure resource by resource ID or alias" and paste the provided alias into the displayed field. 6. Enter your Octopus Cloud instance's DNS prefix into the Request message field. Click "Next". :::figure ![An example of how to fill in the resource tab while creating a private endpoint in the Azure Portal](/docs/img/octopus-cloud/images/create-private-endpoint-resource.png) ::: 7. Select the virtual network and subnet for the Private Endpoint to use. Click "Next". 8. Complete the remainder of the Private Endpoint creation according to your requirements. Now that you have a Private Endpoint created and a request issued to your Octopus Cloud instance's Private Link Service, you'll need to retrieve the Private Endpoint's resource GUID. By sharing this value with us, we can ensure that we only approve Private Link requests that you configure. Retrieving this value can also be done through the Azure Portal by doing the following: 1. Navigate to the newly created Private Endpoint. 2. Click the "JSON View" button on the right of the page. :::figure ![A screenshot of a Private Endpoint in the Azure Portal showing where the JSON View button is](/docs/img/octopus-cloud/images/private-endpoint-json-view-button.png) ::: 3. In the Resource JSON pane that appears, the value you will want to retrieve is under `properties` and then `resourceGuid` :::figure ![A screenshot of a Private Endpoint's JSON View in the Azure Portal highlighting the ResourceGuid field](/docs/img/octopus-cloud/images/private-endpoint-json-resource-guid.png) ::: With these details available, get in touch with [our support team](https://octopus.com/support) and ask that the Private Endpoint be approved. 
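If you save the contents of the Resource JSON pane to a file, extracting the GUID is a one-liner. A minimal sketch (the JSON below is a trimmed-down, hypothetical example of a Private Endpoint's Resource JSON, not a real endpoint):

```python
import json

# Hypothetical, trimmed-down Resource JSON for a Private Endpoint
resource_json = """
{
  "name": "my-octopus-private-endpoint",
  "properties": {
    "provisioningState": "Succeeded",
    "resourceGuid": "00112233-4455-6677-8899-aabbccddeeff"
  }
}
"""

# The value to share with support sits under properties -> resourceGuid
resource = json.loads(resource_json)
print(resource["properties"]["resourceGuid"])
```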
Once approved, you will be able to begin accessing your Octopus Cloud instance using your new Azure Private Link Endpoint. ## Additional information Configuring your Octopus Cloud instance to support Azure Private Links brings a higher degree of privacy and security to your networking. Activating this feature introduces the following considerations: ### Static IP address change Depending on your requirements for Azure Private Links, we may need to change the IP address range your Octopus Cloud instance uses. This has an additional benefit of moving your instance to an exclusive set of IP addresses, rather than sharing an IP range with other customers. ### Dynamic workers To avoid any possibility of unintentional access, Azure Private Links are not available on the Dynamic Workers we provide to Octopus Cloud. ### Logged IP addresses When we configure your instance to allow access via Azure Private Links, client IP addresses displayed in internal logs will be replaced by the local IP addresses used by Azure's Private Link Service. This ensures the IP address shown in your audit logs accurately identifies the Private Link Service infrastructure making the connection. Other information logged, such as username, date, time, and action taken, continues to be recorded for audit and verification purposes. ### Kubernetes cluster upgrades As part of keeping your Octopus Cloud fully maintained, we upgrade the Kubernetes cluster that hosts your instance approximately quarterly. To minimize disruption during the upgrade, we will proxy the Private Link service traffic through a load balancer with an Azure public IP address for a few minutes. During this short period, traffic does not leave Azure. ### Public access maintained Adding Azure Private Links makes it possible to privately and securely connect to your Octopus Cloud from your Azure virtual network without traversing the public internet.
Access to your instance from the public internet is still permitted to ensure other use cases remain supported. ## Outbound Azure Private Links Outbound Azure Private Links provide private connectivity from your Octopus Cloud instance to resources in your virtual network. See [Outbound Azure Private Links](/docs/octopus-cloud/outbound-private-links) for details on how to configure this feature. # octopus deployment-target ssh create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-ssh-create.md Create a SSH deployment target in Octopus Deploy ```text Usage: octopus deployment-target ssh create [flags] Aliases: create, new Flags: --account string The name or ID of the SSH key pair or username/password account --environment strings Choose at least one environment for the deployment target. --fingerprint string The host fingerprint of the SSH target. --host string The hostname or IP address of the SSH target to connect to. --machine-policy string The machine policy for the deployment target to use, only required if not using the Default Machine Policy -n, --name string A short, memorable, unique name for this deployment target. --platform string The platform to use for the self-contained Calamari. Options are 'linux-x64', 'linux-arm64', 'linux-arm' or 'osx-x64' --port int The port to connect to the SSH target on. --proxy string Select whether to use a proxy to connect to this SSH target. If omitted, will connect directly. --role strings Choose at least one role that this deployment target will provide (use --tag for tag sets with validation). --runtime string The runtime to use to run Calamari on the SSH target. Options are 'self-contained' or 'mono' --tag strings Target tags in canonical format (TagSetName/TagName). 
--tenant strings Associate the deployment target with tenants --tenant-tag strings Associate the deployment target with tenant tags, should be in the format 'tag set name/tag name' --tenanted-mode string Choose the kind of deployments where this deployment target should be included. Default is 'untenanted' -w, --web Open in web browser Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target ssh create ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Google Cloud Storage feeds Source: https://octopus.com/docs/packaging-applications/package-repositories/gcs-feeds.md If you're deploying packages located in a Google Cloud Storage bucket, you can register them with Octopus and use them as part of your deployments. This lets you store your deployment packages in Google Cloud Storage and deploy them through Octopus. Go to **Deploy ➜ Manage ➜ External Feeds** to add a new feed. ## Adding a Google Cloud Storage feed To add a Google Cloud Storage feed: 1. Go to **Deploy ➜ Manage ➜ External Feeds**. 2. Click **Add feed**. 3. Select **Google Cloud Storage** as the feed type. 4. Give your feed a name. 5. Choose your authentication method: - **Service Account JSON Key**: Upload your Google Cloud service account JSON key file - **OpenID Connect**: Use OIDC authentication for short-lived credentials 6. Click **Save and test**.
:::figure ![Google Cloud Storage feed configuration showing authentication options](/docs/img/packaging-applications/package-repositories/images/gcs-feed.png) ::: ## Authentication methods ### Service Account JSON Key To use service account authentication, you'll need to create a JSON key file for a Google Cloud service account that has permission to read from your storage buckets. 1. In the Google Cloud Console, go to **IAM & Admin ➜ Service Accounts**. 2. Create a new service account or select an existing one. 3. Grant the service account the **Storage Object Viewer** role (or a custom role with `storage.objects.get` and `storage.objects.list` permissions). 4. Create and download a JSON key for the service account. 5. In Octopus, upload this JSON key file when configuring your feed. ### OpenID Connect OpenID Connect authentication provides short-lived credentials that are more secure than long-lived service account keys. To set up OIDC authentication: 1. Follow the [Google Cloud documentation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers) to create and configure a Workload Identity Federation. 2. Grant the Workload Identity Federation service account the **Storage Object Viewer** role on your storage buckets. 3. In Octopus, select **OpenID Connect** as your authentication method and configure: - **Subject**: See [OpenID Connect Subject Identifier](/docs/infrastructure/accounts/openid-connect#subject-keys) for how to customize the subject value - **Audience**: The audience value from your Workload Identity Federation (typically `https://iam.googleapis.com/projects/{project-id}/locations/global/workloadIdentityPools/{pool-id}/providers/{provider-id}`) ## Package naming The Google Cloud Storage feed searches for packages using the format `bucket-name/path/to/package`. For example, `my-deployment-bucket/releases/myapp` will search for the package `myapp` in the `my-deployment-bucket` bucket under the `releases` folder. 
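The `bucket-name/path/to/package` convention splits into a bucket name and an object prefix, with the package name being the last path segment. A sketch of that split, reusing the hypothetical bucket from the example above (illustrative only, not Octopus's actual parsing code):

```python
# Illustrative only: splitting a Google Cloud Storage feed package ID
package_id = "my-deployment-bucket/releases/myapp"

# Everything before the first '/' is the bucket; the rest is the object prefix
bucket, _, prefix = package_id.partition("/")
package_name = prefix.rsplit("/", 1)[-1]

print(bucket)        # my-deployment-bucket
print(prefix)        # releases/myapp
print(package_name)  # myapp
```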
The service account you provide must have access to the bucket. The Google Cloud Storage feed follows the same [package versioning conventions](/docs/packaging-applications/create-packages/versioning) as other feeds. Octopus supports these file formats: - `.zip` - `.tar.gz` - `.tar.bz2` - `.tgz` - `.tar.bz` ## Testing your feed On the test page, you can check whether the feed is working by searching for packages. Enter the bucket name and package name in the format `bucket-name/package-name`: :::figure ![Google Cloud Storage feed test page showing package search results](/docs/img/packaging-applications/package-repositories/images/gcs-feed-test.png) ::: ## Troubleshooting Google Cloud Storage feeds ### Access denied errors If you receive an "Access Denied" or permission error: - Check that your service account has the correct IAM permissions (at minimum `storage.objects.get` and `storage.objects.list`) - Verify the bucket exists and the name is spelled correctly - For OIDC authentication, ensure the Workload Identity Federation is configured correctly and the audience matches ### Bucket not found If Octopus can't find your bucket: - Verify you're using the correct bucket name in your package ID - Ensure the bucket is in the same project as your service account or that cross-project access is configured ### Package not found If Octopus can't find your package: - Check the package path is correct (format: `bucket-name/path/to/package`) - Verify the package file has one of the supported extensions - Ensure the package follows [Octopus versioning conventions](/docs/packaging-applications/create-packages/versioning) (e.g., `myapp.1.0.0.zip`) ## Performance considerations To reduce network latency, consider placing your Google Cloud Storage bucket in the same region as your Octopus Server. 
For deployments where Tentacles download packages directly (when `Octopus.Action.Package.DownloadOnTentacle` is set to `True`), consider placing the bucket close to your deployment targets. ## Learn more - [Package repositories](/docs/packaging-applications/package-repositories) - [Creating packages](/docs/packaging-applications/create-packages) - [Package versioning](/docs/packaging-applications/create-packages/versioning) - [OpenID Connect](/docs/infrastructure/accounts/openid-connect) - [Google Cloud Storage documentation](https://cloud.google.com/storage/docs) # Google Cloud account variables Source: https://octopus.com/docs/projects/variables/google-cloud-account-variables.md [Google Cloud accounts](/docs/infrastructure/accounts/google-cloud/) are included in a project through a project [variable](/docs/projects/variables/) of the type **Google Cloud Account**. Before you create a **Google Cloud account variable**, you need to [create a Google Cloud account](/docs/infrastructure/accounts/google-cloud) in Octopus: :::figure ![Google Cloud account variable](/docs/img/projects/variables/images/google-cloud-account-variable.png) ::: The **Add Variable** window is then displayed and lists all the Google Cloud accounts. Select the Google Cloud account you want to access from the project to assign it to the variable: :::figure ![Google Cloud account variable selection](/docs/img/projects/variables/images/google-cloud-account-variable-selection.png) ::: ## Google Cloud account variable properties The Google Cloud account variable also exposes the following properties that you can reference in a PowerShell script: | Name and description | | -------------------- | | **`JsonKey`**<br/>The JSON Key for the Google Cloud account | ### Accessing the properties in a script Each of the above properties can be referenced in PowerShell. ```powershell # For an account with a variable name of 'google cloud account' # Using $OctopusParameters Write-Host 'GoogleCloudAccount.Id=' $OctopusParameters["google cloud account"] Write-Host 'GoogleCloudAccount.JsonKey=' $OctopusParameters["google cloud account.JsonKey"] # Directly as a variable Write-Host 'GoogleCloudAccount.Id=' #{google cloud account} Write-Host 'GoogleCloudAccount.JsonKey=' #{google cloud account.JsonKey} ``` ## Add a Google Cloud account to Octopus For instructions to set up a Google Cloud account in Octopus, see [Google Cloud Accounts](/docs/infrastructure/accounts/google-cloud). ## Older versions * Google Cloud accounts are available from Octopus Deploy **2021.3** onwards. ## Learn more - [Variable blog posts](https://octopus.com/blog/tag/variables/1) - How to use the [Run gcloud in a Script](/docs/deployments/google-cloud/run-gcloud-script) step - How to create [Google Cloud accounts](/docs/infrastructure/accounts/google-cloud) # Outbound Azure Private Links Source: https://octopus.com/docs/octopus-cloud/outbound-private-links.md Outbound Azure Private Links provide private connectivity from your Octopus Cloud instance to resources in your virtual network. They simplify network architecture and secure the connection between endpoints in Azure by eliminating data exposure to the public Internet. :::figure ![A diagram illustrating your Azure network connected to Octopus Cloud using Outbound Azure Private Link](/docs/img/octopus-cloud/images/outbound-private-link-network-diagram.png) ::: :::div{.hint} [Azure Private Link](https://azure.microsoft.com/en-us/products/private-link) is not a service provided by Octopus Deploy. It is a Microsoft service that Octopus Deploy enables for use with your Octopus Cloud instance.
Customers maintain configuration within their own network in order to use Azure Private Links. Octopus Deploy is not responsible for customer configuration. For issues with configuration, please contact Microsoft Support. ::: ## How to access this feature Outbound Azure Private Links are currently in Preview, available to a select group of customers. If you would like to access this feature, please reach out to [our support team](https://octopus.com/support) so we can discuss how best to meet your private networking requirements. We are working through a waitlist and will be in touch when we are ready to onboard you. ## Prerequisites Before connecting your Octopus Cloud instance, you will need an Azure Private Link Service configured in your network. The setup of a Private Link Service involves a number of decisions specific to your network architecture, so we recommend following [Microsoft's documentation](https://learn.microsoft.com/en-us/azure/private-link/create-private-link-service-portal) to create one. Your Private Link Service must be configured with access security to allow connections from anyone with the alias, so that your Octopus Cloud instance can connect to it. Once your Private Link Service is ready, you will need its **alias** to complete the steps below. :::figure ![A screenshot of a Private Link Service in the Azure Portal showing where the alias is](/docs/img/octopus-cloud/images/private-link-service-alias.png) ::: ## Connect your Octopus Cloud instance to your Private Link Service Connecting your Octopus Cloud instance to your Private Link Service can be done in [Control Center](https://billing.octopus.com/). Users with the `Cloud Subscription Owner` role can administer the feature from the **Configuration** menu. 1. Click the **Configure** link in the **Outbound Azure private links** section. 2. Copy the alias for your Private Link Service from the Azure Portal and paste it into the displayed field. Click **Submit**.
:::figure ![A screenshot of the Control Center outbound private links configuration UI showing the alias input field](/docs/img/octopus-cloud/images/outbound-private-links-control-center-alias.png) ::: 3. A request message will be displayed. This will be used to verify the incoming private endpoint request to your network. This value will remain visible in Control Center after the dialog is dismissed. :::figure ![A screenshot of the Control Center outbound private links configuration UI showing the request message after submitting an alias](/docs/img/octopus-cloud/images/outbound-private-links-control-center-request-message.png) ::: ## Approve the incoming Private Endpoint request Once Octopus has initiated a connection to your Private Link Service, you will need to approve the incoming Private Endpoint request before traffic can flow. The incoming request will have a connection state of **Pending**. You can find it in either of these locations in the Azure Portal: - Navigate to your **Private Link Service** and select **Private endpoint connections** from the left-hand menu. - Navigate to **Private Link Center** and select **Pending connections**. Before approving, verify that the **Description** on the incoming connection matches the **request message** displayed in Control Center. This confirms the request originated from your Octopus Cloud instance. :::figure ![A screenshot of the Private endpoint connections tab on a Private Link Service in Azure Portal, showing a pending connection with the Request message column visible](/docs/img/octopus-cloud/images/outbound-private-links-pending-connection.png) ::: Once verified, select the connection and click **Approve**. ## Configure DNS DNS entries tell your Octopus Cloud instance which hostnames should be routed through the private link, rather than over the public internet. You should add an entry for each hostname you want to access privately through your Private Link Service. 
This is configured in [Control Center](https://billing.octopus.com/) from the same **Outbound Azure private links** section used in the previous step. 1. Click **Add new DNS entry**. 2. Enter the **Subdomain** and **Root Domain** of the hostname you want to route privately. Click **Add**. :::figure ![A screenshot of the Add DNS Entry dialog in Control Center showing the subdomain and root domain fields](/docs/img/octopus-cloud/images/outbound-private-links-add-dns-entry.png) ::: 3. Repeat for any additional hostnames. Note that all entries must share the same root domain. Changes to DNS entries can take up to 5 minutes to take effect. Once your DNS entries are saved, your Octopus Cloud instance will route requests to those hostnames through your Private Link Service. ## Additional information Configuring your Octopus Cloud instance to support Azure Private Links brings a higher degree of privacy and security to your networking. Activating this feature introduces the following considerations: ### Static IP address change Depending on your requirements for Azure Private Links, we may need to change the IP address range your Octopus Cloud instance uses. This has an additional benefit of moving your instance to an exclusive set of IP addresses, rather than sharing an IP range with other customers. ### Dynamic workers To avoid any possibility of unintentional access, Azure Private Links are not available on Dynamic Workers we provide to Octopus Cloud. 
# octopus deployment-target ssh list Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-ssh-list.md List SSH deployment targets in Octopus Deploy ```text Usage: octopus deployment-target ssh list [flags] Aliases: list, ls Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. ::: ```bash octopus deployment-target ssh list octopus deployment-target ssh ls ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # octopus deployment-target ssh view Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-ssh-view.md View a SSH deployment target in Octopus Deploy ```text Usage: octopus deployment-target ssh view { | } [flags] Flags: -w, --web Open in web browser Global Flags: -h, --help Show help for a command --no-prompt Disable prompting in interactive mode -f, --output-format string Specify the output format for a command ("json", "table", or "basic") (default "table") -s, --space string Specify the space for operations ``` ## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
::: ```bash octopus deployment-target ssh view 'linux-web-server' octopus deployment-target ssh view Machines-100 ``` ## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # IP address allow list Source: https://octopus.com/docs/octopus-cloud/ip-address-allow-list.md Customers may restrict the IP addresses that can initiate traffic with their Octopus Cloud. IP address allow listing provides you with an effective tool to enforce internal access policies and add another layer of protection against some forms of cyber attack. When activated, only traffic from the IPv4 address ranges you configure, or from sources required by Octopus Deploy, will be allowed to connect to your Octopus Cloud instance. ## Configuration IP address allow list is configured in [Control Center](https://billing.octopus.com/). Users with `Cloud Subscription Owner` role can administer the feature from the **Configuration** menu. :::div{.hint} Changes to IP address allow list content or activation status can take up to 60 seconds to apply. ::: ### Activation To enforce traffic restrictions, your allow list must be activated. You can activate your IP address allow list by clicking the **Activate** link. IP address allow listing can only be activated when at least one IP address or range is listed. ### Deactivation You can deactivate IP address allow listing by clicking the **Deactivate** link. Deactivating the feature will not modify your IP address allow list content. ### Adding an IP address or range You can add an IPv4 address or range by clicking **Add a new IP address or range**. This will show a modal dialog which accepts a mandatory IP address or range in CIDR format, and an optional description. If the IP address or range provided already appears on your allow list, the description will be updated to this latest value, or removed if no description is provided. 
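Entries must be valid IPv4 CIDR. If you script the preparation of your allow list, Python's standard `ipaddress` module can validate entries before you add or import them. This is an illustrative pre-check, not part of Octopus; the addresses are documentation-range placeholders:

```python
import ipaddress

# Illustrative pre-check for allow list entries: each must be valid
# IPv4 CIDR (the addresses here are placeholders).
entries = ["203.0.113.10/32", "198.51.100.0/24"]

for entry in entries:
    network = ipaddress.ip_network(entry, strict=True)  # raises ValueError if malformed
    assert network.version == 4, f"{entry} is not IPv4"
    print(f"{entry} covers {network.num_addresses} address(es)")
```

With `strict=True`, an entry with host bits set (such as `198.51.100.1/24`) is rejected, which catches the most common CIDR typo before it reaches your allow list.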
### Updating or deleting IP addresses or ranges When an IP address or range has been added to the allow list, it can be updated or deleted by clicking the **Edit** or **Delete** links on the relevant row. ### CSV import You can import a CSV file of IP addresses or ranges, with optional descriptions, by selecting **Import a CSV file**. The CSV file must have a header row with two fields in this order, named: **ip_address** and **description**. If any IP address or range provided in the CSV file already appears on your allow list, the description will be updated to the value specified in the file, or removed if no description is provided. ## Dynamic workers Dynamic workers leased by your Octopus Cloud are not protected by your IP address allow list. These dynamic workers do not bypass your IP address allow list, either. We do not support adding your leased dynamic workers to the allow list. If you need to access your Octopus Cloud instance from a dynamic worker, consider provisioning your own [external worker](/docs/infrastructure/workers#external-workers-external-workers). If you're using Octopus Cloud and would like to combine an IP address allow list with a dynamic worker that initiates communication with your instance, please let us know by leaving a comment on our public [roadmap card](https://roadmap.octopus.com/c/189-higher-resourced-more-secure-dynamic-workers-for-octopus-cloud). ## Azure Private Links Customers with Azure Private Link access to their Octopus Cloud can have IP address allow list enabled with zero public IP addresses allowed by contacting [our support team](mailto:support@octopus.com). The combination of Azure Private Links and IP address allow list allows customers to achieve the highest standard of privacy available for Octopus Cloud. ## Exclusions When activated, the IP addresses or ranges specified on your allow list retain access to your Octopus Cloud. 
In addition, access is retained for the IPs and services that:

- Octopus Cloud requires for successful function
- Octopus Deploy requires to perform our maintenance
- Our Support staff use for access to your instance when needed

These API endpoints retain public access in order to function correctly:

- `/.well-known`
- `/api/serverstatus/health`
- `/api/serverstatus/hosted/external`
- `/token/v1`

Polling Tentacle access is not restricted by an activated IP address allow list.

## Troubleshooting

If you suspect an activated IP address allow list is causing access issues, consider deactivating the feature, waiting 60 seconds, then testing if the access issue is resolved. If the issue persists beyond 60 seconds, it is likely unrelated to the IP address allow list. If the issue is resolved when your allow list is deactivated, consider whether additional IP addresses are required on your allow list. If this approach has not resolved the issue, please contact [our support team](https://octopus.com/support) for further assistance.

# octopus deployment-target view

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-deployment-target-view.md

View a deployment target in Octopus Deploy

```text
Usage:
  octopus deployment-target view {<name> | <id>} [flags]

Flags:
  -w, --web   Open in web browser

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```

## Examples

:::div{.success}
**Octopus Samples instance**
Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus deployment-target view Machines-100
octopus deployment-target view 'web-server'
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# octopus environment

Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-environment.md

Manage environments in Octopus Deploy

```text
Usage:
  octopus environment [command]

Available Commands:
  create      Create an environment
  delete      Delete an environment
  help        Help about any command
  list        List environments
  tag         Override tags for an environment

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations

Use "octopus environment [command] --help" for more information about a command.
```

## Examples

:::div{.success}
**Octopus Samples instance**
Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest.
:::

```bash
octopus environment list
octopus environment ls
```

## Learn more

- [Octopus CLI](/docs/octopus-rest-api/cli)
- [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key)

# Run multiple processes on a target simultaneously

Source: https://octopus.com/docs/administration/managing-infrastructure/run-multiple-processes-on-a-target-simultaneously.md

By default, Octopus will only run one process on each [deployment target](/docs/infrastructure/deployment-targets) at a time, queuing the rest. There may be reasons you need to run multiple processes at once, and that's okay: we have a setting for that!

:::figure
![](/docs/img/administration/managing-infrastructure/images/bypass-deployment-mutex.png)
:::

`OctopusBypassDeploymentMutex` must be set as a project variable.
It allows multiple processes to run at once on the target. That said, *deployments of the same project to the same environment (and, if applicable, the same tenant)* cannot run in parallel even when using this variable.

:::div{.hint}
**Scoping `OctopusBypassDeploymentMutex`:**
Just like any other Octopus variable, it's possible to scope the `OctopusBypassDeploymentMutex` variable, for example to a specific Environment or target role. This can be useful in certain scenarios, for example where you don't want to run deployments in parallel in lower environments.
:::

## Multiple projects

If you require steps from multiple projects to run on a target in parallel, you need to add the `OctopusBypassDeploymentMutex` variable to **ALL** of your projects.

:::div{.problem}
**Caution**
When this variable is enabled, Octopus will be able to run multiple deployments simultaneously on the same machine. This can cause deployments to fail if the same file is modified more than once at the same time. If you use `OctopusBypassDeploymentMutex`, make sure that your projects will not conflict with each other on the same machine.
:::

## Max Parallelism

When enabling `OctopusBypassDeploymentMutex` there are a couple of special variables that may impact the number of parallel tasks that are run.

- `Octopus.Acquire.MaxParallelism`:
  - This variable limits the maximum number of packages that can be concurrently deployed to multiple targets.
  - By default, this is set to `10`.
- `Octopus.Action.MaxParallelism`:
  - This variable limits the maximum number of machines on which the action will concurrently execute.
  - By default, this is set to `10`.
  - **Note:** Some built-in steps have their own concurrency limit and will ignore this value if set. For example, the [health-check step](/docs/projects/built-in-step-templates/health-check).
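To make the effect of `Octopus.Action.MaxParallelism` concrete, here is a small illustrative sketch (not Octopus code) of how a cap of 10 splits a step across a set of hypothetical targets into successive waves:

```python
# Illustrative only: models how Octopus.Action.MaxParallelism caps the
# number of machines a step executes on concurrently (default 10).
targets = [f"web-{i:02d}" for i in range(1, 26)]  # 25 hypothetical targets
max_parallelism = 10

# Split the targets into waves of at most `max_parallelism` machines.
waves = [targets[i:i + max_parallelism] for i in range(0, len(targets), max_parallelism)]
for number, wave in enumerate(waves, start=1):
    print(f"wave {number}: {len(wave)} machine(s) execute concurrently")
# With 25 targets and a cap of 10, the step runs in 3 waves: 10, 10, then 5.
```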
Given five projects with the **OctopusBypassDeploymentMutex** set as follows:

- Project 1: `True`
- Project 2: `True`
- Project 3: `False`
- Project 4: `True`
- Project 5: `True`

Assuming the deployments for these projects are started in that order, the first two will run in parallel, but the third will wait until they have finished. The last two will also be blocked until *project three completes*, at which point they will both run in parallel.

## Named mutex for shared resources

If you need even more fine-grained control over a shared resource, we recommend using a [named mutex](https://docs.microsoft.com/en-us/dotnet/api/system.threading.mutex?view=net-5.0) around the process. To learn more about how you can create a named mutex around a process using PowerShell, see this [log file example](https://learn-powershell.net/2014/09/30/using-mutexes-to-write-data-to-the-same-logfile-across-processes-with-powershell/).

:::div{.success}
You can see how Octopus uses this technique with the built-in IIS step in the [open-source Calamari library](https://github.com/OctopusDeploy/Calamari/blob/master/source/Calamari/Scripts/Octopus.Features.IISWebSite_BeforePostDeploy.ps1#L144).
:::

# Retention policies

Source: https://octopus.com/docs/administration/retention-policies.md

As you deploy more often and to different environments, files and releases can build up. Octopus handles this using retention policies, which allow you to control how Octopus decides which releases, packages, and files are kept.

## What is deleted \{#what-is-deleted}

There are a number of different types of retention policies that run: those on the Octopus Server, those on the Tentacle, and those in the built-in package repository.

### Releases \{#releases-whats-deleted}

The Octopus Server settings delete **releases** from the database. This is a data deletion. It also cleans up any **artifacts**, **deployments**, **tasks**, and **logs** attached to the release.
No packages from the internal package repository will be deleted as part of this policy, but they may be deleted by a corresponding repository retention policy. The retention policy for releases can be [configured in the lifecycle page](/docs/administration/retention-policies/retention-policies-lifecycle).

#### Releases included on a dashboard

One important thing to note about the release retention policy is that any releases displayed on either the main dashboard or a [project dashboard](/docs/projects/project-dashboard) are **never deleted**, even if they match a retention policy rule. These releases are assumed to be working releases that may still be promoted (even if their dates fall well outside the retention policy). This can be helpful as it means you don't have to worry about a recent release in the Staging environment being deleted before it can be promoted to Production.

:::div{.hint}
If you see a release that isn't being cleaned up, check the dashboards to see if it's being displayed.
:::

#### Rollbacks

Octopus will never remove the latest release, or the release previous to the latest, in any lifecycle phase. This allows you to deploy the previous release in case you need to roll back for any reason. Learn more about how [retention policies work with lifecycle phases](/docs/administration/retention-policies/retention-policies-lifecycle#retention-policies-and-lifecycle-phases).

### Tentacle files \{#targets-whats-deleted}

:::div{.hint}
We talk about Tentacles here, but the same process and logic also applies to [SSH Targets](/docs/infrastructure/deployment-targets/linux/ssh-target).
:::

#### Deployed files

Retention policies are applied to target Tentacle machines via the Retention Policy set in the Lifecycle. They clean up the files (and folders) that are deployed, e.g., the contents of a package extracted to a folder. They are run as the last step in the deployment.
Retention Policies are applied to Environment/Project/Tenant/Step/Machine combinations. For example, the last three releases to the machine for the given step of a project deployed to an environment/tenant will be kept. Workers also clean up any files and folders older than 90 days.

Note that if you use the [Custom Installation Directory](/docs/projects/steps/configuration-features/custom-installation-directory) feature, we will never delete from that directory during retention policies, as it's assumed this directory has a working release in it. The directory can be purged during deployment in the project step settings.

#### Packages

Packages that are transferred during the deployment are managed based on the quantity of packages and versions to keep. By default, Octopus keeps all packages, with a maximum of 2 versions of each. These options can be configured for each machine under the Machine Policy.

To configure the quantity to keep, create a custom [Machine Policy](/docs/infrastructure/deployment-targets/machine-policies) and set the `Package Cache` policy to `Keep a limited number`. This will allow you to specify a number of versions to keep per package in the cache. By default, this number of versions will be kept for all packages in the cache. To restrict the number of packages to keep, select `From the most recently used packages`, and enter your preferred number of packages to keep. Octopus will ensure that only the least recently used packages and versions are removed.

:::figure
![Machine policy settings for package cache retention](/docs/img/infrastructure/deployment-targets/machine-policies/machine-policies-package-cache-retention.png)
:::

### Runbook runs \{#runbook-runs-whats-deleted}

Octopus saves a new entry into the database whenever a runbook is run. Each runbook run has **logs**, **artifacts**, and a snapshot of the **variables**, which will contribute to database usage on your instance. Additionally, runbook artifacts can contribute to disk usage for your instance.
When expired runbook runs are cleaned up according to the retention policy, the attached logs, artifacts, and snapshots of the variables are also cleaned up. The retention policy for runbook runs can be configured in your runbook settings.

#### Latest runbook runs

It is worth noting that the latest run for a tenant + environment combination for each runbook is **never deleted**, even if the runbook run matches a retention policy rule. These runbook runs are assumed to reflect the current condition of the environment/tenant that you are running against, so you can always observe the current state of your environments, regardless of when the last run was.

### Built-in repository \{#built-in-repo-whats-deleted}

A retention policy can be applied to packages in the [built-in Octopus package repository](/docs/packaging-applications/package-repositories/built-in-repository). By default, the policy is set to keep all packages indefinitely. This policy is _separate_ from the [release retention policy](#releases-whats-deleted) described above and can be [configured in the built-in repository page](/docs/administration/retention-policies/retention-policies-built-in-feed).

:::figure
![](/docs/img/administration/retention-policies/images/built-in-repository.png)
:::

:::div{.hint}
When configuring the repository retention policy, it's worth noting your [release retention policy](#releases-whats-deleted) settings too. When releases are deleted as a result of your release retention policy, packages associated with those releases may become subject to cleanup by your repository policy.
:::

### Build information \{#build-information-whats-deleted}

[Build information](/docs/packaging-applications/build-servers/build-information) stored in Octopus is associated with **packages**. Octopus will decide how long to keep build information records based on the packages they are linked to:

- If the package is used by a release, it will be kept.
- If the package is present in the built-in repository, and a package retention policy has been configured, then the record will be kept according to that value. If no package retention policy has been configured, then the build information record will be kept indefinitely.
- If the package is not present in the built-in repository, it's assumed that the package belongs to an [external package repository](/docs/packaging-applications/package-repositories). The build information record will be kept for a fixed period of 100 days from when it was published to Octopus.

### Tasks \{#tasks-whats-deleted}

[Tasks](/docs/tasks) stored in Octopus are kept based on their time of completion. Octopus will retain:

- All incomplete tasks.
- The 20 most recently completed tasks per type.
- The 20,000 most recently completed tasks per type within the last 30 days.

## What isn't deleted \{#what-is-not-deleted}

Some items in Octopus are not affected by retention policies and are never deleted. One example of this is [audit logs](/docs/security/users-and-teams/auditing). Octopus actively [prevents modifying or deleting audit logs](/docs/security/users-and-teams/auditing/#modifying-and-deleting-audit-logs-is-prevented).

## When the retention policies are applied \{#when-retention-policies-applied}

Both release and built-in repository retention policies are run under a scheduled task from the Octopus Server every 4 hours. This task does not apply retention policies to Tentacles.

Tentacle retention policies are run **during a deployment**, specifically **after all package acquisition steps have completed**. So if you have a retention policy of 3 days and do not deploy to a Tentacle for 5 days, the files that are over 3 days old will not be deleted until after a deployment is run to that Tentacle. It will also only delete packages or files that are associated with the **current project** being deployed.
If it's a development server and you have multiple projects deploying there, only the active deployed project's files will be deleted. It does not have any information about other projects' retention policies tagged with the deployment.

## External feeds

Octopus does not apply any retention policies to external feeds. However, the packages that are currently in use can be retrieved from the API ([example](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/Octopus.Client/LINQPad/GetInUsePackages.linq)), and those results can then be used to remove packages from those feeds.

## Recommendations

Whether you have an existing Octopus Server or are setting it up for the first time, we have some recommendations for setting retention policies.

### Change the defaults

Octopus comes with default retention policies to keep everything, forever. If you have small packages or don't release frequently, you may never notice any adverse effects. As your usage grows, you might run into disk space or performance issues as Octopus continues to store everything.

We recommend changing the default values on the different retention policies available in Octopus. For releases, you have the choice to clean up after a specified number of releases or a specified number of days. If you're not sure what value to pick, we recommend keeping the last three releases for both releases and the extracted packages. Remember, if you have multiple lifecycles, we recommend configuring the retention policies on each lifecycle and any defined phases.

:::div{.info}
From Octopus version 2025.4, the default lifecycle retention policy for a space can be customized.
:::

For the built-in repository, even if you don't plan to use it, it's good to update the retention policy so that it's set if you start using the repository in the future. Normally we recommend a short length of time, for instance, something close to 7 days.
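If you manage lifecycles through the REST API, the retention settings are plain JSON. The sketch below builds the "keep the last three releases" recommendation; the field names (`Unit`, `QuantityToKeep`, `ShouldKeepForever`) are assumed from the public Octopus API models, so verify them against your own `/api/lifecycles` response before relying on this:

```python
import json

def retention(quantity_to_keep, unit="Items", keep_forever=False):
    # Shape of a retention period as exposed by the Octopus REST API
    # (field names assumed from the public API models; verify locally).
    return {
        "Unit": unit,                 # "Days" or "Items"
        "QuantityToKeep": quantity_to_keep,
        "ShouldKeepForever": keep_forever,
    }

# "Keep the last three releases" for both releases and extracted packages,
# matching the recommendation above.
policy = {
    "ReleaseRetentionPolicy": retention(3),
    "TentacleRetentionPolicy": retention(3),
}
print(json.dumps(policy, indent=2))
```

To apply this you would merge it into an existing lifecycle resource and `PUT` it back; the snippet only builds the payload so it can be inspected first.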
### Start with larger policies

If you have a large number of existing releases, we recommend starting with a large retention policy and adjusting it down to what you need. For example, if you have 12 months' worth of releases now, perhaps set the retention policy to keep 11 months' worth of releases. Octopus will apply these retention policies periodically. After it has cleaned up the oldest releases, you can change the policy to keep 10 months of releases, and so on. You can also apply this method with the number of releases instead of the time-based setting.

## Older versions

From 2023.1, the [audit retention functionality](/docs/security/users-and-teams/auditing/#archived-audit-events) has been rolled out. This **does not** delete audit records; it just moves them from the database to the file system.

# Step Templates and Script Modules

Source: https://octopus.com/docs/best-practices/deployments/step-templates-and-script-modules.md

[Step Templates](/docs/projects/custom-step-templates/) and [Script Modules](/docs/deployments/custom-scripts/script-modules) allow you to extend the functionality of Octopus Deploy. While they appear similar, they are designed to meet different goals.

- Step Templates are re-usable steps you can inject into your deployment or runbook process to perform a specific task. Examples include stopping IIS, deploying database migration scripts using a third-party tool such as Flyway, notifying VictorOps of a completed deployment, and more.
- Script Modules are re-usable functions you can inject into scripts run by your deployment or runbook process. Examples include a function to call the Octopus API, functions to write output to a centralized log, or a function to find an item in a list by name.

:::div{.hint}
Step templates can use script module functions, but script module functions cannot use step templates.
:::

## Step Template or Script Module

Our recommendation is to create a step template when you want to create a re-usable unit of work. For example, you want everyone to follow the same standards for deploying to NGINX. A step template allows you to inherit from the built-in deploy to NGINX step and add your custom rules on top of it.

Our recommendation is to create a script module when you need to share utility functions with the scripts in your project. Script modules are injected into every script in every step of your deployment or runbook process.

## Structure

Our recommendation is to write script modules and step templates to be self-contained with no dependencies. While they have full access to all projects, tenants, referenced variable sets, and system variables, you don't know which project the step template or script module will be used in. Use parameters instead of directly referencing any project, variable set, or system variables. If you are writing custom scripts, passing in parameters will allow you to copy those scripts to your IDE of choice, such as VS Code, and debug your scripts with few modifications.

## Logging

Our recommendation is that you can never have enough logging. Logging informs your users where they are in the script module or step template. It also helps with debugging if something isn't working as it should. [Octopus Deploy](/docs/deployments/custom-scripts/logging-messages-in-scripts) supplies built-in logging utilities you can leverage in your scripts. Using the built-in logging utilities is one of the few areas where it is okay to directly reference these functions instead of passing them in as parameters.

We also recommend leveraging the different logging levels, as Octopus treats each one differently.

- Verbose: Automatically hidden by default; useful for low-level logging messages you think will only be useful to other developers.
- Information: Shown in the task log by default. Useful for logging status messages to the user.
- Warning: Messages are highlighted in yellow in the task log. Helpful if something isn't quite right, but the script was able to recover. - Error: Messages are highlighted in red in the task log and task summary. This is for what it says on the tin, error messages. - Highlight: Messages are highlighted in blue in the task log and task summary. Use these for important messages you want to let the user know about. Octopus provides [manual interventions](/docs/projects/built-in-step-templates/manual-intervention-and-approvals/) which pause the deployment and allow people to review the progress made so far. Putting information needed for approvals in logs can make it difficult for the approvers to find. If there is information needed for approvals, such as test results or database delta scripts, the recommendation is to create an [artifact](/docs/projects/deployment-process/artifacts). ## Further reading For further reading on step templates and script modules in Octopus Deploy please see: - [Step Templates](/docs/projects/custom-step-templates) - [Built-in Step Templates](/docs/projects/built-in-step-templates) - [Community Step Templates](/docs/projects/community-step-templates) - [Script Modules](/docs/deployments/custom-scripts/script-modules) # Export a certificate to a Java KeyStore Source: https://octopus.com/docs/deployments/certificates/java-keystore-export.md The `Deploy a KeyStore to the filesystem` step can be used to take a certificate managed by Octopus and save it as a Java KeyStore on the target machine. The `Select certificate variable` field is used to define the variable that references the certificate to be deployed. The location of the new KeyStore file must be defined in the `KeyStore filename` field. This must be an absolute path, and any existing file at that location will be overwritten. The `Private key password` field defines a custom password for the new KeyStore file. 
If this field is left blank, the KeyStore will be configured with the default password of `changeit`. The `KeyStore alias` field defines a custom alias under which the certificate and private key are stored. If left blank, the default alias of `Octopus` will be used. # Output variables Source: https://octopus.com/docs/deployments/custom-scripts/output-variables.md Your scripts can emit variables that are available in subsequent deployment steps. This means you can factor your deployment into smaller, more well-defined steps that leverage the result of prior steps. It is an extremely powerful feature and you should refer to the documentation on [output variables](/docs/projects/variables/output-variables) for more information. ## Creating an output variable
      PowerShell ```powershell Set-OctopusVariable -name "AppInstanceName" -value "MyAppInstance" ```
      C# ```csharp SetVariable("AppInstanceName", "MyAppInstance"); ```
      Bash ```bash set_octopusvariable "AppInstanceName" "MyAppInstance" ```
      F# ```fsharp Octopus.setVariable "AppInstanceName" "MyAppInstance" ```
      Python3 ```python set_octopusvariable("AppInstanceName", "MyAppInstance") ```
      ## Using the variable in another step
      PowerShell ```powershell $appInstanceName = $OctopusParameters["Octopus.Action[Determine App Instance Name].Output.AppInstanceName"] ```
C# ```csharp var appInstanceName = OctopusParameters["Octopus.Action[Determine App Instance Name].Output.AppInstanceName"]; ```
      Bash ```bash appInstanceName=$(get_octopusvariable "Octopus.Action[Determine App Instance Name].Output.AppInstanceName") ```
      F# ```fsharp //throw if not found let appInstanceName1 = Octopus.findVariable "Octopus.Action[Determine App Instance Name].Output.AppInstanceName" //supply a default value to use if not found let appInstanceName2 = Octopus.findVariableOrDefault "Value if not found" "Octopus.Action[Determine App Instance Name].Output.AppInstanceName" //return an Option type let appInstanceName3 = Octopus.tryFindVariable "Octopus.Action[Determine App Instance Name].Output.AppInstanceName" ```
Python3 ```python appInstanceName = get_octopusvariable("Octopus.Action[Determine App Instance Name].Output.AppInstanceName") ```
## Service message

The following service message can be written directly (substituting the properties with the relevant values) to standard output, where it will be parsed by the server and the values processed as an output variable. Note that the properties must be supplied as base64-encoded UTF-8 strings.

```
##octopus[setVariable name='<name>' value='<value>']
```

# Raw scripting in Octopus

Source: https://octopus.com/docs/deployments/custom-scripts/raw-scripting.md

## Design intentions {#design-intentions}

Some Octopus users deploying to SSH Endpoints have had problems installing the Mono prerequisite that provides the runtime for Octopus Deploy's .NET orchestration tool [Calamari](/docs/octopus-rest-api/calamari). Although there is some momentum to package Calamari in a self-contained, cross-platform way with .NET Core, there exists a need now to be able to execute scripts directly on the server without all the added cost and complexity of uploading the latest Calamari.

:::div{.hint}
**Feature Tradeoffs**
In order to provide the ability to perform raw scripting and just execute exactly what the step requires on the remote target, script execution through Calamari is bypassed. This results in some behavioral differences compared with the normal scripting in Octopus that you would be accustomed to. The script that is provided by the user is executed "as-is" through the open SSH connection, so the actual shell will depend on what you have configured for that account and may not actually be bash. Keep this in mind when expecting certain commands to be available. The bootstrapping script that is provided by Calamari will not be available, so you will lose the ability to use helper functions like [new\_octopusartifact](/docs/projects/deployment-process/artifacts/), [set\_octopusvariable](/docs/projects/variables/output-variables/) or [get\_octopusvariable](/docs/deployments/custom-scripts).
You can still use the standard **#{MyVariable}** variable substitution syntax; however, since this is replaced on the server, environment variables from your target will not be available through Octopus variables. While still available as an option in the UI, raw scripts cannot currently be sourced from inside a package unless manually extracted & executed in conjunction with a `Transfer a Package` step.
:::

Raw scripting is great for use cases where you are unable to install and run Mono, for example when your server platform is unsupported by Mono, or when deploying to an IoT device that does not meet the hardware requirements to run Mono. By eliminating Calamari as the middle man in these deployments, you may also shave a few seconds off your deployment for each step.

## Health checks

The default health check for Linux targets depends on bash being available, and confirms that dependencies are available. In the intended scenarios for raw scripting, these constraints may not hold. To opt out of these checks, create a custom [Machine Policy](/docs/infrastructure/deployment-targets/machine-policies) and set the `Health Check Type` to `Only perform connection test`. Targets configured with this policy will be considered healthy so long as an SSH connection can be established.

:::figure
![Machine policy settings for connection test only](/docs/img/deployments/custom-scripts/images/machine-policy-connection-test-only.png)
:::

## Deploying to an SSH endpoint without Calamari (i.e., no Mono prerequisite) {#deploy-to-ssh-without-calamari}

While raw scripting does not require a Transfer a Package step, the following walks through a basic scenario of using a raw script in conjunction with the Transfer a Package step to extract a package on an SSH endpoint where Mono cannot be installed.

1. Add a [Transfer A Package](/docs/deployments/packages/transfer-package) step.
2.
In the **Transfer Path** field enter the location the package will be moved to as part of the deployment, for instance, `~/temp/uploads`. Note that this directory will be created if it does not already exist. Give the step the name *Transfer AcmeWeb* and include the relevant target tag associated with your SSH target. 3. Add a [Run A Script](/docs/deployments/custom-scripts/run-a-script-step) step and explicitly clear and extract the package to your desired location. In the below example we know that the target shell will be bash so we can use output values from the previous *Transfer AcmeWeb* step to locate the package and extract it to a directory at *~/temp/somewhere*. Note that although we have selected the *Bash* script type for this step, this is purely for helpful syntax highlighting since whatever script is provided will be executed through the open connection regardless of selected type. ```bash rm -fr ~/temp/somewhere unzip -d ~/temp/somewhere "#{Octopus.Action[Transfer AcmeWeb].Output.Package.FilePath}" ``` 4. On the Variables tab set the variable `OctopusUseRawScript` to the value `True` which instructs Octopus to perform package transfers and script execution without the aid of Calamari. This means that package transfer will not be able to use [delta compression](/docs/deployments/packages/delta-compression-for-package-transfers/) during the package acquisition phase and it will actually be _moved_ from the upload location when the transfer step runs. This is because no target-side logs are kept for this transfer and hence [retention policy](/docs/administration/retention-policies) will be unable to clean old packages. 5. Create a release and deploy the project. You should notice that unlike a typical deployment, there are no calls to upload or run Calamari and the whole thing runs a bit faster due to the reduced overhead. 
If you check your *~/.octopus* directory on the remote endpoint, you should also notice that there are no Calamari dependencies that have had to be uploaded for this deployment. ## Raw Tentacles {#raw-tentacles} Raw scripting is also supported on standard Windows-based Tentacles; however, in this case the scripts will always be executed in the context of a PowerShell session. Keep in mind that this still means that you need a fully functioning Tentacle actually running on the remote target. ## Older versions Raw scripting was added as an experimental feature in **Octopus 3.9**, accessible via a project variable, which simply opens a connection to the remote server and executes a deployment script within that session. Older versions of Octopus do not include this feature. # Deployment process as code Source: https://octopus.com/docs/deployments/patterns/deployment-process-as-code.md :::div{.hint} **Looking for Configuration as Code?** This section looks at storing your deployment process as code **without** using the [Configuration as Code](/docs/projects/version-control) feature. ::: With Octopus you can manage your deployment process as code. This means you can define your deployment process, scripts, and variables in source code. You can store this configuration in the same source control repository as your application source code, or somewhere else. This page describes the different options available in Octopus to store your deployment process as code. We recommend taking a two-phase approach when moving to **deployment process as code**: 1. Start with [scripts as code](#scripts-as-code): this method offers the best cost/reward ratio, and it is the simplest to implement and maintain over time. 2. Move towards [project as code](#project-as-code): this method is more difficult to set up but comes with the benefit of having your entire Octopus project managed as code. At Octopus we use `git`, and the rest of this page provides examples using git concepts.
The same principles apply regardless of your source control tool of choice. ## Scripts as code {#scripts-as-code} The simplest way to get started with **deployment process as code** is to manage your custom deployment **scripts as code**. When you deploy your application, Octopus can execute a script contained inside a package. You can colocate your deployment scripts with your application source code, leveraging all the benefits of source control including change tracking and branching, then package it all up for Octopus. There is a downside to stopping here: your scripts are managed as code, but your deployment process and variables are still controlled by Octopus. Depending on your situation, this trade-off might be quite acceptable. ### Move your scripts into source code You can follow this process to move your custom scripts without interrupting deployments: 1. Move your deployment scripts from your Octopus deployment process into a file in your application source code. 1. Build your application as normal, but this time with the deployment scripts packaged up alongside your application. a. At this point your deployments will continue to work using the scripts stored in Octopus. 1. Update your deployment process to use the script from your application package. Now your scripts are colocated with your application source code, all without changing your build pipeline. Learn about [custom scripts](/docs/deployments/custom-scripts/) and [executing custom scripts in packages](/docs/deployments/custom-scripts/scripts-in-packages). ### Consistency and repeatability using scripts as code When you manage your **scripts as code**, Octopus still makes sure your deployments are consistent and repeatable. Whenever you modify a script, that change flows through the whole process just like the changes to your application code.
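The packaging half of this pattern can be sketched in plain shell. Everything below (file names, package id, version, and the tar.gz format) is a hypothetical example rather than a prescribed layout:

```shell
set -eu
# Hypothetical layout: the custom deployment script lives alongside the
# application payload and is bundled into one versioned package.
mkdir -p pkg/MyApp
printf 'Write-Host "custom deployment logic here"\n' > pkg/MyApp/Deploy.ps1
printf 'application payload\n' > pkg/MyApp/app.txt

# Package id and version follow the usual <id>.<version> naming convention.
tar -czf MyApp.1.0.0.tar.gz -C pkg MyApp
```

The resulting archive would then be pushed to your package feed as usual; whether you use zip, nupkg, or tar.gz depends on your feed and build tooling.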
### Introducing changes safely using scripts as code {#scripts-as-code-change-safely} Branches in source control let you test application code changes in isolation before integrating them with other code changes. You can use the exact same approach to introduce changes to your scripts as code. This lets you make changes to your scripts without breaking deployments from your `main` (default) branch. - **Modifying an existing script**: If you modify a script in a branch, that change flows through the whole process just like the changes to your application code. When you deploy a release from that branch, your modified script will be used. When you merge your branch into the `main` branch, your modified script will be used for deployments from the `main` branch. - **Adding a new script**: Add an empty script to your `main` branch, configure Octopus to execute the script, and then author the script content on your branch. This enables you to test your new script in isolation. Merge into `main` when you are ready to integrate. - **Deleting a script**: Delete the content of your script in your branch, leaving the empty script file to be packaged. Now you can test your deployment still works with the empty script. When you are ready, you can configure Octopus to stop calling the script, and delete the script from your `main` branch. ## Project as code {#project-as-code} Another approach to **deployment process as code** is to define the configuration of your Octopus **project as code**, primarily the deployment process and variables. You can colocate your Octopus project configuration with your application source code, adding a step to your build process which pushes the configuration changes to your Octopus project. This approach is more complex since you need to change your build pipeline, and train people how to treat the code as the source of truth for these parts of your project. 
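As a rough illustration of that extra pipeline step, the sketch below assembles (but deliberately does not execute) an HTTP request that a build could issue to push a deployment process definition into Octopus. The server URL, project slug, payload file, and endpoint path are all placeholder assumptions; the `X-Octopus-ApiKey` header is the documented authentication header, but consult the HTTP API reference for the exact route:

```shell
# Dry-run sketch only: print the request a build pipeline might issue.
OCTOPUS_URL="https://my-octopus.example"   # placeholder server URL
PROJECT_SLUG="my-project"                  # placeholder project slug
# The endpoint path below is illustrative, not a guaranteed API route.
ENDPOINT="${OCTOPUS_URL}/api/projects/${PROJECT_SLUG}/deploymentprocesses"
CMD="curl -X PUT -H 'X-Octopus-ApiKey: \$OCTOPUS_API_KEY' -d @deployment-process.json ${ENDPOINT}"
echo "$CMD"
```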
### Move your project configuration to source code Follow this process to move your Octopus project configuration to code: 1. Move your [deployment scripts to code](#scripts-as-code) and decide if you want to take this next step. 1. Convert your project configuration to code: learn about [configuring Octopus using code](#configure-octopus-using-code). 1. Test your project configuration as code, targeting a dummy project so your real project continues to work uninterrupted. 1. When you are happy with your testing, update your build pipeline to target the real project. 1. Train your people to make changes safely using **project as code**. The process flow of using **project as code** looks similar to what you already have, with one additional step to push the configuration changes into Octopus. Here is a typical example: 1. Build and test application source code. 1. Package the application source code. 1. **Push configuration changes to an Octopus project, treating the code as the source of truth**. 1. Push application packages to deployment feed. 1. Create a release in Octopus. 1. Deploy the release via Octopus to your Dev or Test environment. 1. Promote the release to other environments. ### Configure Octopus using code {#configure-octopus-using-code} Octopus has a comprehensive HTTP API and .NET SDK you can use to automate **everything** in Octopus. If you can do something through the user interface, you can automate it with code. You can create and update projects, variables, deployment processes, and more. A downside of this approach is how much work is involved: you need to write code that detects drift and applies deltas, or is idempotent. Today, this is our only fully-supported solution to define your Octopus configuration as code. There is an [open source Terraform provider for Octopus](https://github.com/OctopusDeploy/terraform-provider-octopusdeploy), which is built on top of the Octopus HTTP API.
The Terraform provider for Octopus detects drift and applies deltas. We are using this Terraform provider ourselves for [Octopus Cloud](https://octopus.com/cloud) (our SaaS product), and we are actively contributing to the provider. It doesn't cover 100% of all Octopus features yet, and the structure of the Terraform resources is subject to change. We will be building first-class support for this into the Octopus ecosystem in the future. If you want to do **Octopus configuration as code** today, we recommend using our .NET SDK, which will always be supported. The Terraform provider will be a simpler, more declarative approach that we will support in the future. ### Consistency and repeatability using project as code When managing your Octopus configuration as code, Octopus still makes sure your deployments are consistent and repeatable. Whenever you push a configuration change to your Octopus project via code, it's just like people using the user interface or API to make changes. When you create a release, Octopus takes a snapshot of the deployment process, variables, and packages, making every deployment of that release consistent and repeatable, regardless of whether the project was configured by a person or by code. This means you can push configuration changes to Octopus as code, and get the same consistent and repeatable experience you expect for your deployments. ### Introducing changes safely using project as code This is where things get a bit more difficult compared to the [scripts as code](#scripts-as-code) pattern. You can approach this problem in several ways, where one approach will suit your scenario better than others: 1. **Space-per-branch**: In this approach you push your configuration changes from each branch in your source control repository into a unique space in Octopus. A space is a sandbox containing all the things you need for your application or set of applications.
By using a space-per-branch you dynamically create little "parallel universes" in the same Octopus Server, safely isolated from each other, then tear them down when you are done. This approach is suitable in most situations since it offers the best isolation and most flexibility. It is especially appropriate for service-oriented architectures and microservice architectures where you may have many projects interacting with each other. 2. **Blessed branch**: In this approach you only push configuration changes from a specific branch, like `main` in most git repositories. This approach is suitable in some simpler scenarios where you don't expect to change your deployment process very often. #### Space-per-branch approach In this approach you will be pushing configuration changes from **any branch** into a unique space in Octopus, using a naming convention. Here is one potential naming convention you could use: - `main` targets the space called `MySpace-main` (or just `MySpace` if you don't like the suffix). - `feature-rocksville` targets the space called `MySpace-rocksville`. - `feature-planetside` targets the space called `MySpace-planetside`. The general process should look something like this, tailored to your situation: 1. Create a new branch to isolate your changes, named something like `feature-rocksville`. 1. Make the changes on your branch. 1. In your build pipeline, push the changes to the correct space like `MySpace-rocksville`, creating the space if it doesn't exist already. 1. Test your changes in your space. 1. When you are happy the changes are safe to share, merge the `feature-rocksville` branch into `main` allowing those changes to flow through to the `MySpace-main` space. 1. Clean up by deleting the `feature-rocksville` space. #### Blessed branch approach In this approach you will be pushing configuration changes from **one specific branch** into Octopus. 
Using the example of git, you should only push changes from the `main` (default) branch into Octopus, and use [Channels](/docs/releases/channels) to safely introduce changes to your process and variables. The general process should look something like this, tailored to your situation: 1. Create a channel to match your branch, with package version rules to enforce the integrity of the release process. _You can create channels manually, or automatically as part of your build pipeline if that suits._ 1. Make the changes on your branch, making sure to scope your changes to your channel to avoid interrupting deployments from other branches. _You can scope each step, action and variable value to a specific channel for isolation._ 1. Get a peer to review your configuration change on your branch. 1. Merge your configuration change to the `main` branch so your changes are actually pushed into Octopus. 1. Test your change by deploying releases through your channel. a. If you are unhappy with your change, fix it in your branch, get a peer to review your new commits, merge the new commits to `main`, and repeat your testing. 1. When you are happy the changes are safe to share: a. Remove the channel scoping from your changes in your branch. b. Get a peer to review this final change. c. Merge your changes to the `main` branch. d. Test your brand new deployment through the `main` branch and your main channel. e. If something goes wrong, you can revert the single commit, isolating your changes back to your channel. :::div{.hint} If you have thoughts about how deployment as code could better support your organization, we would like to [talk with you about your dream scenario](https://octopus.com/support)! ::: ## Learn more - [Deployment patterns blog posts](https://octopus.com/blog/tag/deployment-patterns/1). # Windows Source: https://octopus.com/docs/deployments/windows.md Windows is a popular operating system to deploy your software to.
Out-of-the-box, Octopus provides built-in steps to deploy to Windows including Windows Services, IIS and more. # Add deployment targets Source: https://octopus.com/docs/getting-started/first-deployment/add-deployment-targets.md With Octopus, you can deploy software to: - Kubernetes - Windows - Linux - Azure - AWS - Offline package drop - Cloud region Regardless of where you’re deploying your software, these machines and services are known as your deployment targets. ## Add deployment target 1. From the left Deploy menu, click **Deployment Targets**. :::figure ![Deployment Targets page](/docs/img/getting-started/first-deployment/images/deployment-targets-page.png) ::: 2. Click **Add Deployment Target**. 3. Use the category tabs to filter by deployment target type. 4. Click **Add** on the deployment target you want to add. ### Name Give your deployment target a descriptive name, for example, `Hello world tutorial target`. ### Environments We’ll scope this deployment target to one environment. Later, you can add additional targets and scope them to your other environments. 5. Select **Development** from the **Environments** dropdown list. ### Target Tags Octopus uses target tags to select which deployment target a project should deploy to. Later, you’ll add the same target tag to your deployment process. You can deploy to multiple targets simply by adding this tag. 6. Add a new target tag by typing it into the field. For this example, we’ll use `tutorial-target`. :::figure ![Deployment target form](/docs/img/getting-started/first-deployment/images/deployment-target-form.png) ::: Fill in the other sections of the deployment target form. 
If you need guidance, please refer to the relevant documentation: - [Kubernetes](/docs/kubernetes/targets) - [Windows](/docs/infrastructure/deployment-targets/tentacle/windows) - [Linux](/docs/infrastructure/deployment-targets/linux) - [Azure](/docs/infrastructure/deployment-targets/azure) - [AWS](/docs/infrastructure/deployment-targets/amazon-ecs-cluster-target) - [Offline package drop](/docs/infrastructure/deployment-targets/offline-package-drop) - [Cloud region](/docs/infrastructure/deployment-targets/cloud-regions) Next, let’s [deploy a sample package](/docs/getting-started/first-deployment/deploy-a-package) to your deployment target. ### All guides in this tutorial series 1. [First deployment](/docs/getting-started/first-deployment) 2. [Define and use variables](/docs/getting-started/first-deployment/define-and-use-variables) 3. [Approvals with manual interventions](/docs/getting-started/first-deployment/approvals-with-manual-interventions) 4. Add deployment targets (this page) 5. [Deploy a sample package](/docs/getting-started/first-deployment/deploy-a-package) ### Further reading for deployment targets - [Deployment Targets](/docs/infrastructure/deployment-targets) - [Targets Tags](/docs/infrastructure/deployment-targets/target-tags) - [Deployments](/docs/deployments) - [Patterns and Practices](/docs/deployments/patterns) # Add runbook deployment targets Source: https://octopus.com/docs/getting-started/first-runbook-run/add-runbook-deployment-targets.md [Getting Started - Deployment Targets](https://www.youtube.com/watch?v=CBws8yDaN4w) With Octopus Deploy, you can deploy software to Windows servers, Linux servers, Microsoft Azure, AWS, Kubernetes clusters, cloud regions, or an offline package drop. Regardless of where you're deploying your software, these machines and services are known as your deployment targets. 
Octopus organizes your deployment targets (the VMs, servers, and services where you deploy your software) into [environments](/docs/infrastructure/environments). 1. Navigate to **Infrastructure ➜ Deployment Targets** and click **Add Deployment Target**. 1. Select the type of deployment target you are adding. 1. Select the type of connection your deployment target will make, and follow the on-screen instructions. If you run into any issues, refer to the documentation for the type of deployment target you are configuring: - [Kubernetes](/docs/kubernetes/targets) - [Windows](/docs/infrastructure/deployment-targets/tentacle/windows) - [Linux](/docs/infrastructure/deployment-targets/linux) - [Azure](/docs/infrastructure/deployment-targets/azure) - [AWS](/docs/infrastructure/deployment-targets/amazon-ecs-cluster-target) - [Offline package drop](/docs/infrastructure/deployment-targets/offline-package-drop) - [Cloud region](/docs/infrastructure/deployment-targets/cloud-regions) As you configure your deployment targets, select the environment they will belong to, and assign the target tag(s). Target tags ensure you deploy the right software to the correct deployment targets. Typical target tags include: - web-server - app-server - db-server [Getting Started - Machine Roles](https://www.youtube.com/watch?v=AU8TBEOI-0M) 1. Enter *dev-server-01* in the **Display Name** field. 2. In **Environments** select *Development*. 3. In **Target Tags**, enter **hello-world** as the target tag. 4. Click on the **Save** button. ![Deployment target with roles](/docs/img/shared-content/concepts/images/target-with-roles.png) The next step of this guide will [update the runbook process](/docs/getting-started/first-runbook-run/define-the-runbook-process-for-targets) to run a script on those newly created runbook targets.
**Further Reading** For further reading on deployment targets in Octopus Deploy please see: - [Deployment Targets](/docs/infrastructure/deployment-targets) - [Runbook Documentation](/docs/runbooks) - [Runbook Examples](/docs/runbooks/runbook-examples) # OpenID Connect Source: https://octopus.com/docs/infrastructure/accounts/openid-connect.md ## Configuration :::div{.info} If you are using Octopus Cloud, you will not need to do anything to expose the instance to the public internet; this is already configured for you. ::: To use federated credentials, your Octopus instance will need to have two anonymous URLs exposed to the public internet. - `https://server-host/.well-known/openid-configuration` - `https://server-host/.well-known/jwks` These must be exposed with anonymous access on HTTPS. Without this, the OpenID Connect protocol will not be able to complete the authentication flow. The hostname of the URL that these two endpoints are available on must either be configured under **Configuration->Nodes->Server Uri** or set as the first ListenPrefix in the server configuration. ## Authenticating using OpenID Connect with third party services and tools If you have a third-party service or tool that supports OpenID Connect, you can add any OIDC account variable into your project's variable set and use the `[account name].OpenIdConnect.Jwt` variable to get access to the request token that can be used for authenticating. The JWT for the account on a step or the target is available in the `Octopus.OpenIdConnect.Jwt` variable. ## Subject Keys {#subject-keys} When using OpenID Connect to authenticate with external services, the Subject claim can have its contents customized. This allows you to grant resource access at a fine- or coarse-grained level in your Cloud host, depending on your requirements.
The subject can be modified for the following uses within Octopus: - [Deployments and Runbooks](#deployments-and-runbooks) - [Health Checks](#health-checks) - [Account Test](#account-test) - [Feeds](#feeds) ### Subject key parts - Only the requested keys for a **Subject** claim will be included in the generated **Subject** claim. - Any Octopus resource types included in the **Subject** claim will use the slug value for the Octopus resource. The slug value is generated from the name of the Octopus resource when it was created, and it can be edited on the edit page of the resource type. - If a requested key has no value (for example, **Tenant** on an untenanted deployment, or **Runbook** on a deployment), both the key and the value are dropped from the **Subject** claim. - The **Subject** claim parts will always be in the following order: - **Space** - **Project** - **Project Group** - **Runbook** - **Tenant** - **Environment** - **Target** - **Account** - **Type** - **Feed** ## Deployments and Runbooks {#deployments-and-runbooks} The **Subject** claim for a deployment or a runbook supports the following parts: - **Space** slug - **Project** slug - **Project Group** slug - **Runbook** slug - **Tenant** slug - **Environment** slug - **Account** slug - **Type** The default keys for a deployment and runbook are **Space**, **Project**, **Tenant**, and **Environment**. For a tenanted deployment, this produces `space:[space-slug]:project:[project-slug]:tenant:[tenant-slug]:environment:[environment-slug]`. For an untenanted deployment, the **Tenant** segment is dropped, giving `space:[space-slug]:project:[project-slug]:environment:[environment-slug]`. The value for the type is either `deployment` or `runbook`. When changing the **Subject** claim format for a deployment and runbook, the runbook value will not be included (if specified) when running a deployment.
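The assembly rules above can be sketched as a few lines of shell; the slugs are examples only, and the empty tenant demonstrates the drop-a-missing-key behavior for an untenanted deployment:

```shell
# Build the default deployment Subject claim (space, project, tenant,
# environment), skipping any part whose slug is empty. All slugs are examples.
space="default"; project="deploy-web-app"; tenant=""; environment="production"
subject="space:${space}:project:${project}"
if [ -n "$tenant" ]; then
  # Tenanted deployments get a tenant segment; untenanted ones drop it.
  subject="${subject}:tenant:${tenant}"
fi
subject="${subject}:environment:${environment}"
echo "$subject"
```

For this untenanted example the claim comes out as `space:default:project:deploy-web-app:environment:production`, matching the default format described above.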
For example, in the **Default** space, you have a project called **Deploy Web App**, and a runbook called **Restart**. If you set the **Subject** claim format to `space`, `project`, `runbook` and `type`, when running a deployment the **Subject** claim will be `space:default:project:deploy-web-app:type:deployment`, and for the run of the runbook the **Subject** claim would be `space:default:project:deploy-web-app:runbook:restart:type:runbook`. This is using the default generated slug values for the space, project, and runbook. :::div{.warning} Make sure your cloud provider's trust policy matches the **Subject** your deployments actually produce (tenanted or untenanted, deployment or runbook), not just the keys you selected. ::: ## Health Checks {#health-checks} The Health Check **Subject** claim supports the **Space** slug, the **Account** slug, the **Target** slug and the **Type**. The default format for a health check is `space:[space-slug]:target:[target-slug]:account:[account-slug]`. The value for the type is `health`. ## Account Test {#account-test} The Account Test **Subject** claim supports the **Space** slug, the **Account** slug and the **Type**. The default format for an account test is `space:[space-slug]:account:[account-slug]`. ## Feeds {#feeds} The Feed **Subject** claim supports the **Space** slug and the **Feed** slug. This subject is the same across runbooks, deployments, release creation and feed searches. The default format for feeds is `space:[space-slug]:feed:[feed-slug]`. ## Context specific value claims {#context-specific-value-claims} In addition to the customizable subject claim, the JWT token will also include specific single-value claims for the deployment or runbook execution. Each of these claims will be prefixed with `https://octopus.com/claims/` and will represent all the values that can be included in the subject configuration.
```json { "aud": "api://default", "iss": "https://example.octopus.app/", "exp": 1234567890, "iat": 1234567890, "nbf": 1234567890, "jti": "abc", "https://octopus.com/claims/space": "space-slug", "https://octopus.com/claims/project": "project-slug", "https://octopus.com/claims/runbook": "runbook-slug", // only on a runbook run "https://octopus.com/claims/projectgroup": "project-group-slug", "https://octopus.com/claims/environment": "environment-slug", "https://octopus.com/claims/tenant": "tenant-slug", "https://octopus.com/claims/type": "deployment", // or runbook for a runbook run "https://octopus.com/claims/account": "account-slug", "sub": "space:[space-slug]:project:[project-slug]:environment:[environment-slug]" // tenant segment dropped because this example is untenanted } ``` :::div{.hint} These namespaced claims are only available in **Octopus 2026.1**. ::: # Offline package drop Source: https://octopus.com/docs/infrastructure/deployment-targets/offline-package-drop.md The offline package drop deployment target makes it possible for Octopus to bundle all the files needed to perform a deployment to a deployment target, even when a direct connection to the deployment target isn't always possible, for instance, if a security policy, compliance control, or network topology makes a direct connection impossible. You can treat the offline package drop just like any other target, but instead of deploying the application, Octopus will bundle up all the files needed to perform the deployment on the *actual* target server. :::div{.info} Offline package drop currently only supports Windows operating systems as the target machine. ::: ## Configuring the target {#target-configuration} Offline package drop is available as a deployment target.
:::figure ![](/docs/img/infrastructure/deployment-targets/images/adding-new-offline-package-drop-target.png) ::: ![](/docs/img/infrastructure/deployment-targets/images/create-new-offline-package-drop-target-part2.png) ### Destination The executable bundle created when deploying to an offline package drop target can be persisted in one of two modes: #### Artifact {#OfflinePackageDrop-Artifact} The bundle can be zipped and attached as an [Octopus Artifact](/docs/projects/deployment-process/artifacts) to the deployment. It can then be downloaded when required. :::div{.hint} Octopus Cloud instances will almost certainly want to use _Artifact_ as the destination. ::: #### Drop folder {#drop-folder} The bundle can alternatively be configured to be written directly to a file-system path. Configure the drop folder path field with the [UNC path](http://en.wikipedia.org/wiki/Path_%28computing%29#Uniform_Naming_Convention) to the directory where you wish your offline packages to be located. ### Sensitive-variables encryption password {#sensitive-variables-encryption-password} As a security measure, any sensitive variables are written to a separate file which is then encrypted. To perform the encryption/decryption, a password is required. If your project does not contain any sensitive variables, this field may be left unset. If a project is deployed to an offline package drop target which does not have an encryption password set, the deployment will fail with an indicative error. Please ensure you store your encryption password in a secure location, as you will require it when executing the batch file to perform the deployment on the target server. ### Applications directory {#applications-directory} The applications directory is the directory your packages will be extracted to, and is the location applications will execute from by default (if no custom installation location is set). On a regular Tentacle, this is set to `C:\Applications` by default.
### Octopus working directory {#octopus-working-directory} The Octopus working directory is a location where some supporting files (e.g. the deployment journal XML file) are stored. ## Building the offline package {#build-offline-package} When Octopus deploys to an offline package drop target it doesn't actually execute the deployment, but will create a folder structure complete with Packages, Scripts, Variable files, Calamari and a batch file to execute the deployment on the actual target server. ### Naming conventions #### Artifact destination When using _Artifact_ for the destination, the zip file will be named ``` {{Project Name}}.{{Environment Name}}.{{Offline Drop Target Name}}.{{Release Number}}.zip ``` or if it is a tenanted deployment then ``` {{Project Name}}.{{Environment Name}}.{{Tenant Name}}.{{Offline Drop Target Name}}.{{Release Number}}.zip ``` For example ``` OctoFX.Production.PWebOffline01.3.3.10827.zip ``` The directory structure inside the zip file will resemble: ``` | My Offline Drop Target.OctoFX.Deployments-2.cmd | My Offline Drop Target.OctoFX.Deployments-2.ps1 | +---Calamari | | Calamari.exe | | ... | +---Packages | OctoFX.TradingWebsite.3.0.298_B47863CDE8E3F24E95873F4B59FE990E.nupkg | +---Scripts | Remove from Load Balancer.ps1 | Return to load balancer.ps1 | \---Variables My Offline Drop Target.OctoFX.Remove from Load Balancer.variables.json My Offline Drop Target.OctoFX.Return to load balancer.variables.json My Offline Drop Target.OctoFX.Trading Website.variables.json ``` #### Drop folder destination An example of the directory structure which will be created when deploying to an offline package drop target configured with a Drop Folder destination is shown below. In this example, the Drop Folder was configured as `\\my-share\octopus-drops`. 
``` \\my-share \---octopus-drops \---Development \---OctoFX \---3.0.298 | My Offline Drop Target.OctoFX.Deployments-2.cmd | My Offline Drop Target.OctoFX.Deployments-2.ps1 | +---Calamari | | Calamari.exe | | ... | +---Packages | OctoFX.TradingWebsite.3.0.298_B47863CDE8E3F24E95873F4B59FE990E.nupkg | +---Scripts | Remove from Load Balancer.ps1 | Return to load balancer.ps1 | \---Variables My Offline Drop Target.OctoFX.Remove from Load Balancer.variables.json My Offline Drop Target.OctoFX.Return to load balancer.variables.json My Offline Drop Target.OctoFX.Trading Website.variables.json ``` The offline package drop will be built and copied into a folder named by this convention: ``` {{YourConfiguredDropFolderPath}}\{{Environment}}\{{ProjectName}}\{{Release}} ``` For example: ``` \\my-share\octopus-drops\Production\Acme.Web\0.1 ``` The batch file to execute the deployment will be named with this convention: ``` {{MachineName}}.{{ProjectName}}.{{DeploymentId}}.cmd ``` For example: `AcmeProductionDrop.Acme.Web.Deployments-1.cmd` :::div{.success} **Using Sensitive Variables?** Usually the reason you need to use offline package drop is for some kind of security policy or compliance control. If you indicate any Variables as Sensitive they will be encrypted into a separate variable file so they are protected during transport. When you execute the deployment you will be prompted for the [sensitive-variables password](#sensitive-variables-encryption-password) that will be used to decrypt the sensitive values so they can be used as part of the deployment. ::: ## Deploying the offline package drop {#deploy-offline-package} :::div{.warning} **PowerShell 7.3 breaking change** **PSNativeCommandArgumentPassing** When this experimental feature is enabled PowerShell uses the `ArgumentList` property of the `StartProcessInfo` object rather than our current mechanism of reconstructing a string when invoking a native executable. 
The new behavior is a **breaking change** from current behavior. This breaks the `ps1` script we create as part of the offline drop package. Setting `$PSNativeCommandArgumentPassing` to `Legacy` reverts to the historic behavior and allows our `ps1` script to work again. To learn more, please see the Microsoft [documentation](https://learn.microsoft.com/en-us/powershell/scripting/learn/experimental-features?view=powershell-7.3#psnativecommandargumentpassing). ::: To deploy the offline package drop, copy the entire folder for that release to the target server and execute the batch file. This will execute the deployment on the target server just like Tentacle would. # Configure and apply a Kubernetes Ingress Source: https://octopus.com/docs/kubernetes/steps/kubernetes-ingress.md [Ingress resources](https://oc.to/KubernetesIngressResource) provide a way to direct HTTP traffic to service resources based on the requested host and path. ## Ingress name Each Ingress resource must have a unique name, defined in the `Ingress name` field. ## Ingress host rules Ingress resources configure routes based on the host that the request was sent to. New hosts can be added by clicking the `Add Host Rule` button. The `Host` field defines the host the request was sent to. This field is optional; if left blank, it will match all hosts. The `Add Path` button adds a new mapping between a request path and the Service resource port. The `Path` field is the path of the request to match. It must start with a `/`. The `Service Port` field is the port from the associated Service resource that the traffic will be sent to. The `Service Name` field is the name of the associated service to direct the traffic to. ## Ingress annotations Ingress resources only provide configuration. An Ingress Controller resource uses the Ingress configuration to direct network traffic within the Kubernetes cluster. There are many Ingress Controller resources available.
[NGINX](https://oc.to/NginxIngressController) is a popular option that is used by the [Azure AKS service](https://oc.to/AzureIngressController). Google Cloud provides its [own Ingress Controller resource](https://oc.to/GoogleCloudIngressController). A [third party Ingress Controller resource](https://oc.to/AwsIngressController) is available for AWS, making use of the ALB service. The diagram below shows a typical configuration with Ingress and Ingress Controller resources. :::figure ![Ingress](/docs/deployments/kubernetes/ingress.svg) ::: :::div{.hint} There is no standard behavior for the creation of load balancers when configuring Ingress Controller resources. For example, the Google Cloud Ingress Controller will create a new load balancer for every Ingress resource. The [documentation](https://oc.to/GoogleCloudIngressFanOut) suggests creating a single Ingress resource to achieve a fan-out pattern that shares a single load balancer. On the other hand, the [NGINX Ingress Controller resource installation procedure](https://oc.to/NginxIngressControllerDocs) creates a single LoadBalancer Service resource that is shared by default. ::: Each of these different implementations is configured through the Ingress resource annotations. Annotations are key/value pairs, and the values assigned to them depend on the Ingress Controller resource that is being configured. The list below links to the documentation that describes the supported annotations. * [NGINX](https://oc.to/NginxIngressControllerAnnotations) * [Google Cloud](https://oc.to/GoogleCloudIngressControllerGithub) * [AWS](https://oc.to/AwsAlbAnnotations) A new annotation is defined by clicking the `Add Annotation` button. The `Name` field will provide suggested annotation names, but this list of suggestions is not exhaustive, and any name can be added. The `Value` field defines the annotation value. :::div{.hint} Annotation values are always considered to be strings.
See this [GitHub issue](https://oc.to/KubernetesAnnotationStringsIssue) for more information. ::: The `Service Name` defines the name of the Service resource that this Ingress will send traffic to. ## Default rule When there are no matching ingress rules, traffic can be sent to the service configured as the default rule. The `Port` field defines the service port that traffic will be sent to, and the `Service name` defines the name of the Service resource to send traffic to. ## Ingress labels [Labels](https://oc.to/KubernetesLabels) are optional name/value pairs that are assigned to the Ingress resource. ## Learn more - [Kubernetes blog posts](https://octopus.com/blog/tag/kubernetes/1) :::div{.hint} **Step updates** **2024.1:** - `Deploy Kubernetes ingress resource` was renamed to `Configure and apply a Kubernetes Ingress`. ::: # Uptime SLO Source: https://octopus.com/docs/octopus-cloud/uptime-slo.md Each Octopus Cloud customer has their own Octopus Server delivered as a highly available, scalable, secure SaaS application hosted for you. Octopus Deploy manages maintenance and resource provisioning for these hosted servers, letting our customers focus on happy deployments. Octopus Cloud's monthly uptime SLO is 99.99%. We calculate uptime as 100% of the month, less all unplanned downtime. Planned maintenance is a key benefit of Octopus Cloud and is scheduled in advance, so we exclude it from our uptime SLO calculation. Other than in exceptional circumstances, planned maintenance occurs during the customer’s [maintenance window](/docs/octopus-cloud/maintenance-window). In the 9 months ending February 2025, Octopus Cloud customers averaged fewer than 9 minutes of downtime per week, including all scheduled maintenance. ## Uptime Track Record This table lists Octopus Cloud's monthly uptime statistics for the last 12 months. We list our achieved uptime percentage and weekly unplanned downtime duration. We also show these data points including planned maintenance. 
Data points measured at 95th percentile of all paid subscriptions. | Month | Uptime % | Weekly unplanned downtime | Uptime % incl. planned maintenance | Weekly downtime incl. planned maintenance | | :----- | ------: | ------: | ------: | ------: | | April 2026 | 99.9857% | 91s | 99.9186% | 497s | | March 2026 | 99.9973% | 21s | 99.8632% | 833s | | February 2026 | 99.9985% | 14s | 99.8517% | 903s | | January 2026 | 99.9954% | 35s | 99.892% | 658s | | December 2025 | 100% | 0s | 99.9562% | 266s | | November 2025 | 99.9702% | 182s | 99.8925% | 651s | | October 2025 | 99.9851% | 91s | 99.8663% | 812s | | September 2025 | 99.9989% | 7s | 99.8722% | 777s | | August 2025 | 99.9989% | 7s | 99.9281% | 441s | | July 2025 | 99.9992% | 7s | 99.9207% | 483s | | June 2025 | 99.9974% | 21s | 99.9307% | 420s | | May 2025 | 100% | 0s | 99.9125% | 532s | ### How we calculate uptime We calculate uptime as 100% minus the percentage of unplanned downtime seconds out of the total seconds in a calendar month. We measure all data points at the 95th percentile of all paid subscriptions (95% of customers experienced the listed measurement *or better*). We exclude downtime that arises from planned or customer-requested maintenance from our uptime SLO calculation, but we measure and report it separately for transparency. Some Octopus Cloud customers use [dynamic workers](/docs/infrastructure/workers/dynamic-worker-pools). As the name implies, these workers are dynamically assigned to a cloud instance and are spun up and down as required by the Deployment or Runbook executed. We exclude Dynamic Workers from our calculation of uptime. **“Downtime”** means a period where the customer instance is unavailable, according to Octopus Deploy's internal and external monitoring systems. **"Weekly unplanned downtime"** is shown as seconds per week. We use the month's total unplanned downtime duration measured at the 95th percentile of all paid subscriptions to calculate a weekly average duration. 
It excludes planned and customer-requested maintenance. **"Weekly downtime incl. planned maintenance"** is shown as seconds per week. We use the month's total downtime duration measured at the 95th percentile of all paid subscriptions to calculate a weekly average duration. It includes unplanned downtime, as well as downtime arising from planned and customer-requested maintenance. # Calamari Source: https://octopus.com/docs/octopus-rest-api/calamari.md Calamari is an [open-source](https://github.com/OctopusDeploy/Calamari) console application. It supports many commands, which are responsible for performing deployment steps. For example:
```bash
Calamari deploy-package --package MyPackage.nupkg --variables Variables.json
```
Calamari has commands to support: - Deploying to Kubernetes via Helm/Kustomize/YAML. - Deploying NuGet packages. - Running scripts (PowerShell, ScriptCS, Bash, F#). - Deploying packages to cloud services (Web Apps, Functions, etc.). - Various other deployment-related activities. On each deployment, if it is not already present, the latest version of the Calamari executable is pushed to wherever it needs to be. This may be: - A Kubernetes Agent. - A Tentacle. - A Linux machine, via SSH. - A network drive, for offline package drop targets. - The Octopus Server itself, for deploying to Azure targets. Deployments proceed as follows: 1. Octopus acquires packages and generates variables files. 2. The packages and variables are pushed to the target, along with the latest version of Calamari (if it is not already present). 3. The deployment target invokes Calamari to perform each deployment step. 4. Calamari performs the deployment step. Since Calamari is open-source, you can see the actions that are performed during a deployment. For example, did you ever wonder what order conventions run in when deploying a package?
```csharp
var conventions = new List<IConvention>
{
    new ContributeEnvironmentVariablesConvention(),
    new ContributePreviousInstallationConvention(journal),
    new LogVariablesConvention(),
    new AlreadyInstalledConvention(journal),
    new ExtractPackageToApplicationDirectoryConvention(new LightweightPackageExtractor(), fileSystem, semaphore),
    new FeatureScriptConvention(DeploymentStages.BeforePreDeploy, fileSystem, embeddedResources, scriptCapability, commandLineRunner),
    new ConfiguredScriptConvention(DeploymentStages.PreDeploy, scriptCapability, fileSystem, commandLineRunner),
    new PackagedScriptConvention(DeploymentStages.PreDeploy, fileSystem, scriptCapability, commandLineRunner),
    new FeatureScriptConvention(DeploymentStages.AfterPreDeploy, fileSystem, embeddedResources, scriptCapability, commandLineRunner),
    new SubstituteInFilesConvention(fileSystem, substituter),
    new ConfigurationTransformsConvention(fileSystem, configurationTransformer),
    new ConfigurationVariablesConvention(fileSystem, replacer),
    new CopyPackageToCustomInstallationDirectoryConvention(fileSystem),
    new FeatureScriptConvention(DeploymentStages.BeforeDeploy, fileSystem, embeddedResources, scriptCapability, commandLineRunner),
    new PackagedScriptConvention(DeploymentStages.Deploy, fileSystem, scriptCapability, commandLineRunner),
    new ConfiguredScriptConvention(DeploymentStages.Deploy, scriptCapability, fileSystem, commandLineRunner),
    new FeatureScriptConvention(DeploymentStages.AfterDeploy, fileSystem, embeddedResources, scriptCapability, commandLineRunner),
    new LegacyIisWebSiteConvention(fileSystem, iis),
    new FeatureScriptConvention(DeploymentStages.BeforePostDeploy, fileSystem, embeddedResources, scriptCapability, commandLineRunner),
    new PackagedScriptConvention(DeploymentStages.PostDeploy, fileSystem, scriptCapability, commandLineRunner),
    new ConfiguredScriptConvention(DeploymentStages.PostDeploy, scriptCapability, fileSystem, commandLineRunner),
    new FeatureScriptConvention(DeploymentStages.AfterPostDeploy,
        fileSystem, embeddedResources, scriptCapability, commandLineRunner),
};
```
Calamari is published under the Apache license; you can find the source code [here](https://github.com/OctopusDeploy/Calamari). # octopus environment create Source: https://octopus.com/docs/octopus-rest-api/cli/octopus-environment-create.md Create an environment in Octopus Deploy
```text
Usage:
  octopus environment create [flags]

Aliases:
  create, new

Flags:
      --allow-dynamic-infrastructure   Allow dynamic infrastructure
  -d, --description string             Description of the environment
  -n, --name string                    Name of the environment
  -t, --tag stringArray                Tag to apply to environment, must use canonical name: /
      --use-guided-failure             Use guided failure mode by default

Global Flags:
  -h, --help                   Show help for a command
      --no-prompt              Disable prompting in interactive mode
  -f, --output-format string   Specify the output format for a command ("json", "table", or "basic") (default "table")
  -s, --space string           Specify the space for operations
```
## Examples :::div{.success} **Octopus Samples instance** Many of the examples we use reference the [samples instance](https://samples.octopus.app/app#/users/sign-in) of Octopus Deploy. If you'd like to explore the samples instance, you can log in as a guest. :::
```bash
octopus environment create
```
## Learn more - [Octopus CLI](/docs/octopus-rest-api/cli) - [Creating API keys](/docs/octopus-rest-api/how-to-create-an-api-key) # Deployments Source: https://octopus.com/docs/octopus-rest-api/examples/deployments.md You can use the REST API to create and manage your Octopus deployments. Typical tasks can include:
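One such task is creating a deployment of an existing release in a given environment. As a minimal, hedged sketch, the body of a `POST /api/deployments` request can be built like this (the server URL, API key, and resource IDs in the comments are placeholders, and the HTTP call itself is omitted):

```python
import json

def deployment_payload(release_id: str, environment_id: str) -> dict:
    """Build the JSON body for POST {server}/api/deployments."""
    return {
        "ReleaseId": release_id,          # e.g. "Releases-123" (placeholder ID)
        "EnvironmentId": environment_id,  # e.g. "Environments-42" (placeholder ID)
    }

# The request is authenticated with an X-Octopus-ApiKey header, e.g.:
#   POST https://your-octopus-server/api/deployments
#   X-Octopus-ApiKey: API-XXXXXXXXXXXXXXXX
print(json.dumps(deployment_payload("Releases-123", "Environments-42")))
```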
When using Octopus Deploy with TeamCity, TeamCity will usually be responsible for: - Checking for changes in source control. - Compiling the code. - Running unit tests. - Creating NuGet or Zip packages for deployment. Octopus Deploy will take those packages and push them to development, test, and production environments. The Octopus TeamCity plugin comes with these custom build runners: 1. **Octopus Deploy: Pack** Creates a NuGet or Zip formatted package. 2. **Octopus Deploy: Build Information** Adds information about the build, including work items and commit messages, which is then stored in Octopus Deploy. 3. **Octopus Deploy: Push Packages** Pushes packages to the Octopus Deploy [built-in repository](/docs/packaging-applications/package-repositories/built-in-repository/#pushing-packages-to-the-built-in-repository), optionally using the TeamCity zip feature to create packages on-the-fly. 4. **Octopus Deploy: Create Release** Creates a new release in Octopus Deploy, and optionally deploys it to an environment. 5. **Octopus Deploy: Deploy Release** Deploys an *existing* release to a new environment. 6. **Octopus Deploy: Promote Release** Promotes an *existing* release from one environment to another. The plugin is simply a wrapper for the [Octopus CLI](/docs/octopus-rest-api/octopus-cli), the Octopus command line tool for creating and deploying releases. ## Installing the Octopus TeamCity plugin The Octopus Deploy TeamCity plugin is available in the following places: - The [JetBrains Plugin Repository](https://plugins.jetbrains.com/plugin/9038-octopus-deploy-integration). - The [Octopus Downloads page](https://octopus.com/downloads). - In TeamCity by navigating to **Administration ➜ Plugins List ➜ Browse plugin repository** and searching for **Octopus Deploy Integration**. The TeamCity documentation has instructions and options for [installing plugins](https://www.jetbrains.com/help/teamcity/installing-additional-plugins.html).
## Create packages with TeamCity Octopus supports multiple [package formats](/docs/packaging-applications/#supported-formats) for deploying your software. TeamCity can be configured to monitor your source control and package your applications when changes are made. You configure TeamCity to package your applications by creating a [build configuration](https://www.jetbrains.com/help/teamcity/creating-and-editing-build-configurations.html), and adding a step with the runner type **Octopus Deploy: Pack**. 1. Give the step a name. 2. Enter the [package ID](/docs/packaging-applications/#package-id). 3. Select the type of **package format** you want to create: NuGet (default) or Zip. 4. Enter the **package version**. The package version cannot be a single number (learn about [version numbers in Octopus](/docs/packaging-applications/#version-numbers)). Make sure this evaluates to a multipart number, for instance, **1.1.3**, or **1.0.%build.counter%** to include the build counter. 5. Enter the **source path**. 6. Enter the **output path**. With these options selected, your packages will automatically be created using the version number of the current build. OctoPack will ensure these packages appear in the artifacts tab of TeamCity: ![](/docs/img/packaging-applications/build-servers/images/3278194.png) ## Using Octopus as a package repository \{#TeamCity-PushPackagesToOctopusUsingOctopusAsAPackageRepository} Octopus can be used as a [NuGet package repository](/docs/packaging-applications/package-repositories/built-in-repository), or it can be configured to use an external feed (such as retrieving them from TeamCity). To push packages to Octopus during the OctoPack phase, enter the NuGet endpoint URL into the **Publish packages to http** field, and [an API key](/docs/octopus-rest-api/how-to-create-an-api-key) in the **Publish API Key** field. OctoPack will then push the packages when the solution is built.
You'll find the URL to your repository on the **Deploy ➜ Manage ➜ Packages** tab in Octopus. Simply click the `Show examples` link to see options to upload packages including the repository URL. ## Consuming the TeamCity NuGet feed in Octopus \{#TeamCity-ConsumeNuGetFeedInOctopusConsumingTheTeamCityNuGetFeedInOctopus} TeamCity 7 and above can act as a NuGet repository. You can enable this by navigating to **Administration ➜ NuGet Settings** and enabling the inbuilt NuGet server. Any build artifacts ending with `.nupkg` will automatically be served as NuGet packages, which Octopus can consume. ## Connect Octopus to your TeamCity Server 1. In the Octopus Web Portal navigate to **Library ➜ External Feeds**. 1. Click **ADD FEED**. 1. Leave the feed type as **NuGet Feed**. 1. Enter a name for the feed. 1. Enter the authenticated feed URL. 1. Click **SAVE**. Once added, the TeamCity feed will appear in the NuGet feed list. You can use the *Test* link to make sure that the NuGet package is available before creating your Octopus project. :::div{.success} **Tip: delayed package publishing** NuGet packages created from your build **won't appear in the TeamCity NuGet feed until after the build fully completes**. If you plan to trigger a deployment during a build, this creates a problem: the package won't be in the feed until the build is published, so you won't be able to deploy it. The solution is to configure a secondary build configuration, and use a snapshot dependency and build trigger in TeamCity to run the deployment build configuration after the first build configuration completes. ::: ## Creating and pushing packages from TeamCity to Octopus \{#TeamCity-CreateAndPushPackageToOctopusCreatingAndPushingPackagesFromTeamCityToOctopus} :::div{.hint} In version **4.38.0** of the TeamCity Plugin we have added a new build runner that can be used to package your applications as either a NuGet or Zip formatted package. 
::: :::figure ![Octopus Pack](/docs/img/packaging-applications/build-servers/images/teamcity-pack-step.png) ::: :::div{.hint} In version **3.3.1** of the TeamCity Plugin we added a new build runner that can be used to package and push your applications from TeamCity to Octopus. ::: :::figure ![Octopus Push](/docs/img/packaging-applications/build-servers/images/5275665.png) ::: ## Pushing build information from TeamCity to Octopus \{#TeamCity-pushing-build-info-from-teamcity-to-octopus} Support for pushing information (metadata) to Octopus about the build has been available in the TeamCity plugin since version **5.1.3**. - In versions **5.1.3 ➜ 5.3.0**, the build runner was called **Octopus Deploy: Metadata** - In version **5.4.0** onwards, the build runner was renamed to **Octopus Deploy: Build Information** :::div{.info} When using build information in release notes in conjunction with [built-in package repository triggers (formerly known as _Automatic Release Creation_)](https://octopus.com/docs/projects/project-triggers/built-in-package-repository-triggers) the build information **must** be pushed to Octopus **before** the packages are pushed to Octopus as the release will be created as soon as the package configured for automatic release create is pushed. ::: :::figure ![Octopus Build information](/docs/img/packaging-applications/build-servers/images/teamcity-build-information-step.png) ::: :::div{.hint} A **Branch specification** is required on the TeamCity VCS configuration to populate the `Branch` field for Build Information in Octopus Deploy. For example, `+:*` ![](/docs/img/packaging-applications/build-servers/images/teamcity-branch-specification.png) The part of the branch name matched by the asterisk (`*`) wildcard becomes the short branch name to be displayed in the TeamCity user-level interface. This is also known as the **logical branch name** and it's this value that is passed to Octopus Deploy by the TeamCity Plugin. 
For more information about configuring a branch specification, see the [TeamCity - working with feature branches](https://www.jetbrains.com/help/teamcity/working-with-feature-branches.html#Configuring+branches) documentation. ::: ## Using the plugin with Linux build agents \{#TeamCity-LinuxAgentsUsingThePluginWithLinuxBuildAgents} Traditionally the Octopus TeamCity plugin required a Windows build agent to work. As of version 4.2.1 it will run on Linux build agents if they meet either of the following requirements: 1. [.NET Core](https://www.microsoft.com/net/core) must be installed on the build agent and in the PATH such that the `dotnet` command runs successfully. To install, follow the linked guide to install the .NET Core SDK for your distribution. From version 4.15.10 of the plugin, .NET Core v2 is required. 2. For Octopus CLI versions prior to `7.0.0`, .NET Core must be installed as above. *Versions later than `7.0.0` are self-contained and do not require .NET Core to be installed*. The Octopus CLI tool must be installed and in the PATH such that the `octo` command runs successfully. To install, download the .tar.gz for your system from the [Octopus download page](https://octopus.com/downloads), extract it somewhere appropriate, and symlink `octo` into your PATH. Again, ensure that `octo` runs successfully. On some platforms you may need to install [additional dependencies](https://docs.microsoft.com/en-gb/dotnet/core/install/dependencies?pivots=os-linux&tabs=netcore31#linux-distribution-dependencies). ## Learn more - Generate an Octopus guide for [TeamCity and the rest of your CI/CD pipeline](https://octopus.com/docs/guides?buildServer=TeamCity). # Google Cloud Container Registry (GCR) Source: https://octopus.com/docs/packaging-applications/package-repositories/guides/container-registries/google-container-registry.md Google Cloud provides a [container registry](https://cloud.google.com/container-registry).
Google Container Registry can be configured in Octopus as a Docker Container Registry Feed. ## Adding a Google Container Registry to Octopus 1. To enable Octopus to communicate with Google Cloud registries, the [Cloud Resource Manager API](https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview) must be enabled. 2. Create a [JSON key file for a Google Cloud service account](https://cloud.google.com/container-registry/docs/advanced-authentication#json-key). 3. In Octopus go to **Deploy ➜ Manage ➜ External Feeds** and add a new feed with the following properties: - **Feed Type:** Google Container Registry - **Name:** _{{This one's up to you}}_ - **URL:** `https://[REGION]-docker.pkg.dev` - **Credentials:** Google Cloud JSON Key - **Google Cloud JSON Key:** _{{Upload your JSON keyfile}}_ :::figure ![](/docs/img/packaging-applications/package-repositories/guides/container-registries/images/google-container-registry.png) ::: ## Adding an OpenID Connect Google Container Registry to Octopus Octopus Server `2025.2` adds support for OpenID Connect to GCR feeds. To use OpenID Connect authentication, you must follow the [required minimum configuration](/docs/infrastructure/accounts/openid-connect#configuration). To set up an OpenID Connect GCR feed: 1. Follow the [Google Cloud documentation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers) to create and configure a Workload Identity Federation. 2. Set the IAM access control on your Artifact Registry following the [access control instructions](https://cloud.google.com/artifact-registry/docs/access-control#:~:text=On%20the%20Permissions%20tab%2C%20click,prevent%20misuse%20by%20unauthenticated%20users). 3.
In Octopus go to **Deploy ➜ Manage ➜ External Feeds** and add a new feed with the following properties: - **Feed Type:** Google Container Registry - **Name:** _{{This one's up to you}}_ - **URL:** `https://[REGION]-docker.pkg.dev` - **Credentials:** OpenID Connect - **Subject:** *Please read [OpenID Connect Subject Identifier](/docs/infrastructure/accounts/openid-connect#subject-keys) for how to customize the **Subject** value* - **Audience:** _{{The audience set on the workload identity provider}}_ *This should match the audience set on the Workload Identity Federation. By default, this is* `https://iam.googleapis.com/projects/{project-id}/locations/global/workloadIdentityPools/{pool-id}/providers/{provider-id}` :::div{.warning} At this time, OpenID Connect external feeds are not supported for use with Kubernetes containers. This is because the short-lived credentials they generate are not suitable for long-running workloads. ::: # .NET Configuration transforms Source: https://octopus.com/docs/projects/steps/configuration-features/configuration-transforms.md The .NET Configuration Transforms feature is one of the [configuration features](/docs/projects/steps/configuration-features/) you can enable as you define the [steps](/docs/projects/steps/) in your [deployment process](/docs/projects/deployment-process). If this feature is enabled, Tentacle will also look for any files that follow the Microsoft [web.config transformation process](https://msdn.microsoft.com/en-us/library/dd465326.aspx) – **even files that are not web.config files!**
:::figure ![.NET Configuration Transforms screenshot](/docs/img/projects/steps/configuration-features/configuration-transforms/images/configuration-transforms.png) ::: An example web.config transformation that removes the `debug` attribute is below:
```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```
:::div{.success} **Testing .NET configuration transforms** The team at [AppHarbor](https://appharbor.com/) created a useful tool to [help test .NET configuration file transformations](https://webconfigtransformationtester.apphb.com/). ::: ## Naming .NET configuration transform files This feature will run your .NET configuration transforms by looking for transform files named with the following conventions. The .NET configuration transformation files can either be named `*.Release.config`, `*.<Environment>.config`, or `*.<Tenant>.config` and will be executed in this order: 1. `*.Release.config` 2. `*.<Environment>.config` 3. `*.<Tenant>.config` For an **ASP.NET Web Application**, suppose you have the following files in your package: - `Web.config` - `Web.Release.config` - `Web.Production.config` - `Web.Test.config` When deploying to an environment named "**Production**", Octopus will execute the transforms in this order: `Web.Release.config`, followed by `Web.Production.config`. For **other applications**, like Console or Windows Service applications, suppose you have the following in your package: - `YourService.exe.config` - `YourService.exe.Release.config` - `YourService.exe.Production.config` - `YourService.exe.Test.config` When deploying to an environment named "**Test**", Octopus will execute the transforms in this order: `YourService.exe.Release.config`, followed by `YourService.exe.Test.config`. :::div{.success} You can see how this is actually done by our [open source Calamari project](https://github.com/OctopusDeploy/Calamari/blob/master/source/Calamari.Shared/Deployment/Conventions/ConfigurationTransformsConvention.cs).
::: ## Windows Service and console application .NET configuration transforms need special treatment Octopus looks for configuration transform files that match your executable's configuration file. Visual Studio has built-in support for this scenario for ASP.NET Web Applications, but it doesn't offer the same support for Windows Services and Console applications, so you will need to take care of this yourself. In Visual Studio your configuration file will be **`app.config`**, and it is renamed during the build process to match the executable, e.g., the **`app.config`** file for **`YourService.exe`** is renamed to **`YourService.exe.config`**. To make sure Octopus can run the .NET configuration transforms for your Windows Services and Console Applications: 1. Make sure you name your configuration transform files properly based on the target executable filename, e.g., `YourService.exe.Release.config`, `YourService.exe.Production.config`. 2. Set the **Copy to Output Directory** property for the configuration transform files to **Copy If Newer**. 3. Double-check the package you build for deployment actually contains the **`YourService.exe.config`** and all expected configuration transform files. :::figure ![](/docs/img/projects/steps/configuration-features/configuration-transforms/images/console-support.png) ::: ## Additional .NET configuration transforms You might have additional transforms to run outside of Debug, Environment or Release. You can define these in the Additional transforms box. If defined, these transforms will run regardless of the state of the `Automatically run .NET configuration transformation files` checkbox. :::figure ![](/docs/img/projects/steps/configuration-features/configuration-transforms/images/additional-transforms.png) ::: Octopus supports explicit, wildcard and relative path configuration transform definitions on any XML file with any file extension.
Octopus will iterate through all files in all directories (i.e., recursively) of your deployed application to find any matching files. Your target file must also exist; it will not be created by Octopus. As a general rule, you should not include the path to the files unless the transform file is in a different directory to the target, in which case it needs to be relative to the target file (as explained below in the relative path scenario). Absolute paths are supported for transform files, but not for target files. ### Explicit **Explicit .NET configuration transform**
```powershell
Transform.config => Target.config
```
The above transform definition will apply **Transform.config** to **Target.config** when the files are in the same directory. ### Relative path **Relative path .NET configuration transform**
```powershell
Path\Transform.config => Target.config
```
The above transform definition will apply **Transform.config** to **Target.config** when **Transform.config** is in the directory **Path** relative to **Target.config**. ### Wildcard Wildcards can be used to select any matching file. For example, **\*.config** will match **app.config** as well as **web.config**. They can be used anywhere in the transform filename (the left side), but only at the start of the destination filename (the right side). **Wildcard .NET configuration transform**
```powershell
*.Transform.config => *.config
```
The above transform definition will apply **foo.Transform.config** to **foo.config** and **bar.Transform.config** to **bar.config**. **Wildcard .NET configuration transform**
```powershell
*.Transform.config => Target.config
```
The above transform definition will apply **foo.Transform.config** and **bar.Transform.config** to **Target.config**.
**Wildcard .NET configuration transform**

```powershell
Transform.config => Path\*.config
```

The above transform definition will apply **Transform.config** to **foo.config** and **bar.config** when **foo.config** and **bar.config** are in the directory **Path** relative to **Transform.config**.

:::div{.success}
If you would like to control the order in which your transformations run, list them in that order in the Additional transforms feature and Octopus will run them in the order given.
:::

## Suppressing .NET configuration transformation errors

Exceptions thrown by the Microsoft .NET configuration transformation process are treated as errors by Octopus, failing the deployment. This typically involves explicit transformations for elements that don't exist in the source .config file, and surfaces with errors similar to the following:

```
Warning 14:56:06 (31:18) Argument 'debug' did not match any attributes
Error 14:56:06 Object reference not set to an instance of an object.
System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.Web.XmlTransform.XmlTransformationLogger.LogWarning(XmlNode referenceNode, String message, Object[] messageArgs)
at Microsoft.Web.XmlTransform.RemoveAttributes.Apply()
at Microsoft.Web.XmlTransform.Transform.ApplyOnAllTargetNodes()
Fatal 14:56:06 One or more errors were encountered when applying the .NET XML configuration transformation file: e:\Octopus\Applications\MyEnv\MyApp\1.0.0.1234\Web.Release.config. View the deployment log for more details, or set the special variable Octopus.Action.Package.IgnoreConfigTransformationErrors to True to ignore this error.
```

To suppress these errors and report them as informational only, use the `Octopus.Action.Package.IgnoreConfigTransformationErrors` variable defined in the [System Variables](/docs/projects/variables/system-variables) section of the documentation.
## PowerShell

If these conventions aren't enough to configure your application, you can always [use PowerShell to perform custom configuration tasks](/docs/deployments/custom-scripts). Variables will be passed to your PowerShell script, and PowerShell has [rich XML APIs](https://www.codeproject.com/Articles/61900/PowerShell-and-XML).

# Updating step templates

Source: https://octopus.com/docs/projects/updating-step-templates.md

Step templates are effectively copied to the projects using them. That means if you update a step template, you'll need to update the step in each project using it for your changes to take effect. If a project is using an out-of-date step template, you'll see a warning when editing that step in the project's deployment process. You can click the **Update** button to start using the latest version.

:::figure
![Step Templates inline merge](/docs/img/projects/images/step-templates-inline-merge.png)
:::

If you have a lot of projects using a step template, updating them one by one can be time-consuming. Fortunately, there is a way to update all of them at once. To do that, navigate to **Deploy ➜ Manage ➜ Step Templates ➜ Name of the Step Template ➜ Usage**. There you will see a list of steps that are using the step template. The steps that are not on the latest version will have an **Update...** button next to them. Steps can be updated individually, or all at once by clicking the **Update all...** button.

:::figure
![Step Template Usage](/docs/img/projects/images/step-templates-usage.png)
:::

## Merge conflicts

:::figure
![Steps that need default values](/docs/img/projects/images/step-templates-update-all-defaults.png)
:::

Most of the time the steps will be updated automatically, but there will be cases when the update process needs some input from you. When this happens, we do our best to make sure you only have to manually update the steps that need it.
All other steps will have an option to be updated automatically.

### Merge conflicts caused by new step template parameters

One case where we will need your assistance is when you add a new parameter without a default value. A new parameter is usually added for a reason, and if we updated steps without a default value we could break your deployments. This is why we ask you to provide the missing default values, or to confirm that you are OK with using empty values as defaults.

### Merge conflicts caused by unsafe changes to the step template

When you make a change to a step template that can't be applied automatically, we will ask you to update each step manually. This should not happen often, but when the type of a parameter changes, or we don't have the previous version of the step template, we need you to tell us what the correct merge result looks like.

## Manual merge

The manual merge process shows you the current values of the properties and what we think the new values should be. The new values are editable, and you can change them if they are incorrect.

![Steps that need to update manually](/docs/img/projects/images/step-templates-update-all-manual-merge.png)

# Azure account variables

Source: https://octopus.com/docs/projects/variables/azure-account-variables.md

[Azure accounts](/docs/infrastructure/accounts/azure/) can be referenced in a project through a project [variable](/docs/projects/variables) of the type **Azure account**. The [Azure PowerShell](/docs/deployments/azure/running-azure-powershell) step will allow you to bind the account to an **Azure account** variable, using the [binding syntax](/docs/projects/variables/#use-variables-in-step-definitions). By using a variable for the account, you can use different accounts across different environments or regions using [scoping](/docs/projects/variables/#use-variables-in-step-definitions).
:::figure
![Azure account variable](/docs/img/projects/variables/images/azure-account-variable.png)
:::

The **Add Variable** window is then displayed and lists all the Azure accounts. Select the account that was created in the previous step to assign it to the variable.

:::figure
![Azure account variable selection](/docs/img/projects/variables/images/azure-account-variable-selection.png)
:::

## Azure account variable properties

The Azure account variable also exposes the following properties that you can reference in a PowerShell script:

### Service Principal

| Name and description | Example |
| -------------------- | ------- |
| **`SubscriptionNumber`**<br>The Azure Subscription Id | cd21dc34-73dc-4c7d-bd86-041284e0bc45 |
| **`Client`**<br>The Azure Application Id | 57dfa713-f4c1-4b15-b21d-d14ff7941f7c |
| **`Password`**<br>The Client Secret for the Azure Application.<br>Only set if **Use a Service Principal** is selected | correct horse battery staple |
| **`OpenIdConnect.Jwt`**<br>The JWT identity token for the current task.<br>Only set if **Use OpenID Connect** is selected | *(dynamically generated token)* |
| **`TenantId`**<br>The Azure Active Directory Tenant Id | 2a681dca-3230-4e01-abcb-b1fd225c0982 |
| **`AzureEnvironment`**<br>The Azure environment | AzureCloud, AzureGermanCloud, AzureChinaCloud, AzureUSGovernment |
| **`ResourceManagementEndpointBaseUri`**<br>Only set if explicitly set in the Account settings | https://management.microsoftazure.de/ |
| **`ActiveDirectoryEndpointBaseUri`**<br>Only set if explicitly set in the Account settings | https://login.microsoftonline.de/ |
| **`Audience`**<br>Federated credentials audience.<br>Only set if **Use OpenID Connect** is selected | api://AzureADTokenExchange |

### Management certificate

| Name and description | Example |
| -------------------- | ------- |
| **`SubscriptionNumber`**<br>The Azure Subscription Id | cd21dc34-73dc-4c7d-bd86-041284e0bc45 |
| **`CertificateThumbprint`**<br>The thumbprint of the certificate | |
| **`ServiceManagementEndpointBaseUri`** | https://management.core.cloudapi.de |
| **`ServiceManagementEndpointSuffix`** | core.cloudapi.de |
| **`AzureEnvironment`**<br>The Azure environment | AzureCloud, AzureGermanCloud, AzureChinaCloud, AzureUSGovernment |

### Accessing the properties in a script

Each of the above properties can be referenced in any of the supported scripting languages, such as PowerShell and Bash.
      PowerShell ```powershell # For an account with a variable name of 'azure account' # Using $OctopusParameters Write-Host 'AzureAccount.Id=' $OctopusParameters["azure account"] Write-Host 'AzureAccount.Client=' $OctopusParameters["azure account.Client"] # Directly as a variable Write-Host 'AzureAccount.Id=' #{Azure account.Id} Write-Host 'AzureAccount.Client='#{Azure account.Client} # For an OpenId Connect account Write-Host 'AzureAccount.OpenIdConnect.Jwt='#{Azure account.OpenIdConnect.Jwt} Write-Host 'AzureAccount.Audience='#{Azure account.Audience} ```
      Bash ```bash # For an account with a variable name of 'azure account' id=$(get_octopusvariable "azure account") client=$(get_octopusvariable "azure account.Client") echo "Azure Account Id is: $id" echo "Azure Account Client is: $client" # For an OpenID Connect account jwt=$(get_octopusvariable "azure account.OpenIdConnect.Jwt") audience=$(get_octopusvariable "azure account.Audience") echo "Azure Account JWT is: $jwt" echo "Azure Account OIDC Audience is: $audience" ```
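The variable-naming convention used in the scripts above — the account variable itself resolves to the account Id, and each property is exposed as `<variable name>.<Property>` — can be sketched like this (the property names and values here are hypothetical):

```python
# Sketch: the names under which an Azure account variable's properties are
# exposed, for a variable named "azure account". Values are hypothetical.
def account_variable_names(variable_name, properties):
    names = {variable_name: properties["Id"]}  # the variable itself holds the account Id
    for prop, value in properties.items():
        if prop != "Id":
            names[f"{variable_name}.{prop}"] = value
    return names

octopus_parameters = account_variable_names(
    "azure account",
    {"Id": "Accounts-1", "Client": "57dfa713-f4c1-4b15-b21d-d14ff7941f7c"},
)
print(octopus_parameters["azure account.Client"])
# 57dfa713-f4c1-4b15-b21d-d14ff7941f7c
```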
## Learn more

- [Variable blog posts](https://octopus.com/blog/tag/variables/1)

# OCL Syntax for Config as Code

Source: https://octopus.com/docs/projects/version-control/ocl-file-format.md

## About OCL

Octopus Configuration Language (OCL) is based on a subset of HashiCorp Configuration Language (HCL). OCL files use the `.ocl` file extension, and are located in the base path defined in the project's version control settings. General information about the OCL format can be found [here](https://github.com/OctopusDeploy/Ocl), including the [EBNF notation](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form).

## Deployment Process

The Deployment Process is defined in the `deployment_process.ocl` file. It consists of one or more steps. These steps are defined as blocks in OCL.

### `step` block

Each step contains one label, which is the slug of the step. This must be unique throughout the process.

```hcl
step "<step-slug>" {
  ...
}
```

### `step.name`

The name of the step. If omitted, the name will default to the slug.

### `step.condition`

Valid values: `Success`, `Failure`, `Always`, `Variable`

Default: `Success`

### `step.package_requirement`

Valid values: `LetOctopusDecide`, `BeforePackageAcquisition`, `AfterPackageAcquisition`

Default: `LetOctopusDecide`

### `step.properties`

Properties is a dictionary of key-value pairs. Example:

```hcl
properties = {
  Octopus.Account.Id = "My Awesome Account"
  MyCustomProperty = "My Value"
  ...
}
```

### `step.start_trigger`

Valid values: `StartAfterPrevious`, `StartWithPrevious`

Default: `StartAfterPrevious`

### `step.action` block

Steps generally contain a single action. However, there are some cases where they can contain multiple actions. Actions are also defined as OCL blocks.

```hcl
action "<action-slug>" {
  ...
}
```

### `step.action.action_type`

Tells Octopus what type of action this is, e.g., `Octopus.Script`, `Octopus.Nginx`, `Octopus.AzureWebApp`, etc.
### `step.action.channels` A list of channel slugs which this action will be executed for. ```hcl channels = ["default", "pre-release"] ``` ### `step.action.condition` Valid values: `Success`, `Variable` Default: `Success` ### `step.action.environments` A list of environment slugs where this action will be executed. ```hcl environments = ["production", "staging"] ``` ### `step.action.excluded_environments` A list of environment slugs where this action will be excluded from execution. ```hcl excluded_environments = ["production", "staging"] ``` ### `step.action.is_disabled` Valid values: `True`, `False` Default: `False` ### `step.action.is_required` Valid values: `True`, `False` Default: `False` ### `step.action.notes` This field allows for any custom notes. ### `step.action.properties` Same as the Step `properties`. ### `step.action.step_package_version` ### `step.action.tenant_tags` A list of canonical tenant tag names which this action applies to. ```hcl tenant_tags = ["My Tenant/My Tag", "My Tenant/My Other Tag"] ``` ### `step.action.worker_pool` The slug of a worker pool where this action should execute. ```hcl worker_pool = "my-worker-pool" ``` ### `step.action.worker_pool_variable` The name of the variable pointing to a worker pool where this action should execute. ```hcl worker_pool_variable = "WorkerPoolVariable" ``` ### `step.action.container` block If the action should be executed in a container, the `container` block can be used to specify the container. ```hcl container "" { feed = "" } ``` ### `step.action.package` block Actions can reference packages using one or more `package` blocks. 
```hcl packages "" { acquisition_location = "Server|ExecutionTarget|NotAcquired" feed = "" package_id = "" # Optional properties block, same as above properties properties = { = "" } # Optional: Todo step_package_inputs_reference_id = "" } ``` #### Example ```hcl step "Hello world (using PowerShell)" { action { action_type = "Octopus.Script" is_required = true properties = { Octopus.Action.RunOnServer = "true" Octopus.Action.Script.ScriptBody = "Write-Host 'Hello world, using PowerShell'" Octopus.Action.Script.ScriptSource = "Inline" Octopus.Action.Script.Syntax = "PowerShell" } worker_pool = "raspberry-pi-cluster" } } step "Hello world (using Bash)" { start_trigger = "StartWithPrevious" action { action_type = "Octopus.Script" is_required = true properties = { Octopus.Action.RunOnServer = "true" Octopus.Action.Script.ScriptBody = <<-EOT echo 'Hello world, using Bash' echo 'We also support multi-line scripts!' EOT Octopus.Action.Script.ScriptSource = "Inline" Octopus.Action.Script.Syntax = "Bash" } worker_pool = "raspberry-pi-cluster" } } ``` ## Variables The Variables are defined in the `variables.ocl` file. Variables are defined as blocks in OCL. ### `variable` block ```hcl variable "
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "default"
$packageFile = "path\to\package"
$timeout = New-Object System.TimeSpan(0, 10, 0)

# Load http assembly
Add-Type -AssemblyName System.Net.Http

# Create http client handler
$httpClientHandler = New-Object System.Net.Http.HttpClientHandler
$httpClient = New-Object System.Net.Http.HttpClient $httpClientHandler
$httpClient.DefaultRequestHeaders.Add("X-Octopus-ApiKey", $octopusAPIKey)
$httpClient.Timeout = $timeout

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Open file stream
$fileStream = New-Object System.IO.FileStream($packageFile, [System.IO.FileMode]::Open)

# Create disposition object
$contentDispositionHeaderValue = New-Object System.Net.Http.Headers.ContentDispositionHeaderValue "form-data"
$contentDispositionHeaderValue.Name = "fileData"
$contentDispositionHeaderValue.FileName = [System.IO.Path]::GetFileName($packageFile)

# Create stream content
$streamContent = New-Object System.Net.Http.StreamContent $fileStream
$streamContent.Headers.ContentDisposition = $contentDispositionHeaderValue
$contentType = "multipart/form-data"
$streamContent.Headers.ContentType = New-Object System.Net.Http.Headers.MediaTypeHeaderValue $contentType
$content = New-Object System.Net.Http.MultipartFormDataContent
$content.Add($streamContent)

# Upload package
$httpClient.PostAsync("$octopusURL/api/$($space.Id)/packages/raw?replace=false", $content).Result

if ($null -ne $fileStream) {
    $fileStream.Close()
}
```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "path\to\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $packageFile = "path\to\package" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint $fileStream = $null try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Create new package resource $package = New-Object Octopus.Client.Model.PackageResource # Create file stream object $fileStream = New-Object System.IO.FileStream($packageFile, [System.IO.FileMode]::Open) # Push package $repositoryForSpace.BuiltInPackageRepository.PushPackage($packageFile, $fileStream) } catch { Write-Host $_.Exception.Message } finally { if ($null -ne $fileStream) { $fileStream.Close() } } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "default"; string packageFile = "path\\to\\file"; System.IO.FileStream fileStream = null; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Open file stream fileStream = new System.IO.FileStream(packageFile, System.IO.FileMode.Open); // Push package repositoryForSpace.BuiltInPackageRepository.PushPackage(packageFile, fileStream); } catch (Exception ex) { Console.WriteLine(ex.Message); Console.ReadLine(); return; } finally { if (fileStream != null) { fileStream.Close(); } } ```
Python3 ```python import json import requests octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} space_name = "Default" package_folder = '/folder/containing/package/' package_name = 'Package.Name.1.2.3.zip' uri = '{0}/spaces/all'.format(octopus_server_uri) response = requests.get(uri, headers=headers) response.raise_for_status() spaces = json.loads(response.content.decode('utf-8')) space = next((x for x in spaces if x['Name'] == space_name), None) with open('{0}{1}'.format(package_folder, package_name), 'rb') as package: uri = '{0}/{1}/packages/raw?replace=false'.format(octopus_server_uri, space['Id']) files = { 'fileData': (package_name, package, 'multipart/form-data', {'Content-Disposition': 'form-data'}) } response = requests.post(uri, headers=headers, files=files) response.raise_for_status() ```
Go

```go
package main

import (
	"bytes"
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"strconv"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	filePath := "path\\to\\package.X.X.X.X.zip"

	// Get the space object
	space := GetSpace(apiURL, APIKey, spaceName)

	url := apiURL.String() + "/api/" + space.ID + "/packages/raw?replace=false"
	UploadPackage(filePath, url, APIKey)
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}

func UploadPackage(filePath string, url string, APIKey string) {
	file, err := os.Open(filePath)
	if err != nil {
		log.Println(err)
	}

	body := &bytes.Buffer{}
	writer := multipart.NewWriter(body)
	part, err := writer.CreateFormFile("filedata", filepath.Base(file.Name()))
	if err != nil {
		log.Println(err)
	}
	io.Copy(part, file)
	writer.Close()

	request, err := http.NewRequest("POST", url, body)
	if err != nil {
		log.Println(err)
	}

	fileStats, err := file.Stat()
	if err != nil {
		log.Println(err)
	}
	fileSize := strconv.FormatInt(fileStats.Size(), 10)

	request.Header.Set("X-Octopus-ApiKey", APIKey)
	request.Header.Set("Upload-Offset", "0")
	request.Header.Set("Content-Length", fileSize)
	request.Header.Set("Upload-Length", fileSize)
	request.Header.Set("Content-Type", writer.FormDataContentType())

	client := &http.Client{}
	response, err := client.Do(request)
	if err != nil {
		log.Println(err)
	}
	defer response.Body.Close()
}
```
# Retrieve all feeds Source: https://octopus.com/docs/octopus-rest-api/examples/feeds/retrieve-feeds.md This script demonstrates how to programmatically retrieve all feeds from a Space in Octopus. ## Usage Provide values for: - Octopus URL - Octopus API Key - Name of the space to use ## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "default" # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName} # Get all feeds $feeds = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/feeds/all" -Headers $header) # Enumerate each feed foreach($feed in $feeds) { $feed } ```
PowerShell (Octopus.Client) ```powershell Add-Type -Path "C:\Octo\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint try { # Get space id $space = $repository.Spaces.FindByName($spaceName) Write-Host "Using Space named $($space.Name) with id $($space.Id)" # Create space specific repository $repositoryForSpace = [Octopus.Client.OctopusRepositoryExtensions]::ForSpace($repository, $space) # Get all feeds $feeds = $repositoryForSpace.Feeds.FindAll() # Enumerate each feed foreach($feed in $feeds) { $feed } } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; string spaceName = "Default"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get All feeds var feeds = repositoryForSpace.Feeds.FindAll(); foreach (var feed in feeds) { Console.WriteLine("Feed Id: {0}", feed.Id); Console.WriteLine("Feed Name: {0}", feed.Name); Console.WriteLine("Feed Type: {0}", feed.FeedType); Console.WriteLine(); } } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3 ```python import json import requests octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} space_name = 'Default' uri = '{0}/spaces/all'.format(octopus_server_uri) response = requests.get(uri, headers=headers) response.raise_for_status() spaces = json.loads(response.content.decode('utf-8')) space = next((x for x in spaces if x['Name'] == space_name), None) uri = '{0}/{1}/feeds/all'.format(octopus_server_uri, space['Id']) response = requests.get(uri, headers=headers) response.raise_for_status() feeds = json.loads(response.content.decode('utf-8')) for feed in feeds: uri = feed.get('FeedUri', feed['FeedType']) print('{0} - {1} - {2}'.format(feed['Id'], feed['Name'], uri)) ```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" // Create client object client := octopusAuth(apiURL, APIKey, "") // Get space space := GetSpace(apiURL, APIKey, spaceName) // Get space specific client client = octopusAuth(apiURL, APIKey, space.ID) // Get all feeds feeds, err := client.Feeds.GetAll() if err != nil { log.Println(err) } // Display all feeds for _, feed := range feeds { fmt.Printf("%[1]s: %[2]s - %[3]s \n", feed.GetID(), feed.GetName(), feed.GetFeedType()) } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } ```
# Synchronize packages Source: https://octopus.com/docs/octopus-rest-api/examples/feeds/synchronize-packages.md This script synchronizes packages from the [built-in feed](/docs/packaging-applications/package-repositories/built-in-repository/) between two [spaces](/docs/administration/spaces). The spaces can be on the same Octopus instance, or in different instances. ## Usage Provide values for: - `VersionSelection` - the version selection of packages to sync. Choose from: - **FileVersions** - sync versions specified in the file specified by the `Path` parameter. - **LatestVersion** - sync the latest version of packages in the built-in feed. - **AllVersions** - sync all versions of packages in the built-in feed. - `PackageListFilePath` - the path to a file containing details of the packages and versions to sync. The file input format is: ```json [ { "Id": "WebApp1", "Versions": [ "1.0.0", "1.0.1" ] }, { "Id": "WebApp2", "Versions": [ "1.0.0", "1.0.2" ] } ] ``` - `SourceUrl` - Octopus URL used as the source for package synchronization. - `SourceApiKey` - Octopus API Key used with the source Octopus server. - `SourceSpace` - Name of the space to use from the source Octopus server. - `DestinationUrl` - Octopus URL used as the destination for package synchronization. - `DestinationApiKey` - Octopus API Key used with the destination Octopus server. - `DestinationSpace` - Name of the space to use for the destination Octopus server. - `CutOffDate` - *Optional* cut-off date for a package's published date to be included in the synchronization. 
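Before running the sync, it can help to validate the package list file against the format above. A minimal sketch (the helper name and sample data are illustrative):

```python
import json

# Sketch: load and validate a package list file in the format shown above.
def load_package_list(text):
    packages = json.loads(text)
    for entry in packages:
        if "Id" not in entry or "Versions" not in entry:
            raise ValueError("each entry needs an Id and a Versions list")
    return {entry["Id"]: entry["Versions"] for entry in packages}

sample = '[{"Id": "WebApp1", "Versions": ["1.0.0", "1.0.1"]}]'
print(load_package_list(sample))
# {'WebApp1': ['1.0.0', '1.0.1']}
```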
### Example usage This example takes packages specified in the `packages.json` file, finding all versions found in the source Octopus instance which have a published date greater than `2021-02-11` and synchronizing them with the destination Octopus instance: ```powershell ./SyncPackages.ps1 ` -VersionSelection AllVersions ` -PackageListFilePath "packages.json" ` -SourceUrl https://source.octopus.app ` -SourceApiKey "API-SOURCEKEY" ` -SourceSpace "Default" ` -DestinationUrl https://destination.octopus.app ` -DestinationApiKey "API-DESTKEY" ` -DestinationSpace "Default" ` -CutOffDate (Get-Date "2021-02-11") ``` ## Script
PowerShell (REST API)

```powershell
[CmdletBinding()]
param (
    [Parameter()]
    [ValidateSet("FileVersions", "LatestVersion", "AllVersions")]
    [string] $VersionSelection = "FileVersions",
    [Parameter(Mandatory, HelpMessage="See https://octopus.com/docs/octopus-rest-api/examples/feeds/synchronize-packages#usage for example file list structure.")]
    [string] $PackageListFilePath,
    [Parameter(Mandatory)]
    [string] $SourceUrl,
    [Parameter()]
    [string] $SourceDownloadUrl = $null,
    [Parameter(Mandatory)]
    [string] $SourceApiKey,
    [Parameter()]
    [string] $SourceSpace = "Default",
    [Parameter(Mandatory)]
    [string] $DestinationUrl,
    [Parameter(Mandatory)]
    [string] $DestinationApiKey,
    [Parameter()]
    [string] $DestinationSpace = "Default",
    [Parameter(HelpMessage="Optional cut-off date for a package's published date to be included in the synchronization. Expected data-type is a Date object e.g. 2020-12-16T19:31:25.650+00:00")]
    $CutoffDate = $null
)

# The param block must be the first statement in the script, so set the error preference after it.
$ErrorActionPreference = "Stop";

function Push-Package([string] $fileName, $package) {
    Write-Information "Package $fileName does not exist in destination"
    if ($null -eq $SourceDownloadUrl) {
        $sourceUrl = $sourceOctopusURL + $package.Links.Raw
    }
    else {
        $sourceUrl = $SourceDownloadUrl + $package.Links.Raw
    }
    Write-Verbose "Downloading $fileName from $sourceUrl..."
$download = $sourceHttpClient.GetStreamAsync($sourceUrl).GetAwaiter().GetResult() $contentDispositionHeaderValue = New-Object System.Net.Http.Headers.ContentDispositionHeaderValue "form-data" $contentDispositionHeaderValue.Name = "fileData" $contentDispositionHeaderValue.FileName = $fileName $streamContent = New-Object System.Net.Http.StreamContent $download $streamContent.Headers.ContentDisposition = $contentDispositionHeaderValue $contentType = "multipart/form-data" $streamContent.Headers.ContentType = New-Object System.Net.Http.Headers.MediaTypeHeaderValue $contentType $content = New-Object System.Net.Http.MultipartFormDataContent $content.Add($streamContent) # Upload package Write-Verbose "Uploading $fileName to $destinationOctopusURL/api/$destinationSpaceId..." $upload = $destinationHttpClient.PostAsync("$destinationOctopusURL/api/$destinationSpaceId/packages/raw?replace=false", $content) while (-not $upload.AsyncWaitHandle.WaitOne(10000)) { Write-Verbose "Uploading $fileName..." } $streamContent.Dispose() } function Skip-Package([string] $filename, $package, $cutoffDate) { if ($null -eq $cutoffDate) { return $false; } if ($package.Published -lt $cutoffDate) { Write-Warning "$filename was published on $($package.Published), which is earlier than the specified cut-off date, and will be skipped" return $true; } return $false } function Get-Packages([string] $packageId, [int] $batch, [int] $skip) { $getPackagesToSyncUrl = "$sourceOctopusURL/api/$sourceSpaceId/packages?nugetPackageId=$($package.Id)&take=$batch&skip=$skip" Write-Host "Fetching packages from $getPackagesToSyncUrl" $packagesResponse = Invoke-RestMethod -Method Get -Uri "$getPackagesToSyncUrl" -Headers $sourceHeader return $packagesResponse; } function Get-PackageExists([string] $filename, $package) { Write-Host "Checking if $fileName exists in destination..." 
$checkForExistingPackageURL = "$destinationOctopusURL/api/$destinationSpaceId/packages/packages-$($package.Id).$($pkg.Version)" $statusCode = 500 try { if ($PSVersionTable.PSVersion.Major -lt 6) { $checkForExistingPackageResponse = Invoke-WebRequest -Method Get -Uri $checkForExistingPackageURL -Headers $destinationHeader -ErrorAction Stop } else { $checkForExistingPackageResponse = Invoke-WebRequest -Method Get -Uri $checkForExistingPackageURL -Headers $destinationHeader -SkipHttpErrorCheck } $statusCode = [int]$checkForExistingPackageResponse.BaseResponse.StatusCode } catch [System.Net.WebException] { $statusCode = [int]$_.Exception.Response.StatusCode } if ($statusCode -ne 404) { if ($statusCode -eq 200) { Write-Verbose "Package $fileName already exists on the destination. Skipping." return $true; } else { Write-Error "Unexpected status code $($statusCode) returned from $checkForExistingPackageURL" } } return $false; } # This script syncs packages from the built-in feed between two spaces. 
# The spaces can be on the same Octopus instance, or in different instances $ErrorActionPreference = "Stop" # ******* Variables to be specified before running ******** # Source Octopus instance details and credentials $sourceOctopusURL = $sourceUrl $sourceOctopusAPIKey = $sourceApiKey $sourceSpaceName = $sourceSpace # Destination Octopus instance details and credentials $destinationOctopusURL = $destinationUrl $destinationOctopusAPIKey = $destinationApiKey $destinationSpaceName = $destinationSpace # ***************************************************** # Get spaces $sourceHeader = @{ "X-Octopus-ApiKey" = $sourceOctopusAPIKey } $sourceSpaceId = ((Invoke-RestMethod -Method Get -Uri "$sourceOctopusURL/api/spaces/all" -Headers $sourceHeader) | Where-Object { $_.Name -eq $sourceSpaceName }).Id $destinationHeader = @{ "X-Octopus-ApiKey" = $destinationOctopusAPIKey } $destinationSpaceId = ((Invoke-RestMethod -Method Get -Uri "$destinationOctopusURL/api/spaces/all" -Headers $destinationHeader) | Where-Object { $_.Name -eq $destinationSpaceName }).Id # Create HTTP clients $httpClientTimeoutInMinutes = 60 if (-not('System.Net.Http.HttpClient' -as [type])) { try { Write-Warning "System.Net.Http.HttpClient type not found. Trying to load System.Net.Http assembly" Add-Type -AssemblyName System.Net.Http } catch { Write-Error "Can't load required System.Net.Http Assembly!" 
exit 1 } } $sourceHttpClient = New-Object System.Net.Http.HttpClient $sourceHttpClient.DefaultRequestHeaders.Add("X-Octopus-ApiKey", $sourceOctopusAPIKey) $sourceHttpClient.Timeout = New-TimeSpan -Minutes $httpClientTimeoutInMinutes $destinationHttpClient = New-Object System.Net.Http.HttpClient $destinationHttpClient.DefaultRequestHeaders.Add("X-Octopus-ApiKey", $destinationOctopusAPIKey) $destinationHttpClient.Timeout = New-TimeSpan -Minutes $httpClientTimeoutInMinutes $totalSyncedPackageCount = 0 $totalSyncedPackageSize = 0 Write-Host "Syncing packages between $sourceOctopusURL and $destinationOctopusURL" $packages = Get-Content -Path $PackageListFilePath | ConvertFrom-Json # Iterate supplied package IDs foreach ($package in $packages) { Write-Host "Syncing $($package.Id) packages (published after $cutoffDate)" $processedPackageCount = 0 $skip = 0; $batchSize = 100; if ($VersionSelection -eq 'AllVersions') { do { $packagesResponse = Get-Packages $package.Id $batchSize $skip foreach ($pkg in $packagesResponse.Items) { Write-Host "Processing $($pkg.PackageId).$($pkg.Version)" $fileName = "$($pkg.PackageId).$($pkg.Version)$($pkg.FileExtension)" if (-not (Skip-Package $fileName $pkg $CutoffDate)) { if (Get-PackageExists $fileName $package) { $processedPackageCount++ continue; } else { Push-Package $fileName $pkg $processedPackageCount++ $totalSyncedPackageCount++ $totalSyncedPackageSize += $pkg.PackageSizeBytes } } else { $processedPackageCount++ } } $skip = $skip + $packagesResponse.Items.Count } while ($packagesResponse.Items.Count -eq $batchSize) } elseif ($VersionSelection -eq 'LatestVersion') { $packagesResponse = Get-Packages $package.Id 1 0 $pkg = $packagesResponse.Items | Select-Object -First 1 if ($null -ne $pkg) { $fileName = "$($pkg.PackageId).$($pkg.Version)$($pkg.FileExtension)" if (-not (Skip-Package $fileName $pkg $CutOffDate)) { if (Get-PackageExists $fileName $package) { $processedPackageCount++ continue; } else { Push-Package $fileName $pkg 
                $processedPackageCount++
                $totalSyncedPackageCount++
                $totalSyncedPackageSize += $pkg.PackageSizeBytes
            }
        }
    }
}
elseif ($VersionSelection -eq "FileVersions") {
    $versions = $package.Versions;

    do {
        $packagesResponse = Get-Packages $package.Id $batchSize $skip

        foreach ($pkg in $packagesResponse.Items) {
            if ($versions.Contains($pkg.Version)) {
                Write-Host "Processing $($pkg.PackageId).$($pkg.Version)"
                $fileName = "$($pkg.PackageId).$($pkg.Version)$($pkg.FileExtension)"

                if (-not (Skip-Package $fileName $pkg $CutoffDate)) {
                    if (Get-PackageExists $fileName $package) {
                        $processedPackageCount++
                        continue;
                    }
                    else {
                        Push-Package $fileName $pkg
                        $processedPackageCount++
                        $totalSyncedPackageCount++
                        $totalSyncedPackageSize += $pkg.PackageSizeBytes
                    }
                }
                else {
                    $processedPackageCount++
                }
            }
        }

        $skip = $skip + $packagesResponse.Items.Count
    } while ($packagesResponse.Items.Count -eq $batchSize)
}

Write-Host "$($package.Id) sync complete. $processedPackageCount/$($packagesResponse.TotalResults)"
}

Write-Host "Sync complete. $totalSyncedPackageCount packages ($("{0:n2}" -f ($totalSyncedPackageSize/1MB)) megabytes) were copied." -ForegroundColor Green
```
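The script reads the list of packages to sync from the JSON file referenced by `$PackageListFilePath`. Based on the properties the script reads (`Id` in every selection mode, `Versions` only when `$VersionSelection` is `FileVersions`), a minimal example of that file looks like the following; the package IDs are placeholders:

```json
[
  {
    "Id": "MyCompany.WebApp",
    "Versions": [ "1.0.1", "1.0.2" ]
  },
  {
    "Id": "MyCompany.Service"
  }
]
```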
# Create a lifecycle

Source: https://octopus.com/docs/octopus-rest-api/examples/lifecycles/create-lifecycle.md

This script demonstrates how to programmatically create a lifecycle in Octopus Deploy.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the lifecycle to create

## Script
PowerShell (REST API)

```powershell
function Get-OctopusItems {
    # Define parameters
    param(
        $OctopusUri,
        $ApiKey,
        $SkipCount = 0
    )

    # Define working variables
    $items = @()
    $skipQueryString = ""
    $headers = @{ "X-Octopus-ApiKey" = "$ApiKey" }

    # Check to see if there is already a querystring
    if ($OctopusUri.Contains("?")) {
        $skipQueryString = "&skip="
    }
    else {
        $skipQueryString = "?skip="
    }
    $skipQueryString += $SkipCount

    # Get initial set
    $resultSet = Invoke-RestMethod -Uri "$($OctopusUri)$skipQueryString" -Method GET -Headers $headers

    # Check to see if it returned an item collection
    if ($resultSet.Items) {
        # Store call results
        $items += $resultSet.Items

        # Check to see if result set is bigger than page amount
        if (($resultSet.Items.Count -gt 0) -and ($resultSet.Items.Count -eq $resultSet.ItemsPerPage)) {
            # Increment skip count
            $SkipCount += $resultSet.ItemsPerPage

            # Recurse
            $items += Get-OctopusItems -OctopusUri $OctopusUri -ApiKey $ApiKey -SkipCount $SkipCount
        }
    }
    else {
        return $resultSet
    }

    # Return results
    return $items
}

$octopusURL = 'https://your-octopus-url' # Your Octopus Server address
$octopusAPIKey = 'API-YOUR-KEY'          # Get this from your profile
$spaceName = "Default"
$lifecycleName = "MyLifecycle"

# Create headers for API calls
$headers = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

# Get space
$space = (Get-OctopusItems -OctopusUri "$octopusURL/api/spaces" -ApiKey $octopusAPIKey) | Where-Object { $_.Name -eq $spaceName }

# Get lifecycles
$lifecycles = Get-OctopusItems -OctopusUri "$octopusURL/api/$($space.Id)/lifecycles" -ApiKey $octopusAPIKey

# Check to see if lifecycle already exists
if ($null -eq ($lifecycles | Where-Object { $_.Name -eq $lifecycleName })) {
    # Create payload
    $jsonPayload = @{
        Id = $null
        Name = $lifecycleName
        SpaceId = $space.Id
        Phases = @()
        ReleaseRetentionPolicy = @{
            ShouldKeepForever = $true
            QuantityToKeep = 0
            Unit = "Days"
        }
        TentacleRetentionPolicy = @{
            ShouldKeepForever = $true
            QuantityToKeep = 0
            Unit = "Days"
        }
        Links = $null
    }

    # Create new lifecycle
    Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/lifecycles" -Body ($jsonPayload | ConvertTo-Json -Depth 10) -Headers $headers
}
else {
    Write-Host "$lifecycleName already exists."
}
```
PowerShell (Octopus.Client) ```powershell # Load assembly Add-Type -Path 'path:\to\Octopus.Client.dll' $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "Default" $lifecycleName = "MyLifecycle" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey) $repository = New-Object Octopus.Client.OctopusRepository($endpoint) $client = New-Object Octopus.Client.OctopusClient($endpoint) # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Check to see if lifecycle already exists if ($null -eq $repositoryForSpace.Lifecycles.FindByName($lifecycleName)) { # Create new lifecycle $lifecycle = New-Object Octopus.Client.Model.LifecycleResource $lifecycle.Name = $lifecycleName $repositoryForSpace.Lifecycles.Create($lifecycle) } else { Write-Host "$lifecycleName already exists." } ```
C# ```csharp #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; using System; using System.Linq; // If using .net Core, be sure to add the NuGet package of System.Security.Permissions var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "Default"; var lifecycleName = "MyLifecycle"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); // Get space var space = repository.Spaces.FindByName(spaceName); var spaceRepository = client.ForSpace(space); if (null == spaceRepository.Lifecycles.FindByName(lifecycleName)) { // Create new lifecycle var lifecycle = new Octopus.Client.Model.LifecycleResource(); lifecycle.Name = lifecycleName; spaceRepository.Lifecycles.Create(lifecycle); } else { Console.Write(string.Format("{0} already exists.", lifecycleName)); } ```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # return results
    return items

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = "Default"
lifecycle_name = "MyLifecycle"

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get lifecycles
uri = '{0}/api/{1}/lifecycles'.format(octopus_server_uri, space['Id'])
lifecycles = get_octopus_resource(uri, headers)
lifecycle = next((x for x in lifecycles if x['Name'] == lifecycle_name), None)

# Check to see if lifecycle already exists
if lifecycle is None:
    # Create new lifecycle
    lifecycle = {
        'Id': None,
        'Name': lifecycle_name,
        'SpaceId': space['Id'],
        'Phases': [],
        'ReleaseRetentionPolicy': {
            'ShouldKeepForever': True,
            'QuantityToKeep': 0,
            'Unit': 'Days'
        },
        'TentacleRetentionPolicy': {
            'ShouldKeepForever': True,
            'QuantityToKeep': 0,
            'Unit': 'Days'
        },
        'Links': None
    }

    response = requests.post(uri, headers=headers, json=lifecycle)
    response.raise_for_status()
else:
    print('{0} already exists.'.format(lifecycle_name))
```
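The recursive skip-based pagination used by `get_octopus_resource` can be seen in isolation with an in-memory stand-in for the API. This is an illustrative sketch only: `fake_get` and its data are hypothetical, not part of the Octopus API.

```python
# Illustration of the skip/ItemsPerPage pagination pattern used above.
# fake_get is a hypothetical stand-in for the HTTP call to Octopus.
ITEMS_PER_PAGE = 2
DATA = [{'Id': 'Lifecycles-{0}'.format(i), 'Name': 'Lifecycle {0}'.format(i)} for i in range(5)]

def fake_get(skip_count):
    # Mimics a paged Octopus collection response: a page of Items plus ItemsPerPage
    return {
        'Items': DATA[skip_count:skip_count + ITEMS_PER_PAGE],
        'ItemsPerPage': ITEMS_PER_PAGE
    }

def get_all(skip_count=0):
    items = []
    results = fake_get(skip_count)
    items += results['Items']

    # A full page means there may be more results, so recurse with a larger skip
    if len(results['Items']) > 0 and len(results['Items']) == results['ItemsPerPage']:
        items += get_all(skip_count + results['ItemsPerPage'])

    return items

print(len(get_all()))  # all 5 items, collected across 3 pages
```

The recursion stops as soon as a page comes back smaller than `ItemsPerPage`, which is why the last (partial) page terminates the paging loop.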
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "MySpace" lifecycleName := "MyLifecycle" // Get reference to space space := GetSpace(apiURL, APIKey, spaceName) // Check to see if the lifecycle already exists if GetLifecycle(apiURL, APIKey, space, lifecycleName, 0) == nil { lifecycle := CreateLifecycle(apiURL, APIKey, space, lifecycleName) fmt.Println(lifecycle.Name + " created successfully") } else { fmt.Println(lifecycleName + " already exists.") } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetLifecycle(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, LifecycleName string, skip int) *octopusdeploy.Lifecycle { client := octopusAuth(octopusURL, APIKey, space.ID) lifecycleQuery := octopusdeploy.LifecyclesQuery { PartialName: LifecycleName, } lifecycles, err := client.Lifecycles.Get(lifecycleQuery) if err != nil { log.Println(err) } if len(lifecycles.Items) == lifecycles.ItemsPerPage { // call again lifecycle := GetLifecycle(octopusURL, APIKey, space, LifecycleName, (skip + len(lifecycles.Items))) if lifecycle != nil { return lifecycle } } else { // Loop through returned items for _, lifecycle := range lifecycles.Items { if lifecycle.Name == LifecycleName { return lifecycle } } } return nil 
} func CreateLifecycle(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, name string) *octopusdeploy.Lifecycle { client := octopusAuth(octopusURL, APIKey, space.ID) lifecycle := octopusdeploy.NewLifecycle(name) client.Lifecycles.Add(lifecycle) return lifecycle } ```
Java

```java
import com.octopus.sdk.Repository;
import com.octopus.sdk.domain.Lifecycle;
import com.octopus.sdk.domain.Space;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;
import com.octopus.sdk.model.lifecycle.LifecycleResource;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;
import java.util.Optional;

public class CreateLifecycle {

  static final String octopusServerUrl = "http://localhost:8065";
  // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();
    final Repository repo = new Repository(client);
    final Optional<Space> space = repo.spaces().getByName("TheSpaceName");
    if (!space.isPresent()) {
      System.out.println("No space named 'TheSpaceName' exists on server");
      return;
    }

    final String lifecycleName = "TheLifecycleName";
    if (space.get().lifecycles().getByName(lifecycleName).isPresent()) {
      System.out.println("Lifecycle called 'TheLifecycleName' already exists");
      return;
    }

    final Lifecycle createdLifecycle = space.get().lifecycles().create(new LifecycleResource(lifecycleName));
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    return OctopusClientFactory.createClient(connectData);
  }
}
```
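Whichever client you use, the request ultimately posted to `/api/{spaceId}/lifecycles` carries the same payload. For reference, the JSON body built by the REST API and Python examples above looks like the following; `Spaces-1` is a placeholder space ID:

```json
{
  "Id": null,
  "Name": "MyLifecycle",
  "SpaceId": "Spaces-1",
  "Phases": [],
  "ReleaseRetentionPolicy": {
    "ShouldKeepForever": true,
    "QuantityToKeep": 0,
    "Unit": "Days"
  },
  "TentacleRetentionPolicy": {
    "ShouldKeepForever": true,
    "QuantityToKeep": 0,
    "Unit": "Days"
  },
  "Links": null
}
```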
# Create a project group

Source: https://octopus.com/docs/octopus-rest-api/examples/project-groups/create-projectgroup.md

This script demonstrates how to programmatically create a project group in Octopus Deploy.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the project group to create

## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; function Get-OctopusItems { # Define parameters param( $OctopusUri, $ApiKey, $SkipCount = 0 ) # Define working variables $items = @() $skipQueryString = "" $headers = @{"X-Octopus-ApiKey"="$ApiKey"} # Check to see if there is already a querystring if ($octopusUri.Contains("?")) { $skipQueryString = "&skip=" } else { $skipQueryString = "?skip=" } $skipQueryString += $SkipCount # Get initial set $resultSet = Invoke-RestMethod -Uri "$($OctopusUri)$skipQueryString" -Method GET -Headers $headers # Check to see if it returned an item collection if ($resultSet.Items) { # Store call results $items += $resultSet.Items # Check to see if result set is bigger than page amount if (($resultSet.Items.Count -gt 0) -and ($resultSet.Items.Count -eq $resultSet.ItemsPerPage)) { # Increment skip count $SkipCount += $resultSet.ItemsPerPage # Recurse $items += Get-OctopusItems -OctopusUri $OctopusUri -ApiKey $ApiKey -SkipCount $SkipCount } } else { return $resultSet } # Return results return $items } # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "Default" $projectGroupName = "MyProjectGroup" $projectGroupDescription = "MyDescription" # Get spaces $spaces = Get-OctopusItems -OctopusUri "$octopusURL/api/spaces" -ApiKey $octopusAPIKey $space = $spaces | Where-Object { $_.Name -eq $spaceName } # Create project group payload $projectGroupJson = @{ Id = $null Name = $projectGroupName EnvironmentIds = @() Links = $null RetentionPolicyId = $null Description = $projectGroupDescription } # Create project group Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/projectgroups" -Body ($projectGroupJson | ConvertTo-Json -Depth 10) -Headers $header ```
PowerShell (Octopus.Client) ```powershell # Load assembly Add-Type -Path 'path:\to\Octopus.Client.dll' $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "Default" $projectGroupName = "MyProjectGroup" $projectGroupDescription = "MyDescription" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey) $repository = New-Object Octopus.Client.OctopusRepository($endpoint) $client = New-Object Octopus.Client.OctopusClient($endpoint) # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Create project group object $projectGroup = New-Object Octopus.Client.Model.ProjectGroupResource $projectGroup.Description = $projectGroupDescription $projectGroup.Name = $projectGroupName $projectGroup.EnvironmentIds = $null $projectGroup.RetentionPolicyId = $null $repositoryForSpace.ProjectGroups.Create($projectGroup) ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "Default"; var projectGroupName = "MyProjectGroup"; var projectGroupDescription = "My Description"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); // Get space var space = repository.Spaces.FindByName(spaceName); var spaceRepository = client.ForSpace(space); // Create project group object var projectGroup = new Octopus.Client.Model.ProjectGroupResource(); projectGroup.Description = projectGroupDescription; projectGroup.Name = projectGroupName; projectGroup.EnvironmentIds = null; projectGroup.RetentionPolicyId = null; // Create the project group spaceRepository.ProjectGroups.Create(projectGroup); ```
Python3 ```python import json import requests from requests.api import get, head def get_octopus_resource(uri, headers, skip_count = 0): items = [] skip_querystring = "" if '?' in uri: skip_querystring = '&skip=' else: skip_querystring = '?skip=' response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers) response.raise_for_status() # Get results of API call results = json.loads(response.content.decode('utf-8')) # Store results if 'Items' in results.keys(): items += results['Items'] # Check to see if there are more results if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']): skip_count += results['ItemsPerPage'] items += get_octopus_resource(uri, headers, skip_count) else: return results # return results return items # Define Octopus server variables octopus_server_uri = 'https://your-octopus-url' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} space_name = "Default" project_group_name = "MyProjectGroup" project_group_description = "My description" # Get space uri = '{0}/api/spaces'.format(octopus_server_uri) spaces = get_octopus_resource(uri, headers) space = next((x for x in spaces if x['Name'] == space_name), None) # Create json project_group_json = { 'Id': None, 'Name': project_group_name, 'EnvironmentIds': None, 'Links': None, 'RetentionPolicyId': None, 'Description': project_group_description } # Create project group uri = '{0}/api/{1}/projectgroups'.format(octopus_server_uri, space['Id']) response = requests.post(uri, headers=headers, json=project_group_json) response.raise_for_status() ```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" projectGroupName := "MyProjectGroup" projectGroupDescription := "My description" // Get space space := GetSpace(apiURL, APIKey, spaceName) // Create client client := octopusAuth(apiURL, APIKey, space.ID) // Create project group object projectGroup := octopusdeploy.NewProjectGroup(projectGroupName) projectGroup.Description = projectGroupDescription projectGroup.EnvironmentIDs = nil projectGroup.RetentionPolicyID = octopusdeploy.NewDisplayInfo().Label // Create project group client.ProjectGroups.Add(projectGroup) } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } ```
Java

```java
import com.octopus.sdk.Repository;
import com.octopus.sdk.domain.ProjectGroup;
import com.octopus.sdk.domain.Space;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;
import com.octopus.sdk.model.projectgroup.ProjectGroupResource;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;
import java.util.Optional;

public class CreateProjectGroup {

  static final String octopusServerUrl = "http://localhost:8065";
  // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();
    final Repository repo = new Repository(client);
    final Optional<Space> space = repo.spaces().getByName("TheSpaceName");
    if (!space.isPresent()) {
      System.out.println("No space named 'TheSpaceName' exists on server");
      return;
    }

    final ProjectGroupResource projectGroupResource = new ProjectGroupResource("TheProjectGroupName");
    final ProjectGroup createdProjectGroup = space.get().projectGroups().create(projectGroupResource);
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    return OctopusClientFactory.createClient(connectData);
  }
}
```
# Coordinating multiple projects

Source: https://octopus.com/docs/octopus-rest-api/examples/projects/coordinating-multiple-projects.md

These samples show how to perform various tasks related to project coordination. See the [OctopusDeploy-Api](https://github.com/OctopusDeploy/OctopusDeploy-Api) repository for further API documentation and examples using the [raw REST API](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/REST/PowerShell) or Octopus.Client in [C#](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/Octopus.Client/Csharp), [PowerShell](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/Octopus.Client/PowerShell) or [LINQPad](https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/Octopus.Client/LINQPad).

:::div{.success}
These examples use the [Octopus.Client](/docs/octopus-rest-api/octopus.client/) library; see the [Loading in an Octopus Step](/docs/octopus-rest-api/octopus.client/using-client-in-octopus/) section of the [Octopus.Client](/docs/octopus-rest-api/octopus.client) documentation for details on how to load the library from inside Octopus using PowerShell or C# Script steps.
:::

## Querying the current state

The best way to get the current state for one or more projects is to use the Dashboard API, which is also used by the dashboards in the Web UI:

**Octopus.Client**

```csharp
var globalDashboard = repository.Dashboards.GetDashboard().Items;
var projectDashboard = repository.Dashboards.GetDynamicDashboard(projects, environments).Items;
```

**PowerShell**

```powershell
$repository.Dashboards.GetDashboard().Items
```

**HTTP**

```
http://localhost/api/dashboard
```

## Viewing recent deployments

The following code returns the deployments started in the last 7 days:

```csharp
var projects = repository.Projects.FindAll().Select(p => p.Id).ToArray();
var environments = repository.Environments.FindAll().Select(e => e.Id).ToArray();

List<DeploymentResource> recentDeployments = new List<DeploymentResource>();
var after = DateTimeOffset.Now.AddDays(-7);

repository.Deployments.Paginate(projects, environments, page =>
    {
        recentDeployments.AddRange(page.Items.Where(d => d.Created >= after));
        // Deployments are returned most recent first
        return page.Items.All(i => i.Created >= after);
    }
);
```

## Promoting a group of projects

This example finds all the releases that are in UAT but not Production. It then queues them for deployment to Production and waits for them to complete.
```csharp var environments = repository.Environments.GetAll(); var testEnvId = environments.First(e => e.Name == "UAT").Id; var prodEnvId = environments.First(e => e.Name == "Prod").Id; var current = repository.Dashboards.GetDashboard().Items; var toBePromoted = from d in current where d.EnvironmentId == testEnvId && d.State == TaskState.Success let prod = current.FirstOrDefault(p => p.EnvironmentId == prodEnvId && p.ProjectId == d.ProjectId && p.TenantId == d.TenantId) where prod == null || prod.ReleaseId != d.ReleaseId select new DeploymentResource { ProjectId = d.ProjectId, ReleaseId = d.ReleaseId, ChannelId = d.ChannelId, TenantId = d.TenantId, EnvironmentId = prodEnvId }; var tasks = toBePromoted .Select(d => repository.Deployments.Create(d)) .Select(d => repository.Tasks.Get(d.TaskId)) .ToArray(); repository.Tasks.WaitForCompletion(tasks, timeoutAfterMinutes: 0); var completed = repository.Tasks.Get(tasks.Select(t => t.Id).ToArray()); if(completed.Any(c => c.State != TaskState.Success)) throw new Exception("One or more projects did not complete successfully"); ``` ## Queuing a project to run later This example re-queues the currently executing project at 3am the next day. ```csharp var releaseId = OctopusParameters["Octopus.Web.ReleaseLink"].Split('/').Last(); var tomorrow3amServerTime = new DateTimeOffset(DateTimeOffset.Now.Date, DateTimeOffset.Now.Offset).AddDays(1).AddHours(3); repository.Deployments.Create( new DeploymentResource() { ReleaseId = releaseId, ProjectId = OctopusParameters["Octopus.Project.Id"], ChannelId = OctopusParameters["Octopus.Release.Channel.Id"], EnvironmentId = OctopusParameters["Octopus.Environment.Id"], QueueTime = tomorrow3amServerTime } ); Console.WriteLine($"Queued for {tomorrow3amServerTime}"); ``` ## Failing a deployment if another deployment is running This example uses the dynamic dashboard API to check whether a different project is currently deploying to the same environment. 
Note that Octopus [restricts](/docs/administration/managing-infrastructure/run-multiple-processes-on-a-target-simultaneously) what can run at the same time already. ```csharp var otherProject = repository.Projects.FindByName("Other Project"); var environmentId = OctopusParameters["Octopus.Environment.Id"]; var dash = repository.Dashboards.GetDynamicDashboard(new[] { otherProject.Id }, new[] { environmentId }); if (dash.Items.Any(i => i.State == TaskState.Queued || i.State == TaskState.Executing)) throw new Exception($"{otherProject.Name} is currently queued or executing"); ``` ## Failing a deployment if a dependency is not deployed This example retrieves the last release to the same environment of a different project and fails if it is not the expected release version. ```csharp var requiredVersion = OctopusParameters["OtherProjectRequiredVersion"]; var otherProject = repository.Projects.FindByName("Other Project"); var environmentId = OctopusParameters["Octopus.Environment.Id"]; var dash = repository.Dashboards.GetDynamicDashboard(new[] { otherProject.Id }, new[] { environmentId }); var last = dash.Items.SingleOrDefault(i => i.IsCurrent); if (last == null || last.ReleaseVersion != requiredVersion) throw new Exception($"This project requires version {requiredVersion} of {otherProject.Name} to be deployed to the same environment"); ``` ## Triggering and waiting for another project This example finds the latest release for a different project and deploys it if it is not currently deployed to the environment. 
```csharp var environmentId = OctopusParameters["Octopus.Environment.Id"]; var otherProject = repository.Projects.FindByName("Other Project"); var latestRelease = repository.Projects.GetReleases(otherProject).Items.FirstOrDefault(); var dash = repository.Dashboards.GetDynamicDashboard(new[] { otherProject.Id }, new[] { environmentId }); var last = dash.Items.Single(i => i.IsCurrent); if (latestRelease != null && last.ReleaseId != latestRelease.Id) { var deployment = repository.Deployments.Create( new DeploymentResource() { ReleaseId = latestRelease.Id, ProjectId = latestRelease.ProjectId, ChannelId = latestRelease.ChannelId, EnvironmentId = environmentId, } ); var task = repository.Tasks.Get(deployment.TaskId); repository.Tasks.WaitForCompletion(task); } ``` ## Waiting for another project to reach a certain stage This example builds on the previous, by waiting until a particular step is complete instead of the whole task. ```csharp // instead of the line repository.Tasks.WaitForCompletion(task); ActivityStatus step1Status = ActivityStatus.Pending; do { Thread.Sleep(1000); var details = repository.Tasks.GetDetails(task); var log = details.ActivityLogs.Single(); if (log.Status != ActivityStatus.Pending) step1Status = log.Children.Single(c => c.Name.StartsWith("Step 1:")).Status; step1Status.Dump(); } while (step1Status == ActivityStatus.Pending || step1Status == ActivityStatus.Running); task = repository.Tasks.Refresh(task); ``` # Create a project Source: https://octopus.com/docs/octopus-rest-api/examples/projects/create-project.md This script demonstrates how to programmatically create a project in Octopus Deploy. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to use - Name of the project - Name of the project group to add the project to - Name of the lifecycle to use for the project ## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "default" $projectName = "MyProject" $projectDescription = "MyDescription" $projectGroupName = "Default project group" $lifecycleName = "Default lifecycle" # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName} # Get project group $projectGroup = (Invoke-RestMethod -Method Get "$octopusURL/api/$($space.Id)/projectgroups/all" -Headers $header) | Where-Object {$_.Name -eq $projectGroupName} # Get Lifecycle $lifeCycle = (Invoke-RestMethod -Method Get "$octopusURL/api/$($space.Id)/lifecycles/all" -Headers $header) | Where-Object {$_.Name -eq $lifecycleName} # Create project json payload $jsonPayload = @{ Name = $projectName Description = $projectDescription ProjectGroupId = $projectGroup.Id LifeCycleId = $lifeCycle.Id } # Create project Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/projects" -Body ($jsonPayload | ConvertTo-Json -Depth 10) -Headers $header ```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "path\to\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $projectName = "MyProject" $projectGroupName = "Default project group" $lifecycleName = "Default lifecycle" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project group $projectGroup = $repositoryForSpace.ProjectGroups.FindByName($projectGroupName) # Get lifecycle $lifecycle = $repositoryForSpace.Lifecycles.FindByName($lifecycleName) # Create new project $project = $repositoryForSpace.Projects.CreateOrModify($projectName, $projectGroup, $lifecycle) $project.Save() } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; string spaceName = "default"; string projectName = "MyProject"; string projectGroupName = "Default project group"; string lifecycleName = "Default lifecycle"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get project group var projectGroup = repositoryForSpace.ProjectGroups.FindByName(projectGroupName); // Get lifecycle var lifecycle = repositoryForSpace.Lifecycles.FindByName(lifecycleName); // Create project var project = repositoryForSpace.Projects.CreateOrModify(projectName, projectGroup, lifecycle); project.Save(); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3

```python
import json
import requests

octopus_server_uri = 'https://your-octopus-url/api'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

def get_octopus_resource(uri):
    response = requests.get(uri, headers=headers)
    response.raise_for_status()
    return json.loads(response.content.decode('utf-8'))

def get_by_name(uri, name):
    resources = get_octopus_resource(uri)
    return next((x for x in resources if x['Name'] == name), None)

space_name = 'Default'
project_name = 'Your new Project Name'
project_description = 'My project created with python'
project_group_name = 'Default Project Group'
lifecycle_name = 'Default Lifecycle'

space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name)
project_group = get_by_name('{0}/{1}/projectgroups/all'.format(octopus_server_uri, space['Id']), project_group_name)
lifecycle = get_by_name('{0}/{1}/lifecycles/all'.format(octopus_server_uri, space['Id']), lifecycle_name)

project = {
    'Name': project_name,
    'Description': project_description,
    'ProjectGroupId': project_group['Id'],
    'LifeCycleId': lifecycle['Id']
}

uri = '{0}/{1}/projects'.format(octopus_server_uri, space['Id'])
response = requests.post(uri, headers=headers, json=project)
response.raise_for_status()
```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" projectName := "MyProject" projectGroupName := "MyProjectGroup" lifeCycleName := "Default Lifecycle" // Get space space := GetSpace(apiURL, APIKey, spaceName) // Create client client := octopusAuth(apiURL, APIKey, space.ID) // Get project group projectGroup := GetProjectGroup(client, projectGroupName, 0) // Get lifecycle lifecycle := GetLifecycle(apiURL, APIKey, space, lifeCycleName, 0) // Create project project := CreateProject(client, lifecycle, projectGroup, projectName) fmt.Println("Created project " + project.ID) } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetProjectGroup(client *octopusdeploy.Client, projectGroupName string, skip int) *octopusdeploy.ProjectGroup { projectGroupsQuery := octopusdeploy.ProjectGroupsQuery { PartialName: projectGroupName, } projectGroups, err := client.ProjectGroups.Get(projectGroupsQuery) if err != nil { log.Println(err) } if len(projectGroups.Items) == projectGroups.ItemsPerPage { // call again projectGroup := GetProjectGroup(client, projectGroupName, (skip + len(projectGroups.Items))) if projectGroup != nil { return projectGroup } } else { // Loop through returned items for _, projectGroup := range 
projectGroups.Items { if projectGroup.Name == projectGroupName { return projectGroup } } } return nil } func GetLifecycle(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, LifecycleName string, skip int) *octopusdeploy.Lifecycle { client := octopusAuth(octopusURL, APIKey, space.ID) lifecycleQuery := octopusdeploy.LifecyclesQuery { PartialName: LifecycleName, } lifecycles, err := client.Lifecycles.Get(lifecycleQuery) if err != nil { log.Println(err) } if len(lifecycles.Items) == lifecycles.ItemsPerPage { // call again lifecycle := GetLifecycle(octopusURL, APIKey, space, LifecycleName, (skip + len(lifecycles.Items))) if lifecycle != nil { return lifecycle } } else { // Loop through returned items for _, lifecycle := range lifecycles.Items { if lifecycle.Name == LifecycleName { return lifecycle } } } return nil } func CreateProject(client *octopusdeploy.Client, lifecycle *octopusdeploy.Lifecycle, projectGroup *octopusdeploy.ProjectGroup, name string) *octopusdeploy.Project { project := octopusdeploy.NewProject(name, lifecycle.ID, projectGroup.ID) project, err := client.Projects.Add(project) if err != nil { log.Println(err) } return project } ```
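The Go helpers above guard against paged results: when a partial-name query returns a full page of items, they call themselves again with an increased skip before giving up on finding an exact name match. The same loop can be written generically in Python; this is a sketch of the paging pattern only, not tied to any particular Octopus endpoint, and `get_page` is a hypothetical callable you supply.

```python
def get_all_pages(get_page):
    """Generic skip/take paging loop, mirroring the recursive paging
    in the Go helpers above.

    get_page(skip) must return (items, items_per_page) for that page.
    Keeps fetching while each page comes back full, then returns the
    accumulated list.
    """
    skip = 0
    results = []
    while True:
        items, per_page = get_page(skip)
        results.extend(items)
        if len(items) < per_page:
            return results
        skip += len(items)
```

With a page size of 10, a 25-item collection is fetched in three calls (10, 10, then 5), and the short final page ends the loop.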
Java ```java import com.octopus.sdk.Repository; import com.octopus.sdk.domain.Lifecycle; import com.octopus.sdk.domain.Project; import com.octopus.sdk.domain.ProjectGroup; import com.octopus.sdk.domain.Space; import com.octopus.sdk.http.ConnectData; import com.octopus.sdk.http.OctopusClient; import com.octopus.sdk.http.OctopusClientFactory; import com.octopus.sdk.model.project.ProjectResource; import java.io.IOException; import java.net.MalformedURLException; import java.net.URL; import java.time.Duration; import java.util.Optional; public class CreateProject { static final String octopusServerUrl = "http://localhost:8065"; // as read from your profile in your Octopus Deploy server static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY"); public static void main(final String... args) throws IOException { final OctopusClient client = createClient(); final Repository repo = new Repository(client); final Optional space = repo.spaces().getByName("TheSpaceName"); if (!space.isPresent()) { System.out.println("No space named 'TheSpaceName' exists on server"); return; } final Optional lifecycle = space.get().lifecycles().getByName("TheLifecycleName"); if (!lifecycle.isPresent()) { System.out.println("No lifecycle named 'TheLifecycleName' exists on server"); return; } final Optional projGroup = space.get().projectGroups().getByName("TheProjectGroupName"); if (!projGroup.isPresent()) { System.out.println("No ProjectGroup named 'TheProjectGroupName' exists on server"); return; } final ProjectResource projectToCreate = new ProjectResource( "TheProjectName", lifecycle.get().getProperties().getId(), projGroup.get().getProperties().getId()); projectToCreate.setAutoCreateRelease(false); final Project createdProject = space.get().projects().create(projectToCreate); } // Create an authenticated connection to your Octopus Deploy Server private static OctopusClient createClient() throws MalformedURLException { final Duration connectTimeout = Duration.ofSeconds(10L); final 
ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout); final OctopusClient client = OctopusClientFactory.createClient(connectData); return client; } } ```
# Delete a project Source: https://octopus.com/docs/octopus-rest-api/examples/projects/delete-project.md This script demonstrates how to programmatically delete a project in Octopus Deploy. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to use - Name of the project :::div{.warning} **This script will delete the project with the specified name. This operation is destructive and cannot be undone. Ensure you have a database backup and take care when running this script or one based on it** ::: ## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "default" $projectName = "MyProject" # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName} # Get project $project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName} # Delete project Invoke-RestMethod -Method Delete -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)" -Headers $header ```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "path\to\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $projectName = "MyProject" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project $project = $repositoryForSpace.Projects.FindByName($projectName) # Delete project $repositoryForSpace.Projects.Delete($project) } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "default"; string projectName = "MyProject"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get project var project = repositoryForSpace.Projects.FindByName(projectName); // Delete project repositoryForSpace.Projects.Delete(project); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3 ```python import json import requests octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} def get_octopus_resource(uri): response = requests.get(uri, headers=headers) response.raise_for_status() return json.loads(response.content.decode('utf-8')) def get_by_name(uri, name): resources = get_octopus_resource(uri) return next((x for x in resources if x['Name'] == name), None) space_name = 'Default' project_name = 'Your Project Name' space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name) project = get_by_name('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id']), project_name) uri = '{0}/{1}/projects/{2}'.format(octopus_server_uri, space['Id'], project['Id']) response = requests.delete(uri, headers=headers) response.raise_for_status() ```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" projectName := "MyProject" // Get reference to space space := GetSpace(apiURL, APIKey, spaceName) // Create client object client := octopusAuth(apiURL, APIKey, space.ID) // Get project project := GetProject(apiURL, APIKey, space, projectName) // delete if not nil if nil != project { fmt.Println("Deleting project " + project.Name + " (" + project.ID + ")") client.Projects.DeleteByID(project.ID) } else { fmt.Println("Project " + projectName + " not found!") } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } ```
Java ```java import com.octopus.sdk.Repository; import com.octopus.sdk.api.ProjectApi; import com.octopus.sdk.domain.Project; import com.octopus.sdk.domain.Space; import com.octopus.sdk.http.ConnectData; import com.octopus.sdk.http.OctopusClient; import com.octopus.sdk.http.OctopusClientFactory; import java.io.IOException; import java.net.MalformedURLException; import java.net.URL; import java.time.Duration; import java.util.Optional; public class DeleteProject { static final String octopusServerUrl = "http://localhost:8065"; // as read from your profile in your Octopus Deploy server static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY"); public static void main(final String... args) throws IOException { final OctopusClient client = createClient(); final Repository repo = new Repository(client); final Optional space = repo.spaces().getByName("TheSpaceName"); if (!space.isPresent()) { System.out.println("No space named 'TheSpaceName' exists on server"); return; } final ProjectApi projectApi = space.get().projects(); final Optional projectToDelete = projectApi.getByName("TheProjectName"); if (!projectToDelete.isPresent()) { System.out.println("No project named 'TheProjectName' exists on server"); return; } projectApi.delete(projectToDelete.get().getProperties()); } // Create an authenticated connection to your Octopus Deploy Server private static OctopusClient createClient() throws MalformedURLException { final Duration connectTimeout = Duration.ofSeconds(10L); final ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout); final OctopusClient client = OctopusClientFactory.createClient(connectData); return client; } } ```
# Delete projects with no process Source: https://octopus.com/docs/octopus-rest-api/examples/projects/delete-projects-with-empty-processes.md This script demonstrates how to programmatically delete projects with no deployment process in Octopus Deploy. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to use :::div{.warning} **This script will delete projects with no deployment process. This operation is destructive and cannot be undone. Ensure you have a database backup and take care when running this script or one based on it** ::: ## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get projects
$projects = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header

# Loop through projects
foreach ($project in $projects) {
    # Get deployment process
    $deploymentProcess = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/deploymentprocesses/$($project.DeploymentProcessId)" -Headers $header

    # Check to see if there's a process
    if (($null -eq $deploymentProcess.Steps) -or ($deploymentProcess.Steps.Count -eq 0)) {
        # Delete project
        Invoke-RestMethod -Method Delete -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)" -Headers $header
    }
}
```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "path\to\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project $projects = $repositoryForSpace.Projects.GetAll() # Loop through projects foreach ($project in $projects) { # Get deployment process $deploymentProcess = $repositoryForSpace.DeploymentProcesses.Get($project.DeploymentProcessId) # Check for empty process if (($null -eq $deploymentProcess.Steps) -or ($deploymentProcess.Steps.Count -eq 0)) { # Delete project $repositoryForSpace.Projects.Delete($project) } } } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "default"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get project var projects = repositoryForSpace.Projects.GetAll(); // Loop through project foreach (var project in projects) { // Get deployment process var deploymentProcess = repositoryForSpace.DeploymentProcesses.Get(project.DeploymentProcessId); // Check for empty process if ((deploymentProcess.Steps == null) || (deploymentProcess.Steps.Count == 0)) { // Delete project repositoryForSpace.Projects.Delete(project); } } } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3 ```python import json import requests octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} def get_octopus_resource(uri): response = requests.get(uri, headers=headers) response.raise_for_status() return json.loads(response.content.decode('utf-8')) def get_by_name(uri, name): resources = get_octopus_resource(uri) return next((x for x in resources if x['Name'] == name), None) space_name = 'Default' space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name) projects = get_octopus_resource('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id'])) for project in projects: process = get_octopus_resource('{0}/{1}/deploymentprocesses/{2}'.format(octopus_server_uri, space['Id'], project['DeploymentProcessId'])) steps = process.get('Steps', None) if steps is None or len(steps) == 0: print('Deleting project \'{0}\' as it has no deployment process'.format(project['Name'])) uri = '{0}/{1}/projects/{2}'.format(octopus_server_uri, space['Id'], project['Id']) response = requests.delete(uri, headers=headers) response.raise_for_status() ```
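The Go example for this page (below) additionally skips version-controlled projects, whose deployment process lives in Git rather than behind `DeploymentProcessId`; the other scripts do not apply that guard. The check can be factored into a small predicate for the Python loop — a hedged sketch, assuming the project resource exposes an `IsVersionControlled` field as the Go example does, and with a hypothetical helper name:

```python
def should_delete(project, deployment_process):
    """Return True when a project is safe to delete because it has no steps.

    Version-controlled projects keep their process in Git, so the
    database-backed process check is not meaningful for them and they
    are always kept.
    """
    if project.get('IsVersionControlled', False):
        return False
    steps = (deployment_process or {}).get('Steps') or []
    return len(steps) == 0
```

In the loop above, the delete would then run only when `should_delete(project, process)` is true.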
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" // Get reference to space space := GetSpace(apiURL, APIKey, spaceName) // Create client object client := octopusAuth(apiURL, APIKey, space.ID) // Get all projects projects, err := client.Projects.GetAll() if err != nil { log.Println(err) } // Loop through projects for i := 0; i < len(projects); i++ { if !projects[i].IsVersionControlled { // Get deployment process deploymentProcess := GetDeploymentProcess(client, projects[i]) // Check for steps if deploymentProcess == nil || deploymentProcess.Steps == nil || len(deploymentProcess.Steps) == 0 { // Delete project fmt.Println("Deleting " + projects[i].Name) client.Projects.DeleteByID(projects[i].ID) } } else { fmt.Println(projects[i].Name + " is using version control, skipping") } } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetDeploymentProcess(client *octopusdeploy.Client, project *octopusdeploy.Project) *octopusdeploy.DeploymentProcess { deploymentProcess, err := client.DeploymentProcesses.GetByID(project.DeploymentProcessID) if err != nil { log.Println(err) } return deploymentProcess } ```
# Disable project triggers Source: https://octopus.com/docs/octopus-rest-api/examples/projects/disable-project-triggers.md This script demonstrates how to programmatically disable triggers for a project in Octopus Deploy. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to use - Name of the project ## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "Default" $projectName = "MyProject" # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName} # Get project $project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName} # Get project triggers $projectTriggers = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/triggers" -Headers $header # Loop through triggers foreach ($projectTrigger in $projectTriggers.Items) { # Disable the trigger $projectTrigger.IsDisabled = $true Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/projecttriggers/$($projectTrigger.Id)" -Body ($projectTrigger | ConvertTo-Json -Depth 10) -Headers $header } ```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "c:\octopus.client\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $projectName = "MyProject" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project $project = $repositoryForSpace.Projects.FindByName($projectName) # Get project triggers $projectTriggers = $repositoryForSpace.Projects.GetAllTriggers($project) # Loop through triggers foreach ($projectTrigger in $projectTriggers) { # Disable trigger $projectTrigger.IsDisabled = $true $repositoryForSpace.ProjectTriggers.Modify($projectTrigger) | Out-Null } } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "default"; string projectName = "MyProject"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get project var project = repositoryForSpace.Projects.FindByName(projectName); // Get project triggers var projectTriggers = repositoryForSpace.Projects.GetAllTriggers(project); foreach (var projectTrigger in projectTriggers) { // Disable trigger projectTrigger.IsDisabled = true; repositoryForSpace.ProjectTriggers.Modify(projectTrigger); } } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3 ```python import json import requests octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} def get_octopus_resource(uri): response = requests.get(uri, headers=headers) response.raise_for_status() return json.loads(response.content.decode('utf-8')) def get_by_name(uri, name): resources = get_octopus_resource(uri) return next((x for x in resources if x['Name'] == name), None) space_name = 'Default' project_name = 'Your project' space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name) project = get_by_name('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id']), project_name) project_triggers = get_octopus_resource('{0}/{1}/projects/{2}/triggers'.format(octopus_server_uri, space['Id'], project['Id'])) for trigger in project_triggers['Items']: print('Disabling project trigger {0} ({1})'.format(trigger['Name'], trigger['Id'])) trigger['IsDisabled'] = True uri = '{0}/{1}/projecttriggers/{2}'.format(octopus_server_uri, space['Id'], trigger['Id']) response = requests.put(uri, headers=headers, json=trigger) response.raise_for_status() ```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" projectName := "MyProject" // Get reference to space space := GetSpace(apiURL, APIKey, spaceName) // Create client object client := octopusAuth(apiURL, APIKey, space.ID) // Get project project := GetProject(apiURL, APIKey, space, projectName) // Get project triggers projectTriggers, err := client.ProjectTriggers.GetByProjectID(project.ID) if err != nil { log.Println(err) } // Loop through the project triggers for i := 0; i < len(projectTriggers); i++ { projectTriggers[i].IsDisabled = true var projectTrigger = *projectTriggers[i] client.ProjectTriggers.Update(projectTrigger) } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } ```
# Deactivate projects Source: https://octopus.com/docs/octopus-rest-api/examples/projects/enable-disable-project.md This script demonstrates how to programmatically enable or disable an Octopus [project](/docs/projects). ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to use - Name of the project - Boolean value for enabled ## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "Default" $projectName = "MyProject" $projectEnabled = $False # Get space $spaces = Invoke-RestMethod -Uri "$octopusURL/api/spaces?partialName=$([uri]::EscapeDataString($spaceName))&skip=0&take=100" -Headers $header $space = $spaces.Items | Where-Object { $_.Name -eq $spaceName } # Get project $projects = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/projects?partialName=$([uri]::EscapeDataString($projectName))&skip=0&take=100" -Headers $header $project = $projects.Items | Where-Object { $_.Name -eq $projectName } # Enable/Disable project $project.IsDisabled = !$projectEnabled # Save project changes Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)" -Headers $header -Body ($project | ConvertTo-Json -Depth 10) ```
Python3

```python
import json
import requests
import sys

# instantiate working variables
octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

space_name = 'Default'
project_name = 'ProjectName'
# Set disable_project to True to disable, or False to enable.
disable_project = True

# Get Space
spaces = requests.get('{url}/api/spaces?partialName={space}&skip=0&take=100'.format(url=octopus_server_uri, space=space_name), headers=headers)
spaces_data = json.loads(spaces.text)
space = next((item for item in spaces_data['Items'] if item['Name'] == space_name), None)
if space is None:
    sys.exit('The Space with name {spaceName} cannot be found.'.format(spaceName=space_name))

# Get Project
projects = requests.get('{url}/api/{spaceID}/projects?partialName={projectName}&skip=0&take=100'.format(url=octopus_server_uri, spaceID=space['Id'], projectName=project_name), headers=headers)
projects_data = json.loads(projects.text)
project = next((item for item in projects_data['Items'] if item['Name'] == project_name), None)
if project is None:
    sys.exit('Project with name {projectName} cannot be found.'.format(projectName=project_name))

# Enable/Disable Project
project['IsDisabled'] = disable_project

# Save Changes
uri = '{url}/api/{spaceID}/projects/{projectID}'.format(url=octopus_server_uri, spaceID=space['Id'], projectID=project['Id'])
change = requests.put(uri, headers=headers, data=json.dumps(project))
if change.status_code == 200:
    print('Request Successful.')
else:
    print('Error - Request Code: {code}'.format(code=change.status_code))
```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" projectName := "MyProject" enabled := true // Get reference to space space := GetSpace(apiURL, APIKey, spaceName) // Create client object client := octopusAuth(apiURL, APIKey, space.ID) // Get project project := GetProject(apiURL, APIKey, space, projectName) // Enable/disable project project.IsDisabled = !enabled client.Projects.Update(project) } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } ```
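Each script on this page performs the same mutation: flip `IsDisabled` on the project resource, then PUT the full resource back to `/api/{space}/projects/{id}`. That step can be isolated from the HTTP calls so it is easy to unit-test — a minimal sketch, with illustrative helper names that are not part of any Octopus SDK:

```python
def set_project_enabled(project, enabled):
    """Return a copy of a project resource dict with IsDisabled set.

    Octopus models this as a 'disabled' flag, so enabling a project
    means IsDisabled = False.
    """
    updated = dict(project)
    updated['IsDisabled'] = not enabled
    return updated

def project_update_url(octopus_url, space_id, project_id):
    # Matches the PUT target used by the scripts on this page.
    return '{0}/api/{1}/projects/{2}'.format(octopus_url.rstrip('/'), space_id, project_id)
```

The updated dict is then serialized and PUT to the URL exactly as the scripts above do.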
# Export projects

Source: https://octopus.com/docs/octopus-rest-api/examples/projects/export-projects.md

This script exports projects from an Octopus space so that they can be imported into a different space on the same instance or into a separate Octopus instance.

:::div{.hint}
**Note:** Please note there are some items to consider before using this script:

- This script uses an API endpoint introduced in **Octopus 2021.1** for the [Export/Import Projects feature](/docs/projects/export-import). Using this script in earlier versions of Octopus will not work.
- Automating the export of projects as part of a backup/restore process is **not recommended**. See our [supported scenarios](/docs/projects/export-import/#scenarios) when using the API from this feature.
:::

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space where the projects to be exported can be found
- A list of project names to be exported
- A password to protect sensitive values in the exported data
- Boolean indicating whether or not to wait for the export task to finish
- Timeout in seconds to wait before attempting to cancel the task

## Script
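Before the full scripts, the shape of the request body POSTed to the export endpoint can be sketched in isolation. This is a minimal sketch; the helper function name and example project ids are illustrative, but the body structure matches the payload the scripts below build:

```python
# Sketch of the JSON body sent to /api/{spaceId}/projects/import-export/export
# (Octopus 2021.1+). Function name and example ids are illustrative.

def build_export_body(project_ids, password):
    """Build the request body for the project export endpoint."""
    return {
        # Ids of the projects to include in the export archive
        'IncludedProjectIds': list(project_ids),
        # Password protecting sensitive values in the exported data
        'Password': {
            'HasValue': True,
            'NewValue': password,
        },
    }

body = build_export_body(['Projects-101', 'Projects-102'], 'S3cret!')
print(body['IncludedProjectIds'])
```

The POST returns a `TaskId` for the export server task, which the scripts then poll until the task succeeds, fails, or times out.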
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

# Provide the space name for the projects to export.
$spaceName = "Default"

# Provide a list of project names to export.
$projectNames = @("Project A", "Project B")

# Provide a password for the export zip file
$exportTaskPassword = ""

# Wait for the export task to finish?
$exportTaskWaitForFinish = $True

# Provide a timeout for the export task to be canceled.
$exportTaskCancelInSeconds = 300

$octopusURL = $octopusURL.TrimEnd('/')

# Get Space
$spaces = Invoke-RestMethod -Uri "$octopusURL/api/spaces?partialName=$([uri]::EscapeDataString($spaceName))&skip=0&take=100" -Headers $header
$space = $spaces.Items | Where-Object { $_.Name -eq $spaceName }
$exportTaskSpaceId = $space.Id

$exportTaskProjectIds = @()
if (![string]::IsNullOrWhiteSpace($projectNames)) {
    @(($projectNames -Split "`n").Trim()) | ForEach-Object {
        if (![string]::IsNullOrWhiteSpace($_)) {
            Write-Verbose "Working on: '$_'"
            $projectName = $_.Trim()
            if ([string]::IsNullOrWhiteSpace($projectName)) {
                throw "Project name is empty"
            }
            $projects = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/projects?partialName=$([uri]::EscapeDataString($projectName))&skip=0&take=100" -Headers $header
            $project = $projects.Items | Where-Object { $_.Name -eq $projectName }
            $exportTaskProjectIds += $project.Id
        }
    }
}

$exportBody = @{
    IncludedProjectIds = $exportTaskProjectIds;
    Password           = @{
        HasValue = $True;
        NewValue = $exportTaskPassword;
    }
}

$exportBodyAsJson = $exportBody | ConvertTo-Json
$exportBodyPostUrl = "$octopusURL/api/$($exportTaskSpaceId)/projects/import-export/export"
Write-Host "Kicking off export run by posting to $exportBodyPostUrl"
Write-Verbose "Payload: $exportBodyAsJson"
$exportResponse = Invoke-RestMethod $exportBodyPostUrl -Method POST -Headers $header -Body $exportBodyAsJson

$exportServerTaskId = $exportResponse.TaskId
Write-Host "The task id of the new task is $exportServerTaskId"
Write-Host "Export task was successfully invoked, you can access the task: $octopusURL/app#/$exportTaskSpaceId/tasks/$exportServerTaskId"

if ($exportTaskWaitForFinish -eq $true) {
    Write-Host "The setting to wait for completion was set, waiting until task has finished"
    $startTime = Get-Date
    $currentTime = Get-Date
    $dateDifference = $currentTime - $startTime

    $taskStatusUrl = "$octopusURL/api/$exportTaskSpaceId/tasks/$exportServerTaskId"
    $numberOfWaits = 0

    while ($dateDifference.TotalSeconds -lt $exportTaskCancelInSeconds) {
        Write-Host "Waiting 5 seconds to check status"
        Start-Sleep -Seconds 5
        $taskStatusResponse = Invoke-RestMethod $taskStatusUrl -Headers $header
        $taskStatusResponseState = $taskStatusResponse.State

        if ($taskStatusResponseState -eq "Success") {
            Write-Host "The task has finished with a status of Success"
            $artifactsUrl = "$octopusURL/api/$exportTaskSpaceId/artifacts?regarding=$exportServerTaskId"
            Write-Host "Checking for artifacts from $artifactsUrl"
            $artifacts = Invoke-RestMethod $artifactsUrl -Method GET -Headers $header
            $exportArtifact = $artifacts.Items | Where-Object { $_.Filename -like "Octopus-Export-*.zip" }
            Write-Host "Export task successfully completed, you can download the export archive: $octopusURL$($exportArtifact.Links.Content)"
            exit 0
        }
        elseif ($taskStatusResponseState -eq "Failed" -or $taskStatusResponseState -eq "Canceled") {
            Write-Host "The task has finished with a status of $taskStatusResponseState status, completing"
            exit 1
        }

        $numberOfWaits += 1
        if ($numberOfWaits -ge 10) {
            Write-Host "The task state is currently $taskStatusResponseState"
            $numberOfWaits = 0
        }
        else {
            Write-Host "The task state is currently $taskStatusResponseState"
        }

        $startTime = $taskStatusResponse.StartTime
        if ($null -eq $startTime -or [string]::IsNullOrWhiteSpace($startTime) -eq $true) {
            Write-Host "The task is still queued, let's wait a bit longer"
            $startTime = Get-Date
        }
        $startTime = [DateTime]$startTime
        $currentTime = Get-Date
        $dateDifference = $currentTime - $startTime
    }

    Write-Host "The cancel timeout has been reached, cancelling the export task"
    Invoke-RestMethod "$octopusURL/api/$exportTaskSpaceId/tasks/$exportServerTaskId/cancel" -Headers $header -Method Post | Out-Null
    Write-Host "Exiting with an error code of 1 because we reached the timeout"
    exit 1
}
```
Python3

```python
import json
import time
from datetime import datetime

import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # Return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

space_name = "Default"
project_names = ["MyProject"]
export_task_password = ""
export_task_wait_for_finish = True
export_task_cancel_in_seconds = 300

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Loop through projects
export_task_project_ids = []
uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)

for project_name in project_names:
    # Get project
    project = next((x for x in projects if x['Name'] == project_name), None)
    if project is not None:
        # Add Id
        export_task_project_ids.append(project['Id'])

# Create export body
export_body = {
    'IncludedProjectIds': export_task_project_ids,
    'Password': {
        'HasValue': True,
        'NewValue': export_task_password
    }
}

# Start export task
uri = '{0}/api/{1}/projects/import-export/export'.format(octopus_server_uri, space['Id'])
print('Kicking off export run by posting to {0}'.format(uri))
response = requests.post(uri, headers=headers, json=export_body)
response.raise_for_status()

# Get results of API call
results = json.loads(response.content.decode('utf-8'))

# Get the task Id
export_task_id = results['TaskId']
print('The task id of the new task is {0}'.format(export_task_id))
print('Export task was successfully invoked, you can access the task {0}/app#/{1}/tasks/{2}'.format(octopus_server_uri, space['Id'], export_task_id))

if export_task_wait_for_finish:
    print('The setting to wait for completion was set, waiting until the task has finished')
    start_time = datetime.now()
    current_time = datetime.now()
    date_difference = current_time - start_time
    number_of_waits = 0

    while date_difference.seconds < export_task_cancel_in_seconds:
        print('Waiting 5 seconds')
        time.sleep(5)
        uri = '{0}/api/{1}/tasks/{2}'.format(octopus_server_uri, space['Id'], export_task_id)
        response = requests.get(uri, headers=headers)
        response.raise_for_status()
        results = json.loads(response.content.decode('utf-8'))

        if results['State'] == 'Success':
            print('The task has finished successfully')
            uri = '{0}/api/{1}/artifacts?regarding={2}'.format(octopus_server_uri, space['Id'], export_task_id)
            print('Checking for artifacts from {0}'.format(uri))
            artifact = get_octopus_resource(uri, headers)
            print('Export task successfully completed, you can download the export archive: {0}{1}'.format(octopus_server_uri, artifact[0]['Links']['Content']))
            exit(0)
        elif results['State'] == 'Failed' or results['State'] == 'Cancelled':
            print('The task finished with a status of {0}'.format(results['State']))
            exit(1)

        number_of_waits += 1
        if number_of_waits >= 10:
            number_of_waits = 0
        print('The task is currently {0}'.format(results['State']))

        if results['StartTime'] is None or results['StartTime'] == '':
            print('The task is still queued, let us wait a bit longer')
            start_time = datetime.now()

        current_time = datetime.now()
        date_difference = current_time - start_time

    print('The cancel timeout has been reached, cancelling the export task')
    uri = '{0}/api/{1}/tasks/{2}/cancel'.format(octopus_server_uri, space['Id'], export_task_id)
    response = requests.post(uri, headers=headers)
    response.raise_for_status()
```
Go

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"
	"time"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

type ExportProject struct {
	IncludedProjectIds []string
	Password           *octopusdeploy.SensitiveValue
}

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}

	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	projectNames := []string{"MyProject"}
	exportPassword := ""

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Loop through projects
	exportProjectIds := []string{}
	for i := 0; i < len(projectNames); i++ {
		// Get the project
		project := GetProject(apiURL, APIKey, space, projectNames[i])
		if project != nil {
			exportProjectIds = append(exportProjectIds, project.ID)
		}
	}

	// Build body
	password := octopusdeploy.NewSensitiveValue(exportPassword)
	exportObject := ExportProject{
		IncludedProjectIds: exportProjectIds,
		Password:           password,
	}

	// Export the projects
	ExportProjects(apiURL, APIKey, space, exportObject, true, 300)
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}

func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project {
	// Create client
	client := octopusAuth(octopusURL, APIKey, space.ID)

	projectsQuery := octopusdeploy.ProjectsQuery{
		Name: projectName,
	}

	// Get specific project object
	projects, err := client.Projects.Get(projectsQuery)
	if err != nil {
		log.Println(err)
	}

	for _, project := range projects.Items {
		if project.Name == projectName {
			return project
		}
	}

	return nil
}

func ExportProjects(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, exportProjects ExportProject, waitForFinish bool, exportTaskCancelInSeconds int) {
	// Create http client
	httpClient := &http.Client{}
	exportTaskUrl := octopusURL.String() + "/api/" + space.ID + "/projects/import-export/export"

	// Make request
	jsonBody, err := json.Marshal(exportProjects)
	if err != nil {
		log.Println(err)
	}

	request, _ := http.NewRequest("POST", exportTaskUrl, bytes.NewBuffer(jsonBody))
	request.Header.Set("X-Octopus-ApiKey", APIKey)

	response, err := httpClient.Do(request)
	if err != nil {
		log.Println(err)
	}

	responseData, err := ioutil.ReadAll(response.Body)
	if err != nil {
		log.Println(err)
	}

	var serverTaskRaw interface{}
	jsonErr := json.Unmarshal(responseData, &serverTaskRaw)
	if jsonErr != nil {
		log.Println(jsonErr)
	}

	// Get the task id
	serverTask := serverTaskRaw.(map[string]interface{})
	serverTaskId := serverTask["TaskId"]
	fmt.Println("The task id of the new task is: " + serverTaskId.(string))
	fmt.Println("Export task was successfully invoked, you can access the task " + octopusURL.String() + "/app#/" + space.ID + "/tasks/" + serverTaskId.(string))

	if waitForFinish {
		fmt.Println("The setting to wait for completion was set, waiting until the task has finished")
		elapsedSeconds := 0
		taskUrl := octopusURL.String() + "/api/" + space.ID + "/tasks/" + serverTaskId.(string)

		for elapsedSeconds < exportTaskCancelInSeconds {
			time.Sleep(5 * time.Second)
			elapsedSeconds += 5

			request, _ := http.NewRequest("GET", taskUrl, nil)
			request.Header.Set("X-Octopus-ApiKey", APIKey)

			response, err := httpClient.Do(request)
			if err != nil {
				log.Println(err)
			}

			responseData, err := ioutil.ReadAll(response.Body)
			if err != nil {
				log.Println(err)
			}

			var serverTaskRaw interface{}
			jsonErr := json.Unmarshal(responseData, &serverTaskRaw)
			if jsonErr != nil {
				log.Println(jsonErr)
			}

			serverTask = serverTaskRaw.(map[string]interface{})
			taskStatus := serverTask["State"]

			if taskStatus.(string) == "Success" {
				fmt.Println("The task has finished successfully")
				artifactUrl := octopusURL.String() + "/api/" + space.ID + "/artifacts?regarding=" + serverTaskId.(string)
				fmt.Println("Checking for artifacts from " + artifactUrl)

				request, _ := http.NewRequest("GET", artifactUrl, nil)
				request.Header.Set("X-Octopus-ApiKey", APIKey)

				response, err := httpClient.Do(request)
				if err != nil {
					log.Println(err)
				}

				responseData, err := ioutil.ReadAll(response.Body)
				if err != nil {
					log.Println(err)
				}

				var artifactRaw interface{}
				jsonErr := json.Unmarshal(responseData, &artifactRaw)
				if jsonErr != nil {
					log.Println(jsonErr)
				}

				artifactsRaw := artifactRaw.(map[string]interface{})
				returnedItems := artifactsRaw["Items"].([]interface{})[0].(map[string]interface{})
				fmt.Println("Export task successfully completed, you can download the export archive: " + octopusURL.String() + returnedItems["Links"].(map[string]interface{})["Content"].(string))
				break
			} else if taskStatus.(string) == "Failed" || taskStatus.(string) == "Cancelled" {
				fmt.Println("The task finished with a status of " + taskStatus.(string))
				break
			}
		}

		if elapsedSeconds >= exportTaskCancelInSeconds {
			cancelUrl := octopusURL.String() + "/api/" + space.ID + "/tasks/" + serverTaskId.(string) + "/cancel"
			request, _ := http.NewRequest("POST", cancelUrl, nil)
			request.Header.Set("X-Octopus-ApiKey", APIKey)

			response, err := httpClient.Do(request)
			if err != nil {
				log.Println(err)
			}
			fmt.Println(response)
		}
	}
}
```
# Find unused projects

Source: https://octopus.com/docs/octopus-rest-api/examples/projects/find-unused-projects.md

This script searches for projects that haven't had a release created within a set number of days. Please note, this script will exclude:

- Projects without _any_ releases.
- Projects that are already disabled.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Disable Old Projects - indicates if the projects should be set to disabled, default is `$false`
- Days Since Last Release - the number of days to allow before considering the project inactive, default is 90

## Script
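The core check each script below performs is the same: compare the `Assembled` timestamp of the project's most recent release against a cutoff. That check can be sketched in isolation (a minimal sketch; the function name and the `now` parameter, added here for testability, are illustrative):

```python
from datetime import datetime, timedelta, timezone

def is_project_unused(assembled_iso, days_since_last_release=90, now=None):
    """Return True if the most recent release is older than the cutoff.

    assembled_iso is an ISO 8601 timestamp with a UTC offset, like the
    Assembled property on an Octopus release.
    """
    now = now or datetime.now(timezone.utc)
    assembled = datetime.fromisoformat(assembled_iso)
    return (now - assembled) > timedelta(days=days_since_last_release)

# Example: a release assembled ~152 days before the reference date
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(is_project_unused('2024-01-01T00:00:00+00:00', 90, now))  # → True
```

Each script simply runs this comparison for every enabled project in every space, collecting the ones that fail it.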
PowerShell (REST API)

```powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

$octopusUrl = "https://your-octopus-url" ## Octopus URL to look at
$octopusApiKey = "API-YOUR-KEY" ## API key of a user who has permission to view all spaces and update projects
$disableOldProjects = $false ## Tells the script to disable the projects that are older than the days since last release
$daysSinceLastRelease = 90 ## The number of days since the last release to be considered unused. Any project without a release created in [90] days is considered inactive.

$cachedResults = @{}

function Invoke-OctopusApi {
    param (
        $octopusUrl,
        $endPoint,
        $spaceId,
        $apiKey,
        $method,
        $item,
        $ignoreCache
    )

    $octopusUrlToUse = $OctopusUrl
    if ($OctopusUrl.EndsWith("/")) {
        $octopusUrlToUse = $OctopusUrl.Substring(0, $OctopusUrl.Length - 1)
    }

    if ([string]::IsNullOrWhiteSpace($SpaceId)) {
        $url = "$octopusUrlToUse/api/$EndPoint"
    }
    else {
        $url = "$octopusUrlToUse/api/$spaceId/$EndPoint"
    }

    try {
        if ($null -ne $item) {
            $body = $item | ConvertTo-Json -Depth 10
            Write-Verbose $body

            Write-Host "Invoking $method $url"
            return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -Body $body -ContentType 'application/json; charset=utf-8'
        }

        if (($null -eq $ignoreCache -or $ignoreCache -eq $false) -and $method.ToUpper().Trim() -eq "GET") {
            Write-Verbose "Checking to see if $url is already in the cache"
            if ($cachedResults.ContainsKey($url) -eq $true) {
                Write-Verbose "$url is already in the cache, returning the result"
                return $cachedResults[$url]
            }
        }
        else {
            Write-Verbose "Ignoring cache."
        }

        Write-Verbose "No data to post or put, calling bog standard Invoke-RestMethod for $url"
        $result = Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -ContentType 'application/json; charset=utf-8'

        if ($cachedResults.ContainsKey($url) -eq $true) {
            $cachedResults.Remove($url)
        }
        Write-Verbose "Adding $url to the cache"
        $cachedResults.add($url, $result)

        return $result
    }
    catch {
        if ($null -ne $_.Exception.Response) {
            if ($_.Exception.Response.StatusCode -eq 401) {
                Write-Error "Unauthorized error returned from $url, please verify API key and try again"
            }
            elseif ($_.Exception.Response.StatusCode -eq 403) {
                Write-Error "Forbidden error returned from $url, please verify API key and try again"
            }
            else {
                Write-Verbose -Message "Error calling $url $($_.Exception.Message) StatusCode: $($_.Exception.Response.StatusCode)"
            }
        }
        else {
            Write-Verbose $_.Exception
        }
    }

    Throw $_.Exception
}

$spaceList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $octopusApiKey -endPoint "spaces?skip=0&take=1000" -spaceId $null -method "GET"
$currentUtcTime = $(Get-Date).ToUniversalTime()
$oldProjectList = @()

foreach ($space in $spaceList.Items) {
    $projectList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $octopusApiKey -endPoint "projects?skip=0&take=10000" -spaceId $space.Id -method "GET"

    foreach ($project in $projectList.Items) {
        if ($project.IsDisabled -eq $true) {
            Write-Verbose "Project $($project.Name) is already disabled."
            continue
        }

        $releaseList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $octopusApiKey -endPoint "projects/$($project.Id)/releases" -spaceId $space.Id -method "GET"
        if ($releaseList.Items.Count -le 0) {
            Write-Verbose "No releases found for $($project.Name)."
            continue
        }

        $assembledDate = [datetime]::Parse($releaseList.Items[0].Assembled)
        $assembledDate = $assembledDate.ToUniversalTime()
        $dateDiff = $currentUtcTime - $assembledDate

        if ($dateDiff.TotalDays -gt $daysSinceLastRelease) {
            $oldProjectList += "$($project.Name) - $($space.Name) last release was $($dateDiff.TotalDays) days ago."

            if ($disableOldProjects -eq $true) {
                $project.IsDisabled = $true
                $updatedProject = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $octopusApiKey -endPoint "projects/$($project.Id)" -spaceId $space.Id -method "PUT" -item $project
                Write-Host "Set the project $($updatedProject.Name) to disabled."
            }
        }
    }
}

Write-Host "The following projects were found to have no releases created in at least $daysSinceLastRelease days."
foreach ($project in $oldProjectList) {
    Write-Host " $project"
}
```
PowerShell (Octopus.Client)

```powershell
# Load assembly
Add-Type -Path 'path:\to\Octopus.Client.dll'

$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$daysSinceLastRelease = 90

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
$client = New-Object Octopus.Client.OctopusClient($endpoint)

$currentUtcTime = $(Get-Date).ToUniversalTime()
$oldProjects = @()

# Loop through spaces
foreach ($space in $repository.Spaces.GetAll()) {
    # Get space
    $space = $repository.Spaces.FindByName($space.Name)
    $repositoryForSpace = $client.ForSpace($space)

    # Get all projects in space
    $projects = $repositoryForSpace.Projects.GetAll()

    # Loop through projects
    foreach ($project in $projects) {
        # Check for disabled
        if ($project.IsDisabled) {
            Write-Host "$($project.Name) is disabled."
            continue
        }

        # Get project releases
        $releases = $repositoryForSpace.Projects.GetReleases($project)
        if ($releases.Items.Count -eq 0) {
            Write-Host "No releases found for $($project.Name)"
            continue
        }

        $assembledDate = [datetime]::Parse($releases.Items[0].Assembled)
        $assembledDate = $assembledDate.ToUniversalTime()
        $dateDiff = $currentUtcTime - $assembledDate

        # Check the length of time
        if ($dateDiff.TotalDays -gt $daysSinceLastRelease) {
            $oldProjects += "$($project.Name) - $($space.Name) last release was $($dateDiff.TotalDays) days ago."
        }
    }
}

Write-Host "The following projects were found to have no releases created in at least $daysSinceLastRelease days"
foreach ($project in $oldProjects) {
    Write-Host "`t$project"
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
DateTime currentUtcTime = DateTime.Now.ToUniversalTime();
System.Collections.Generic.List<string> oldProjects = new System.Collections.Generic.List<string>();
int daysSinceLastRelease = 90;

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

// Loop through all spaces
foreach (var octopusSpace in repository.Spaces.FindAll())
{
    // Get space repository
    var space = repository.Spaces.FindByName(octopusSpace.Name);
    var repositoryForSpace = client.ForSpace(space);

    // Get all projects
    var projects = repositoryForSpace.Projects.GetAll();

    // Loop through projects
    foreach (var project in projects)
    {
        if (project.IsDisabled)
        {
            Console.WriteLine(string.Format("{0} is disabled", project.Name));
            continue;
        }

        // Get releases for project
        var releases = repositoryForSpace.Projects.GetAllReleases(project);

        // Check to see if anything has ever been created
        if (releases.Count == 0)
        {
            Console.WriteLine(string.Format("No releases found for {0}", project.Name));
            continue;
        }

        var assembledDate = releases[0].Assembled.ToUniversalTime();
        var dateDiff = currentUtcTime - assembledDate;

        // Check to see how many days it has been
        if (dateDiff.TotalDays > daysSinceLastRelease)
        {
            oldProjects.Add(string.Format("{0} - {1} last release was {2} days ago.", project.Name, space.Name, dateDiff.TotalDays.ToString()));
        }
    }
}

Console.WriteLine(string.Format("The following projects were found to have no releases created in at least {0} days", daysSinceLastRelease));
foreach (var project in oldProjects)
{
    Console.WriteLine(string.Format("\t {0}", project));
}
```
Python3

```python
import datetime
import json

import requests
from dateutil.parser import parse

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if hasattr(results, 'keys') and 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # Return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

old_projects = []
current_date = datetime.datetime.utcnow()
days_since_last_release = 90

# Get spaces
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)

# Loop through spaces
for space in spaces:
    # Get all projects
    uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
    projects = get_octopus_resource(uri, headers)

    # Loop through projects
    for project in projects:
        # Check to see if it's disabled
        if project['IsDisabled']:
            print('{0} is disabled'.format(project['Name']))
            continue

        # Get releases
        uri = '{0}/api/{1}/projects/{2}/releases'.format(octopus_server_uri, space['Id'], project['Id'])
        releases = get_octopus_resource(uri, headers)

        # Check to see if any exist
        if len(releases) == 0:
            print('No releases found for {0}'.format(project['Name']))
            continue

        # Get the assembled date
        assembled_date = parse(releases[0]['Assembled'])
        assembled_date = assembled_date.replace(tzinfo=None)

        # Calculate the difference
        date_diff = current_date - assembled_date

        if date_diff.days > days_since_last_release:
            old_projects.append('{0} - {1} last release was {2} days ago'.format(project['Name'], space['Name'], date_diff.days))

print('The following projects were found to have no releases created in the last {0} days'.format(days_since_last_release))
for project in old_projects:
    print('\t{0}'.format(project))
```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"
	"time"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}

	APIKey := "API-YOUR-KEY"

	// Get client
	client := octopusAuth(apiURL, APIKey, "")

	// Get current date
	currentDate := time.Now()
	daysSinceLastRelease := 90
	oldProjects := []string{}

	// Get all spaces
	spaces, err := client.Spaces.GetAll()
	if err != nil {
		log.Println(err)
	}

	// Loop through spaces
	for _, space := range spaces {
		spaceClient := octopusAuth(apiURL, APIKey, space.ID)

		// Get all projects for space
		projects, err := spaceClient.Projects.GetAll()
		if err != nil {
			log.Println(err)
		}

		// Loop through projects in space
		for _, project := range projects {
			// Check to see if it is disabled
			if project.IsDisabled {
				fmt.Printf("%[1]s is disabled \n", project.Name)
				continue
			}

			// Get all releases
			projectReleases, err := spaceClient.Projects.GetReleases(project)
			if err != nil {
				log.Println(err)
			}

			if len(projectReleases) == 0 {
				fmt.Printf("No releases found for %[1]s \n", project.Name)
				continue
			}

			// Get assembled date of most recent release
			assembledDate := projectReleases[0].Assembled

			// Calculate difference
			dateDiff := currentDate.Sub(assembledDate).Hours() / 24
			strDateDiff := fmt.Sprintf("%f", dateDiff)

			// Check the difference
			if dateDiff > float64(daysSinceLastRelease) {
				oldProjects = append(oldProjects, (project.Name + " - " + space.Name + " last release was " + strDateDiff + " days ago."))
			}
		}
	}

	fmt.Printf("The following projects were found to have no releases created in at least %[1]d days \n", daysSinceLastRelease)
	for _, project := range oldProjects {
		fmt.Println("\t" + project)
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}

func GetUserRole(client *octopusdeploy.Client, userRoleName string) *octopusdeploy.UserRole {
	// Get all roles
	userRoles, err := client.UserRoles.GetAll()
	if err != nil {
		log.Println(err)
	}

	for _, userRole := range userRoles {
		if userRole.Name == userRoleName {
			return userRole
		}
	}

	return nil
}
```
# Import projects

Source: https://octopus.com/docs/octopus-rest-api/examples/projects/import-projects.md

This script demonstrates how you can import projects into an Octopus space. It uses a previously executed export task from another space as the source for the import.

:::div{.hint}
**Note:** Please note there are some items to consider before using this script:

- This script uses an API endpoint introduced in **Octopus 2021.1** for the [Export/Import Projects feature](/docs/projects/export-import). Using this script in earlier versions of Octopus will not work.
- Automating the import of projects as part of a backup/restore process is **not recommended**. See our [supported scenarios](/docs/projects/export-import/#scenarios) when using the API from this feature.
:::

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space where the projects were exported from
- Name of the space the projects are to be imported into
- The export Server Task Id to use as the source for the import, e.g. `ServerTasks-12345`
- The password used to protect sensitive values in the exported data
- Boolean indicating whether or not to wait for the import task to finish
- Timeout in seconds to wait before attempting to cancel the task

## Script
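As with the export, the import request body can be sketched in isolation before the full scripts. This is a minimal sketch; the helper function name and example ids are illustrative, but the structure matches the payload the scripts below POST:

```python
# Sketch of the JSON body sent to /api/{spaceId}/projects/import-export/import
# (Octopus 2021.1+). Function name and example ids are illustrative.

def build_import_body(source_space_id, export_task_id, password):
    """Build the request body for the project import endpoint."""
    return {
        'ImportSource': {
            'Type': 'space',            # must be lower case
            'SpaceId': source_space_id, # space where the export task ran
            'TaskId': export_task_id,   # e.g. 'ServerTasks-12345'
        },
        # Password that was used to protect the exported data
        'Password': {
            'HasValue': True,
            'NewValue': password,
        },
    }

body = build_import_body('Spaces-1', 'ServerTasks-12345', 'S3cret!')
print(body['ImportSource']['Type'])  # → space
```

Note that the body is POSTed to the *destination* space's `import-export/import` endpoint, while `ImportSource.SpaceId` identifies the *source* space that ran the export task.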
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

# Provide the space name where the export task ran.
$sourceSpaceName = "Export-Space"

# Provide the space name for the projects to be imported into.
$destinationSpaceName = "Default"

# Provide the Export Server task id to use as the source for import e.g. ServerTasks-12345
$exportTaskId = ""

# Provide a password for the import zip file
$importTaskPassword = ""

# Wait for the import task to finish?
$importTaskWaitForFinish = $True

# Provide a timeout for the import task to be canceled.
$importTaskCancelInSeconds = 300

$octopusURL = $octopusURL.TrimEnd('/')

# Get Source Space
$spaces = Invoke-RestMethod -Uri "$octopusURL/api/spaces?partialName=$([uri]::EscapeDataString($sourceSpaceName))&skip=0&take=100" -Headers $header
$space = $spaces.Items | Where-Object { $_.Name -eq $sourceSpaceName }
$exportTaskSpaceId = $space.Id

# Get Destination Space
$spaces = Invoke-RestMethod -Uri "$octopusURL/api/spaces?partialName=$([uri]::EscapeDataString($destinationSpaceName))&skip=0&take=100" -Headers $header
$space = $spaces.Items | Where-Object { $_.Name -eq $destinationSpaceName }
$importTaskSpaceId = $space.Id

$importBody = @{
    ImportSource = @{
        Type    = "space";
        SpaceId = $exportTaskSpaceId;
        TaskId  = $exportTaskId;
    };
    Password     = @{
        HasValue = $True;
        NewValue = $importTaskPassword;
    };
}

$importBodyAsJson = $importBody | ConvertTo-Json
$importBodyPostUrl = "$octopusURL/api/$($importTaskSpaceId)/projects/import-export/import"
Write-Host "Kicking off import run by posting to $importBodyPostUrl"
Write-Verbose "Payload: $importBodyAsJson"
$importResponse = Invoke-RestMethod $importBodyPostUrl -Method POST -Headers $header -Body $importBodyAsJson

$importServerTaskId = $importResponse.TaskId
Write-Host "The task id of the new task is $importServerTaskId"
Write-Host "Import task was successfully invoked, you can access the task: $octopusURL/app#/$importTaskSpaceId/tasks/$importServerTaskId"

if ($importTaskWaitForFinish -eq $true) {
    Write-Host "The setting to wait for completion was set, waiting until task has finished"
    $startTime = Get-Date
    $currentTime = Get-Date
    $dateDifference = $currentTime - $startTime

    $taskStatusUrl = "$octopusURL/api/$importTaskSpaceId/tasks/$importServerTaskId"
    $numberOfWaits = 0

    while ($dateDifference.TotalSeconds -lt $importTaskCancelInSeconds) {
        Write-Host "Waiting 5 seconds to check status"
        Start-Sleep -Seconds 5
        $taskStatusResponse = Invoke-RestMethod $taskStatusUrl -Headers $header
        $taskStatusResponseState = $taskStatusResponse.State

        if ($taskStatusResponseState -eq "Success") {
            Write-Host "The task has finished with a status of Success"
            exit 0
        }
        elseif ($taskStatusResponseState -eq "Failed" -or $taskStatusResponseState -eq "Canceled") {
            Write-Host "The task has finished with a status of $taskStatusResponseState status, completing"
            exit 1
        }

        $numberOfWaits += 1
        if ($numberOfWaits -ge 10) {
            Write-Host "The task state is currently $taskStatusResponseState"
            $numberOfWaits = 0
        }
        else {
            Write-Host "The task state is currently $taskStatusResponseState"
        }

        $startTime = $taskStatusResponse.StartTime
        if ($null -eq $startTime -or [string]::IsNullOrWhiteSpace($startTime) -eq $true) {
            Write-Host "The task is still queued, let's wait a bit longer"
            $startTime = Get-Date
        }
        $startTime = [DateTime]$startTime
        $currentTime = Get-Date
        $dateDifference = $currentTime - $startTime
    }

    Write-Host "The cancel timeout has been reached, cancelling the import task"
    Invoke-RestMethod "$octopusURL/api/$importTaskSpaceId/tasks/$importServerTaskId/cancel" -Headers $header -Method Post | Out-Null
    Write-Host "Exiting with an error code of 1 because we reached the timeout"
    exit 1
}
```
Python3

```python
import json
import time
from datetime import datetime

import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if hasattr(results, 'keys') and 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # Return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

sourceSpaceName = "Default"
destinationSpaceName = "DestinationSpace"
exportTaskId = "ServerTasks-XXXX" # from the export operation
importTaskPassword = "MyFantasticPassword"
importTaskWaitForFinish = True
importTaskCancelInSeconds = 300

# Get destination space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
destinationSpace = next((x for x in spaces if x['Name'] == destinationSpaceName), None)

# Get source space
sourceSpace = next((x for x in spaces if x['Name'] == sourceSpaceName), None)

# Define body of request
importBody = {
    'ImportSource': {
        'Type': 'space', # must be lower case
        'SpaceId': sourceSpace['Id'],
        'TaskId': exportTaskId
    },
    'Password': {
        'HasValue': True,
        'NewValue': importTaskPassword
    }
}

# Execute transfer
uri = '{0}/api/{1}/projects/import-export/import'.format(octopus_server_uri, destinationSpace['Id'])
print('Kicking off import from {0} to {1}'.format(sourceSpaceName, destinationSpaceName))
response = requests.post(uri, headers=headers, json=importBody)
response.raise_for_status()

# Get results
results = json.loads(response.content.decode('utf-8'))
importTaskId = results['TaskId']
print('The task id of the new task is: {0}'.format(importTaskId))

if importTaskWaitForFinish:
    start_time = datetime.now()
    current_time = datetime.now()
    date_difference = current_time - start_time
    number_of_waits = 0

    while date_difference.seconds < importTaskCancelInSeconds:
        print('Waiting 5 seconds to check status')
        time.sleep(5)
        uri = '{0}/api/{1}/tasks/{2}'.format(octopus_server_uri, destinationSpace['Id'], importTaskId)
        response = requests.get(uri, headers=headers)
        response.raise_for_status()
        results = json.loads(response.content.decode('utf-8'))

        if results['State'] == 'Success':
            print('The task has finished successfully')
            exit(0)
        elif results['State'] == 'Failed' or results['State'] == 'Cancelled':
            print('The task finished with a status of {0}'.format(results['State']))
            exit(1)

        number_of_waits += 1
        if number_of_waits >= 10:
            print('The task is currently {0}'.format(results['State']))
            number_of_waits = 0
        else:
            print('The task is currently {0}'.format(results['State']))

        if results['StartTime'] == None or results['StartTime'] == '':
            print('The task is still queued, let us wait a bit longer')
            start_time = datetime.now()

        current_time = datetime.now()
        date_difference = current_time - start_time

    print('The cancel timeout has been reached, cancelling the import task')
    uri = '{0}/api/{1}/tasks/{2}/cancel'.format(octopus_server_uri, destinationSpace['Id'], importTaskId)
    response = requests.post(uri, headers=headers)
    response.raise_for_status()
```
Go

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"
	"time"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

type ImportProject struct {
	ImportSource ImportSource
	Password     *octopusdeploy.SensitiveValue
}

type ImportSource struct {
	Type    string
	SpaceId string
	TaskId  string
}

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	destinationSpaceName := "Destination Space"
	exportPassword := "MyFantasticPassword"
	exportTaskId := "ServerTasks-XXXXX"

	// Get reference to space
	destinationSpace := GetSpace(apiURL, APIKey, destinationSpaceName)

	// Build body
	password := octopusdeploy.NewSensitiveValue(exportPassword)
	importObject := ImportProject{}
	importObject.ImportSource.SpaceId = destinationSpace.ID
	importObject.ImportSource.TaskId = exportTaskId
	importObject.ImportSource.Type = "space"
	importObject.Password = password

	// Import the projects
	ImportProjects(apiURL, APIKey, destinationSpace, importObject, true, 300)
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}

func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project {
	// Create client
	client := octopusAuth(octopusURL, APIKey, space.ID)
	projectsQuery := octopusdeploy.ProjectsQuery{
		Name: projectName,
	}

	// Get specific project object
	projects, err := client.Projects.Get(projectsQuery)
	if err != nil {
		log.Println(err)
	}

	for _, project := range projects.Items {
		if project.Name == projectName {
			return project
		}
	}
	return nil
}

func ImportProjects(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, importProjects ImportProject, waitForFinish bool, taskCancelInSeconds int) {
	// Create http client
	httpClient := &http.Client{}
	importTaskUrl := octopusURL.String() + "/api/" + space.ID + "/projects/import-export/import"

	// Make request
	jsonBody, err := json.Marshal(importProjects)
	if err != nil {
		log.Println(err)
	}
	fmt.Println(string(jsonBody))

	request, _ := http.NewRequest("POST", importTaskUrl, bytes.NewBuffer(jsonBody))
	request.Header.Set("X-Octopus-ApiKey", APIKey)
	response, err := httpClient.Do(request)
	if err != nil {
		log.Println(err)
	}

	responseData, err := ioutil.ReadAll(response.Body)
	if err != nil {
		log.Println(err)
	}
	var serverTaskRaw interface{}
	jsonErr := json.Unmarshal(responseData, &serverTaskRaw)
	if jsonErr != nil {
		log.Println(jsonErr)
	}

	// Get the task id
	serverTask := serverTaskRaw.(map[string]interface{})
	serverTaskId := serverTask["TaskId"]
	fmt.Println("The task id of the new task is: " + serverTaskId.(string))

	if waitForFinish {
		fmt.Println("The setting to wait for completion was set, waiting until the task has finished")
		elapsedSeconds := 0
		taskUrl := octopusURL.String() + "/api/" + space.ID + "/tasks/" + serverTaskId.(string)

		for elapsedSeconds < taskCancelInSeconds {
			time.Sleep(5 * time.Second)
			elapsedSeconds += 5

			request, _ := http.NewRequest("GET", taskUrl, nil)
			request.Header.Set("X-Octopus-ApiKey", APIKey)
			response, err := httpClient.Do(request)
			if err != nil {
				log.Println(err)
			}
			responseData, err := ioutil.ReadAll(response.Body)
			if err != nil {
				log.Println(err)
			}
			var serverTaskRaw interface{}
			jsonErr := json.Unmarshal(responseData, &serverTaskRaw)
			if jsonErr != nil {
				log.Println(jsonErr)
			}
			serverTask = serverTaskRaw.(map[string]interface{})
			taskStatus := serverTask["State"]

			if taskStatus.(string) == "Success" {
				fmt.Println("The task has finished successfully")
				break
			} else if taskStatus.(string) == "Failed" || taskStatus.(string) == "Cancelled" {
				fmt.Println("The task finished with a status of " + taskStatus.(string))
				break
			}
		}

		if elapsedSeconds >= taskCancelInSeconds {
			// Cancel the task via a POST to the cancel endpoint
			cancelUrl := octopusURL.String() + "/api/" + space.ID + "/tasks/" + serverTaskId.(string) + "/cancel"
			request, _ := http.NewRequest("POST", cancelUrl, nil)
			request.Header.Set("X-Octopus-ApiKey", APIKey)
			response, err := httpClient.Do(request)
			if err != nil {
				log.Println(err)
			}
			fmt.Println(response)
		}
	}
}
```
# Create a release with specific version Source: https://octopus.com/docs/octopus-rest-api/examples/releases/create-release-with-specific-version.md This script demonstrates how to programmatically create a release with a specified version number. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to use - Name of the project - Name of the channel - Version number of the release to create ## Script
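Every script below builds the same request body before POSTing to `/api/{space}/releases`: it walks the deployment process template and pairs each package-consuming action with a chosen package version. As a minimal sketch of that shared step (the helper name and sample data are hypothetical, and a static version stands in for the live feed query the full scripts perform):

```python
# Hypothetical helper illustrating the payload the scripts below construct.
def build_release_payload(project_id, channel_id, version, template_packages, resolve_version):
    """Build the JSON body for POST /api/{space}/releases.

    resolve_version is a callable (feed_id, package_id) -> version string,
    standing in for the feed versions query the full scripts perform.
    """
    return {
        'ProjectId': project_id,
        'ChannelId': channel_id,
        'Version': version,
        'SelectedPackages': [
            {
                'ActionName': p['ActionName'],
                'PackageReferenceName': p['PackageReferenceName'],
                'Version': resolve_version(p['FeedId'], p['PackageId']),
            }
            for p in template_packages
        ],
    }

# Example with static data in place of live feed lookups
template_packages = [
    {'ActionName': 'Deploy Web', 'PackageReferenceName': '',
     'FeedId': 'feeds-builtin', 'PackageId': 'MyApp.Web'},
]
payload = build_release_payload('Projects-1', 'Channels-1', '1.0.0.0',
                                template_packages, lambda feed, pkg: '2.1.0')
print(payload['Version'])
```

The full scripts differ only in how they resolve each package version (a REST call or an `Octopus.Client` feed query) and in the client library used to send the request.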
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$projectName = "MyProject"
$releaseVersion = "1.0.0.0"
$channelName = "Default"
$spaceName = "default"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get project
$project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName}

# Get channel
$channel = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/channels" -Headers $header).Items | Where-Object {$_.Name -eq $channelName}

# Create release payload
$releaseBody = @{
    ChannelId        = $channel.Id
    ProjectId        = $project.Id
    Version          = $releaseVersion
    SelectedPackages = @()
}

# Get deployment process template
$template = Invoke-RestMethod -Uri "$octopusURL/api/$($space.id)/deploymentprocesses/deploymentprocess-$($project.id)/template?channel=$($channel.Id)" -Headers $header

# Loop through the deployment process packages and add to release payload
$template.Packages | ForEach-Object {
    $uri = "$octopusURL/api/$($space.id)/feeds/$($_.FeedId)/packages/versions?packageId=$($_.PackageId)&take=1"
    $version = Invoke-RestMethod -Uri $uri -Method GET -Headers $header
    $version = $version.Items[0].Version
    $releaseBody.SelectedPackages += @{
        ActionName           = $_.ActionName
        PackageReferenceName = $_.PackageReferenceName
        Version              = $version
    }
}

# Create the release
$release = Invoke-RestMethod -Uri "$octopusURL/api/$($space.id)/releases" -Method POST -Headers $header -Body ($releaseBody | ConvertTo-Json -Depth 10)

# Display created release
$release
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$projectName = "MyProject"
$channelName = "default"
$releaseVersion = "1.0.0.0"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try {
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get project
    $project = $repositoryForSpace.Projects.FindByName($projectName)

    # Get channel
    $channel = $repositoryForSpace.Channels.FindOne({param($c) $c.Name -eq $channelName -and $c.ProjectId -eq $project.Id})

    # Create a new release resource
    $release = New-Object Octopus.Client.Model.ReleaseResource
    $release.ChannelId = $channel.Id
    $release.ProjectId = $project.Id
    $release.Version = $releaseVersion
    $release.SelectedPackages = New-Object 'System.Collections.Generic.List[Octopus.Client.Model.SelectedPackage]'

    # Get deployment process
    $deploymentProcess = $repositoryForSpace.DeploymentProcesses.Get($project.DeploymentProcessId)

    # Get template
    $template = $repositoryForSpace.DeploymentProcesses.GetTemplate($deploymentProcess, $channel)

    # Loop through the deployment process packages and add to release payload
    $template.Packages | ForEach-Object {
        # Get feed
        $feed = $repositoryForSpace.Feeds.Get($_.FeedId)
        $packageIds = @($_.PackageId)
        $version = ($repositoryForSpace.Feeds.GetVersions($feed, $packageIds) | Select-Object -First 1).Version

        $selectedPackage = New-Object Octopus.Client.Model.SelectedPackage
        $selectedPackage.ActionName = $_.ActionName
        $selectedPackage.PackageReferenceName = $_.PackageReferenceName
        $selectedPackage.Version = $version

        # Add to release
        $release.SelectedPackages.Add($selectedPackage)
    }

    # Create the release
    $releaseCreated = $repositoryForSpace.Releases.Create($release, $false)

    # Display created release
    $releaseCreated
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "default";
string projectName = "MyProject";
string channelName = "Default";
string releaseVersion = "1.0.0.3";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get project
    var project = repositoryForSpace.Projects.FindByName(projectName);

    // Get channel
    var channel = repositoryForSpace.Channels.FindOne(r => r.ProjectId == project.Id && r.Name == channelName);

    // Create release object
    var release = new ReleaseResource();
    release.ChannelId = channel.Id;
    release.ProjectId = project.Id;
    release.Version = releaseVersion;
    release.SelectedPackages = new List<SelectedPackage>();

    // Get deployment process
    var deploymentProcess = repositoryForSpace.DeploymentProcesses.Get(project.DeploymentProcessId);

    // Get template
    var template = repositoryForSpace.DeploymentProcesses.GetTemplate(deploymentProcess, channel);

    // Loop through the deployment process packages and add to release payload
    foreach (var package in template.Packages)
    {
        // Get feed
        var feed = repositoryForSpace.Feeds.Get(package.FeedId);
        var packageVersion = repositoryForSpace.Feeds.GetVersions(feed, new[] { package.PackageId }).First().Version;

        // Create selected package object
        var selectedPackage = new SelectedPackage();
        selectedPackage.ActionName = package.ActionName;
        selectedPackage.PackageReferenceName = package.PackageReferenceName;
        selectedPackage.Version = packageVersion;

        // Add to release
        release.SelectedPackages.Add(selectedPackage);
    }

    // Create release
    var releaseCreated = repositoryForSpace.Releases.Create(release, false);
    Console.WriteLine("Created release with version: {0}", releaseCreated.Version);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json

import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # Return results
    return items

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = 'Default'
project_name = 'MyProject'
environment_name = 'Development'
channel_name = 'Default'
release_version = '1.0.0.0'

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get project
uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)
project = next((x for x in projects if x['Name'] == project_name), None)

# Get channel
uri = '{0}/api/{1}/projects/{2}/channels'.format(octopus_server_uri, space['Id'], project['Id'])
channels = get_octopus_resource(uri, headers)
channel = next((x for x in channels if x['Name'] == channel_name), None)

# Get environment
uri = '{0}/api/{1}/environments'.format(octopus_server_uri, space['Id'])
environments = get_octopus_resource(uri, headers)
environment = next((x for x in environments if x['Name'] == environment_name), None)

# Get project template
uri = '{0}/api/{1}/deploymentprocesses/deploymentprocess-{2}/template?channel={3}'.format(octopus_server_uri, space['Id'], project['Id'], channel['Id'])
template = get_octopus_resource(uri, headers)

# Create release JSON
releaseJson = {
    'ChannelId': channel['Id'],
    'ProjectId': project['Id'],
    'Version': release_version,
    'SelectedPackages': []
}

# Select packages for process
for package in template['Packages']:
    uri = '{0}/api/{1}/feeds/{2}/packages/versions?packageId={3}&take=1'.format(octopus_server_uri, space['Id'], package['FeedId'], package['PackageId'])
    selectedPackage = get_octopus_resource(uri, headers)[0] # Only one result is returned so using index 0
    selectedPackageJson = {
        'ActionName': package['ActionName'],
        'PackageReferenceName': package['PackageReferenceName'],
        'Version': selectedPackage['Version']
    }
    releaseJson['SelectedPackages'].append(selectedPackageJson)

# Create release
uri = '{0}/api/{1}/releases'.format(octopus_server_uri, space['Id'])
response = requests.post(uri, headers=headers, json=releaseJson)
response.raise_for_status()

# Get results of API call
release = json.loads(response.content.decode('utf-8'))
```
Go

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	channelName := "Default"
	environmentName := "Development"
	projectName := "MyProject"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get project
	project := GetProject(apiURL, APIKey, space, projectName)

	// Get channel
	channel := GetChannel(client, project, channelName)

	// Get template
	template := GetDeploymentProcessTemplate(apiURL, APIKey, space, project, channel)

	// Get environment (not required to create the release; discarded to keep the compiler happy)
	_ = GetEnvironment(apiURL, APIKey, space, environmentName)

	releaseVersion := ""

	// Check to see if the next version increment property is nil
	if nil == template["NextVersionIncrement"] {
		// Project uses a package instead of a template, get the latest version of the package
		deploymentProcess, err := client.DeploymentProcesses.GetByID(project.DeploymentProcessID)
		if err != nil {
			log.Println(err)
		}

		versionNumberFound := false
		for i := 0; i < len(deploymentProcess.Steps); i++ {
			if deploymentProcess.Steps[i].Name == template["VersioningPackageStepName"].(string) {
				for j := 0; j < len(deploymentProcess.Steps[i].Actions); j++ {
					releasePackage := deploymentProcess.Steps[i].Actions[j].Packages[0]
					releaseVersion = GetPackageVersion(apiURL, APIKey, space, releasePackage.FeedID, releasePackage.PackageID)
					versionNumberFound = true
					break
				}
			}
			if versionNumberFound {
				break
			}
		}
	} else {
		releaseVersion = template["NextVersionIncrement"].(string)
	}

	// Create new release object
	release := octopusdeploy.NewRelease(channel.ID, project.ID, releaseVersion)

	// Get packages for release
	packages := template["Packages"].([]interface{})
	for i := 0; i < len(packages); i++ {
		// Get selected package map
		packageMap := packages[i].(map[string]interface{})
		version := GetPackageVersion(apiURL, APIKey, space, packageMap["FeedId"].(string), packageMap["PackageId"].(string))

		// Create selected package object
		selectedPackage := octopusdeploy.SelectedPackage{
			ActionName:           packageMap["ActionName"].(string),
			PackageReferenceName: packageMap["PackageReferenceName"].(string),
			Version:              version,
		}

		// Add selected package
		release.SelectedPackages = append(release.SelectedPackages, &selectedPackage)
	}

	// Create release
	release, err = client.Releases.Add(release)
	if err != nil {
		log.Println(err)
	}
	fmt.Println("Created release version: " + release.Version)
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}

func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project {
	// Create client
	client := octopusAuth(octopusURL, APIKey, space.ID)
	projectsQuery := octopusdeploy.ProjectsQuery{
		Name: projectName,
	}

	// Get specific project object
	projects, err := client.Projects.Get(projectsQuery)
	if err != nil {
		log.Println(err)
	}

	for _, project := range projects.Items {
		if project.Name == projectName {
			return project
		}
	}
	return nil
}

func GetEnvironment(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, environmentName string) *octopusdeploy.Environment {
	// Get client for space
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Get environment
	environmentsQuery := octopusdeploy.EnvironmentsQuery{
		Name: environmentName,
	}
	environments, err := client.Environments.Get(environmentsQuery)
	if err != nil {
		log.Println(err)
	}

	// Loop through results
	for _, environment := range environments.Items {
		if environment.Name == environmentName {
			return environment
		}
	}
	return nil
}

func GetChannel(client *octopusdeploy.Client, project *octopusdeploy.Project, ChannelName string) *octopusdeploy.Channel {
	channelQuery := octopusdeploy.ChannelsQuery{
		PartialName: ChannelName,
		Skip:        0,
	}

	results := []*octopusdeploy.Channel{}
	for {
		// Call for results
		channels, err := client.Channels.Get(channelQuery)
		if err != nil {
			log.Println(err)
		}

		// Check returned number of items
		if len(channels.Items) == 0 {
			break
		}

		// Append items to results
		results = append(results, channels.Items...)

		// Update query
		channelQuery.Skip += len(channels.Items)
	}

	for i := 0; i < len(results); i++ {
		if results[i].ProjectID == project.ID && results[i].Name == ChannelName {
			return results[i]
		}
	}
	return nil
}

func GetDeploymentProcessTemplate(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, project *octopusdeploy.Project, channel *octopusdeploy.Channel) map[string]interface{} {
	// Build template endpoint
	templateApi := octopusURL.String() + "/api/" + space.ID + "/deploymentprocesses/deploymentprocess-" + project.ID + "/template?channel=" + channel.ID

	// Create http client
	httpClient := &http.Client{}

	// Perform request
	request, _ := http.NewRequest("GET", templateApi, nil)
	request.Header.Set("X-Octopus-ApiKey", APIKey)
	response, err := httpClient.Do(request)
	if err != nil {
		log.Println(err)
	}

	responseData, err := ioutil.ReadAll(response.Body)
	if err != nil {
		log.Println(err)
	}
	var f interface{}
	jsonErr := json.Unmarshal(responseData, &f)
	if jsonErr != nil {
		log.Println(jsonErr)
	}

	// Return the template
	return f.(map[string]interface{})
}

func GetPackageVersion(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, feedId string, packageId string) string {
	packageApi := octopusURL.String() + "/api/" + space.ID + "/feeds/" + feedId + "/packages/versions?packageId=" + packageId

	// Create http client
	httpClient := &http.Client{}

	// Perform request
	request, _ := http.NewRequest("GET", packageApi, nil)
	request.Header.Set("X-Octopus-ApiKey", APIKey)
	response, err := httpClient.Do(request)
	if err != nil {
		log.Println(err)
	}

	responseData, err := ioutil.ReadAll(response.Body)
	if err != nil {
		log.Println(err)
	}
	var f interface{}
	jsonErr := json.Unmarshal(responseData, &f)
	if jsonErr != nil {
		log.Println(jsonErr)
	}

	// Map the returned data
	packageItems := f.(map[string]interface{})

	// Items is the list of versions; the most recent version is first
	returnedItems := packageItems["Items"].([]interface{})
	mostRecentPackageVersion := returnedItems[0].(map[string]interface{})
	return mostRecentPackageVersion["Version"].(string)
}
```
Java

```java
import com.octopus.sdk.Repository;
import com.octopus.sdk.domain.Project;
import com.octopus.sdk.domain.Release;
import com.octopus.sdk.domain.Space;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;
import com.octopus.sdk.model.release.ReleaseResource;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;
import java.util.Optional;

public class CreateReleaseWithVersion {

  static final String octopusServerUrl = "http://localhost:8065";
  // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();
    final Repository repo = new Repository(client);

    final Optional<Space> space = repo.spaces().getByName("TheSpaceName");
    if (!space.isPresent()) {
      System.out.println("No space named 'TheSpaceName' exists on server");
      return;
    }

    final Optional<Project> project = space.get().projects().getByName("TheProjectName");
    if (!project.isPresent()) {
      System.out.println("No project named 'TheProjectName' exists on server");
      return;
    }

    final ReleaseResource releaseResource = new ReleaseResource("1.0", project.get().getProperties().getId());
    final Release createdRelease = space.get().releases().create(releaseResource);
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    final OctopusClient client = OctopusClientFactory.createClient(connectData);
    return client;
  }
}
```
# Delete project releases Source: https://octopus.com/docs/octopus-rest-api/examples/releases/delete-project-releases.md This script demonstrates how to programmatically delete releases for a project. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to use - Name of the project :::div{.warning} **This script will delete all releases for a given project. This operation is destructive and cannot be undone. Ensure you have a database backup and take care when running this script or one based on it** ::: ## Script
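Because the deletion cannot be undone, it can help to preview which releases would be removed before running any of the scripts below. This sketch (sample data and a hypothetical helper, not part of the documented scripts) mirrors the selection the scripts perform: keep only the releases belonging to the target project.

```python
# Hypothetical dry-run helper: select the releases the delete scripts would remove.
def releases_to_delete(releases, project_id):
    """Return the subset of releases that belong to the given project."""
    return [r for r in releases if r['ProjectId'] == project_id]

# Sample data standing in for the API response
releases = [
    {'Id': 'Releases-1', 'ProjectId': 'Projects-1', 'Version': '1.0.0'},
    {'Id': 'Releases-2', 'ProjectId': 'Projects-2', 'Version': '1.0.1'},
]

for release in releases_to_delete(releases, 'Projects-1'):
    print('Would delete {0} ({1})'.format(release['Id'], release['Version']))
```

Once the preview matches your expectations, the scripts below perform the actual DELETE calls.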
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "default"
$projectName = "MyProject"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get project
$project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName}

# Get releases for project
$releases = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/releases" -Headers $header

# Loop through list
foreach ($release in $releases.Items) {
    # Delete release
    Invoke-RestMethod -Method Delete -Uri "$octopusURL/api/$($space.Id)/releases/$($release.Id)" -Headers $header
}
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$projectName = "MyProject"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try {
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get project
    $project = $repositoryForSpace.Projects.FindByName($projectName)

    # Get releases
    $releases = $repositoryForSpace.Releases.FindMany({param($r) $r.ProjectId -eq $project.Id})

    # Loop through results
    foreach ($release in $releases) {
        # Delete release
        $repositoryForSpace.Releases.Delete($release)
    }
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string spaceName = "default";
string projectName = "MyProject";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get project
    var project = repositoryForSpace.Projects.FindByName(projectName);

    // Get releases
    var releases = repositoryForSpace.Releases.FindMany(p => p.ProjectId == project.Id);

    // Loop through results
    foreach (var release in releases)
    {
        // Delete release
        repositoryForSpace.Releases.Delete(release);
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json

import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # Return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = "Default"
project_name = "MyProject"

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get project
uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)
project = next((x for x in projects if x['Name'] == project_name), None)

# Get project releases
uri = '{0}/api/{1}/projects/{2}/releases'.format(octopus_server_uri, space['Id'], project['Id'])
releases = get_octopus_resource(uri, headers)

# Delete releases
for release in releases:
    uri = '{0}/api/{1}/releases/{2}'.format(octopus_server_uri, space['Id'], release['Id'])
    response = requests.delete(uri, headers=headers)
    response.raise_for_status()
```
Go

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"
	"strconv"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	projectName := "MyProject"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get project
	project := GetProject(apiURL, APIKey, space, projectName)

	// Get project releases
	projectReleases := GetProjectReleases(apiURL, APIKey, space, project)

	// Loop through releases
	for i := 0; i < len(projectReleases); i++ {
		projectRelease := projectReleases[i].(map[string]interface{})

		// Delete release
		fmt.Println("Deleting " + projectRelease["Id"].(string))
		client.Releases.DeleteByID(projectRelease["Id"].(string))
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}

func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project {
	// Create client
	client := octopusAuth(octopusURL, APIKey, space.ID)

	projectsQuery := octopusdeploy.ProjectsQuery{
		Name: projectName,
	}

	// Get specific project object
	projects, err := client.Projects.Get(projectsQuery)
	if err != nil {
		log.Println(err)
	}

	for _, project := range projects.Items {
		if project.Name == projectName {
			return project
		}
	}
	return nil
}

func GetProjectReleases(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, project *octopusdeploy.Project) []interface{} {
	// Define api endpoint
	projectReleasesEndpoint := octopusURL.String() + "/api/" + space.ID + "/projects/" + project.ID + "/releases"

	// Create http client
	httpClient := &http.Client{}
	skipAmount := 0

	// Make request
	request, _ := http.NewRequest("GET", projectReleasesEndpoint, nil)
	request.Header.Set("X-Octopus-ApiKey", APIKey)

	response, err := httpClient.Do(request)
	if err != nil {
		log.Println(err)
	}

	// Get response
	responseData, err := ioutil.ReadAll(response.Body)

	var releasesJson interface{}
	err = json.Unmarshal(responseData, &releasesJson)

	// Map the returned data
	returnedReleases := releasesJson.(map[string]interface{})

	// Returns the list of items, translate it to a map
	returnedItems := returnedReleases["Items"].([]interface{})

	for {
		// Check to see if there's more to get
		fltItemsPerPage := returnedReleases["ItemsPerPage"].(float64)
		itemsPerPage := int(fltItemsPerPage)

		if len(returnedReleases["Items"].([]interface{})) == itemsPerPage {
			// Increment skip amount
			skipAmount += len(returnedReleases["Items"].([]interface{}))

			// Make request for the next page
			queryString := request.URL.Query()
			queryString.Set("skip", strconv.Itoa(skipAmount))
			request.URL.RawQuery = queryString.Encode()

			response, err := httpClient.Do(request)
			if err != nil {
				log.Println(err)
			}

			responseData, err := ioutil.ReadAll(response.Body)

			var releasesJson interface{}
			err = json.Unmarshal(responseData, &releasesJson)

			returnedReleases = releasesJson.(map[string]interface{})
			returnedItems = append(returnedItems, returnedReleases["Items"].([]interface{})...)
		} else {
			break
		}
	}

	return returnedItems
}
```
# Promote a release not in the destination

Source: https://octopus.com/docs/octopus-rest-api/examples/releases/promote-release-not-in-destination.md

This script demonstrates how to programmatically find the latest successful deployment in the source and destination environments and compare their releases. If they don't match, the release from the source environment is promoted to the destination environment.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Comma separated list of projects
- Source Environment Name
- Destination Environment Name

## Script
PowerShell (REST API)

```powershell
$octopusUrl = "https://your-octopus-url"
$apiKey = "API-YOUR-KEY"
$projectNameList = "WebAPI,Web UI"
$sourceEnvironmentName = "Production"
$destinationEnvironmentName = "Staging"
$spaceName = "Default"

function Invoke-OctopusApi
{
    param (
        $octopusUrl,
        $endPoint,
        $spaceId,
        $apiKey,
        $method,
        $item
    )

    if ([string]::IsNullOrWhiteSpace($spaceId))
    {
        $url = "$octopusUrl/api/$endPoint"
    }
    else
    {
        $url = "$octopusUrl/api/$spaceId/$endPoint"
    }

    try
    {
        if ($null -ne $item)
        {
            $body = $item | ConvertTo-Json -Depth 10
            Write-Verbose $body

            Write-Host "Invoking $method $url"
            return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$apiKey" } -Body $body -ContentType 'application/json; charset=utf-8'
        }

        Write-Host "No data to post or put, calling bog standard Invoke-RestMethod for $url"
        $result = Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$apiKey" } -ContentType 'application/json; charset=utf-8'

        return $result
    }
    catch
    {
        if ($null -ne $_.Exception.Response)
        {
            if ($_.Exception.Response.StatusCode -eq 401)
            {
                Write-Error "Unauthorized error returned from $url, please verify API key and try again"
            }
            elseif ($_.Exception.Response.StatusCode -eq 403)
            {
                Write-Error "Forbidden error returned from $url, please verify API key and try again"
            }
            else
            {
                Write-Host -Message "Error calling $url $($_.Exception.Message) StatusCode: $($_.Exception.Response.StatusCode)"
            }
        }
        else
        {
            Write-Host $_.Exception
        }
    }

    Throw "There was an error calling the Octopus API please check the log for more details"
}

$spaceList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "GET" -spaceId $null -item $null -endPoint "spaces?partialName=$([uri]::EscapeDataString($spaceName))&skip=0&take=100"
$space = $spaceList.Items | Where-Object {$_.Name -eq $spaceName}
$spaceId = $space.Id
Write-Host "The space id for space name $spaceName is $spaceId"

$sourceEnvironmentList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "GET" -spaceId $spaceId -item $null -endPoint "environments?partialName=$([uri]::EscapeDataString($sourceEnvironmentName))&skip=0&take=100"
$sourceEnvironment = $sourceEnvironmentList.Items | Where-Object {$_.Name -eq $sourceEnvironmentName}
$sourceEnvironmentId = $sourceEnvironment.Id
Write-Host "The environment id for environment name $sourceEnvironmentName is $sourceEnvironmentId"

$destinationEnvironmentList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "GET" -spaceId $spaceId -item $null -endPoint "environments?partialName=$([uri]::EscapeDataString($destinationEnvironmentName))&skip=0&take=100"
$destinationEnvironment = $destinationEnvironmentList.Items | Where-Object {$_.Name -eq $destinationEnvironmentName}
$destinationEnvironmentId = $destinationEnvironment.Id
Write-Host "The environment id for environment name $destinationEnvironmentName is $destinationEnvironmentId"

$splitProjectList = $projectNameList -split ","
foreach ($projectName in $splitProjectList)
{
    $projectList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "GET" -spaceId $spaceId -item $null -endPoint "projects?partialName=$([uri]::EscapeDataString($projectName))&skip=0&take=100"
    $project = $projectList.Items | Where-Object {$_.Name -eq $projectName}
    $projectId = $project.Id
    Write-Host "The project id for project name $projectName is $projectId"

    Write-Host "I have all the Ids I need, I am going to find the most recent successful deployment now to $sourceEnvironmentName"
    $taskList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "GET" -spaceId $null -item $null -endPoint "tasks?skip=0&environment=$($sourceEnvironmentId)&project=$($projectId)&name=Deploy&states=Success&spaces=$spaceId&includeSystem=false"

    if ($taskList.Items.Count -eq 0)
    {
        Write-Host "Unable to find a successful deployment for $projectName to $sourceEnvironmentName"
        continue
    }

    $lastDeploymentTask = $taskList.Items[0]
    $deploymentId = $lastDeploymentTask.Arguments.DeploymentId
    Write-Host "The id of the last deployment for $projectName to $sourceEnvironmentName is $deploymentId"

    $deploymentDetails = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "GET" -spaceId $spaceId -item $null -endPoint "deployments/$deploymentId"
    $releaseId = $deploymentDetails.ReleaseId
    Write-Host "The release id for $deploymentId is $releaseId"

    $canPromote = $false
    Write-Host "I have all the Ids I need, I am going to find the most recent successful deployment now to $destinationEnvironmentName"
    $destinationTaskList = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "GET" -spaceId $null -item $null -endPoint "tasks?skip=0&environment=$($destinationEnvironmentId)&project=$($projectId)&name=Deploy&states=Success&spaces=$spaceId&includeSystem=false"

    if ($destinationTaskList.Items.Count -eq 0)
    {
        Write-Host "The destination has no releases, promoting."
        $canPromote = $true
    }
    else
    {
        $lastDestinationDeploymentTask = $destinationTaskList.Items[0]
        $lastDestinationDeploymentId = $lastDestinationDeploymentTask.Arguments.DeploymentId
        Write-Host "The deployment id of the last deployment for $projectName to $destinationEnvironmentName is $lastDestinationDeploymentId"

        $lastDestinationDeploymentDetails = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "GET" -spaceId $spaceId -item $null -endPoint "deployments/$lastDestinationDeploymentId"
        $lastDestinationReleaseId = $lastDestinationDeploymentDetails.ReleaseId
        Write-Host "The release id for the last deployment to the destination is $lastDestinationReleaseId"

        if ($lastDestinationReleaseId -ne $releaseId)
        {
            Write-Host "The releases on the source and destination don't match, promoting"
            $canPromote = $true
        }
        else
        {
            Write-Host "The releases match, not promoting"
        }
    }

    if ($canPromote -eq $false)
    {
        Write-Host "Nothing to promote for $projectName"
        continue
    }

    $newDeployment = @{
        EnvironmentId = $destinationEnvironmentId
        ReleaseId = $releaseId
        ExcludedMachines = @()
        ForcePackageDownload = $false
        ForcePackageRedeployment = $false
        FormValue = @{}
        QueueTime = $null
        QueueTimeExpiry = $null
        SkipActions = @()
        SpecificMachineIds = @()
        TenantId = $null
        UseGuidedFailure = $false
    }

    $newDeployment = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $apiKey -method "POST" -spaceId $spaceId -item $newDeployment -endPoint "deployments"
}
```
PowerShell (Octopus.Client)

```powershell
$ErrorActionPreference = "Stop";

# Load assembly
Add-Type -Path 'path:\to\Octopus.Client.dll'

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "Default"
$sourceEnvironmentName = "Production"
$destinationEnvironmentName = "Test"
$projectNameList = @("MyProject")

# Establish a connection
$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
$client = New-Object Octopus.Client.OctopusClient($endpoint)

# Get repository specific to space
$space = $repository.Spaces.FindByName($spaceName)
$repositoryForSpace = $client.ForSpace($space)

# Get the source environment
$sourceEnvironment = $repositoryForSpace.Environments.FindByName($sourceEnvironmentName)

# Get the destination environment
$destinationEnvironment = $repositoryForSpace.Environments.FindByName($destinationEnvironmentName)

# Loop through the projects
foreach ($name in $projectNameList)
{
    # Get project object
    $project = $repositoryForSpace.Projects.FindByName($name)
    Write-Host "The project Id for project name $name is $($project.Id)"

    Write-Host "I have all the Ids I need, I am going to find the most recent successful deployment now to $sourceEnvironmentName"

    # Get the deployment tasks associated with this space, project, and environment
    $taskList = $repositoryForSpace.Deployments.FindBy(@($project.Id), @($sourceEnvironment.Id), 0, $null).Items | Where-Object {$repositoryForSpace.Tasks.Get($_.TaskId).State -eq [Octopus.Client.Model.TaskState]::Success}

    # Check to see if any tasks were returned
    if ($taskList.Count -eq 0)
    {
        Write-Host "Unable to find a successful deployment for project $($project.Name) to $($sourceEnvironment.Name)"
        continue
    }

    # Grab the last successful deployment
    $lastDeploymentTask = $taskList[0]
    Write-Host "The id of the last deployment for $($project.Name) to $($sourceEnvironment.Name) is $($lastDeploymentTask.Id)"
    Write-Host "The release id for $($lastDeploymentTask.Id) is $($lastDeploymentTask.ReleaseId)"

    $canPromote = $false
    Write-Host "I have all the Ids I need, I am going to find the most recent successful deployment to $($destinationEnvironment.Name)"

    # Get the task list for the destination environment
    $destinationTaskList = $repositoryForSpace.Deployments.FindBy(@($project.Id), @($destinationEnvironment.Id), 0, $null).Items | Where-Object {$repositoryForSpace.Tasks.Get($_.TaskId).State -eq [Octopus.Client.Model.TaskState]::Success}

    if ($destinationTaskList.Count -eq 0)
    {
        Write-Host "The destination has no releases, promoting."
        $canPromote = $true
    }
    else
    {
        # Get the last destination deployment
        $lastDestinationDeploymentTask = $destinationTaskList[0]
        Write-Host "The deployment id of the last deployment for $($project.Name) to $($destinationEnvironment.Name) is $($lastDestinationDeploymentTask.Id)"
        Write-Host "The release id of the last deployment to the destination is $($lastDestinationDeploymentTask.ReleaseId)"

        if ($lastDestinationDeploymentTask.ReleaseId -ne $lastDeploymentTask.ReleaseId)
        {
            Write-Host "The releases on the source and destination don't match, promoting"
            $canPromote = $true
        }
        else
        {
            Write-Host "The releases match, not promoting"
        }
    }

    if ($canPromote -eq $false)
    {
        Write-Host "Nothing to promote for $($project.Name)"
        continue
    }

    # Create new deployment object
    $deployment = New-Object Octopus.Client.Model.DeploymentResource
    $deployment.EnvironmentId = $destinationEnvironment.Id
    $deployment.ReleaseId = $lastDeploymentTask.ReleaseId

    # Execute the deployment
    $repositoryForSpace.Deployments.Create($deployment)
}
```
C#

```csharp
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

var spaceName = "Default";
var sourceEnvironmentName = "Production";
var destinationEnvironmentName = "Test";
string[] projectList = new string[] { "MyProject" };

var space = repository.Spaces.FindByName(spaceName);
var repositoryForSpace = client.ForSpace(space);

// Get the source environment
var sourceEnvironment = repositoryForSpace.Environments.FindByName(sourceEnvironmentName);

// Get the destination environment
var destinationEnvironment = repositoryForSpace.Environments.FindByName(destinationEnvironmentName);

// Loop through project names
foreach (string projectName in projectList)
{
    // Get the project
    var project = repositoryForSpace.Projects.FindByName(projectName);
    Console.WriteLine(string.Format("The project id for the project name {0} is {1}", project.Name, project.Id));

    Console.WriteLine(string.Format("I have all the Ids I need, I am going to find the most recent successful deployment to {0}", sourceEnvironment.Name));

    // Get a list of deployments to the environment
    var sourceTaskList = repositoryForSpace.Deployments.FindBy(new string[] { project.Id }, new string[] { sourceEnvironment.Id }, 0, null).Items.Where(d => repositoryForSpace.Tasks.Get(d.TaskId).State == TaskState.Success).ToArray();

    if (sourceTaskList.Length == 0)
    {
        Console.WriteLine(string.Format("Unable to find a successful deployment for project {0} to {1}", project.Name, sourceEnvironment.Name));
        continue;
    }

    // Grab the latest task
    var lastSourceDeploymentTask = sourceTaskList[0];
    Console.WriteLine(string.Format("The Id of the last deployment for project {0} to {1} is {2}", project.Name, sourceEnvironment.Name, lastSourceDeploymentTask.Id));
    Console.WriteLine(string.Format("The release Id for {0} is {1}", lastSourceDeploymentTask.Id, lastSourceDeploymentTask.ReleaseId));

    bool canPromote = false;
    Console.WriteLine(string.Format("I have all the Ids I need, I am going to find the most recent successful deployment to {0}", destinationEnvironment.Name));

    // Get task list for destination
    var destinationTaskList = repositoryForSpace.Deployments.FindBy(new string[] { project.Id }, new string[] { destinationEnvironment.Id }, 0, null).Items.Where(d => repositoryForSpace.Tasks.Get(d.TaskId).State == TaskState.Success).ToArray();

    if (destinationTaskList.Length == 0)
    {
        Console.WriteLine("The destination has no releases, promoting.");
        canPromote = true;
    }
    else
    {
        // Get the last deployment to destination
        var lastDestinationDeploymentTask = destinationTaskList[0];
        Console.WriteLine(string.Format("The deployment Id of the last deployment for {0} to {1} is {2}", project.Name, destinationEnvironment.Name, lastDestinationDeploymentTask.Id));
        Console.WriteLine(string.Format("The release Id of the last deployment to the destination is {0}", lastDestinationDeploymentTask.ReleaseId));

        if (lastSourceDeploymentTask.ReleaseId != lastDestinationDeploymentTask.ReleaseId)
        {
            Console.WriteLine("The releases on the source and destination don't match, promoting");
            canPromote = true;
        }
        else
        {
            Console.WriteLine("The releases match, not promoting");
        }
    }

    if (!canPromote)
    {
        Console.WriteLine(string.Format("Nothing to promote for {0}", project.Name));
        continue;
    }

    // Create new deployment object
    var deployment = new Octopus.Client.Model.DeploymentResource();
    deployment.EnvironmentId = destinationEnvironment.Id;
    deployment.ReleaseId = lastSourceDeploymentTask.ReleaseId;

    // Queue the deployment
    repositoryForSpace.Deployments.Create(deployment);
}
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if hasattr(results, 'keys') and 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)

    else:
        return results

    # Return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

space_name = 'Default'
source_environment_name = 'Production'
destination_environment_name = 'Test'
project_name_list = ['MyProject']

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get source and destination environments
uri = '{0}/api/{1}/environments'.format(octopus_server_uri, space['Id'])
environments = get_octopus_resource(uri, headers)
source_environment = next((x for x in environments if x['Name'] == source_environment_name), None)
destination_environment = next((x for x in environments if x['Name'] == destination_environment_name), None)

print('The space Id for the space name {0} is {1}'.format(space['Name'], space['Id']))
print('The environment Id for the environment {0} is {1}'.format(source_environment['Name'], source_environment['Id']))
print('The environment Id for the environment {0} is {1}'.format(destination_environment['Name'], destination_environment['Id']))

# Get all projects
uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)

# Loop through projects
for project_name in project_name_list:
    # Get the project
    project = next((x for x in projects if x['Name'] == project_name), None)
    print('The project Id for project name {0} is {1}'.format(project['Name'], project['Id']))

    print('I have all the Ids I need, I am going to find the most recent successful deployment to {0}'.format(source_environment['Name']))
    uri = '{0}/api/tasks?environment={1}&project={2}&name=Deploy&states=Success&spaces={3}&includesystem=false'.format(octopus_server_uri, source_environment['Id'], project['Id'], space['Id'])
    source_task_list = get_octopus_resource(uri, headers)

    if len(source_task_list) == 0:
        print('Unable to find a successful deployment for {0} to {1}'.format(project['Name'], source_environment['Name']))
        continue

    # Get last deployment task
    last_source_deployment_task = source_task_list[0]
    last_source_deployment_id = last_source_deployment_task['Arguments']['DeploymentId']
    print('The Id of the last deployment for {0} to {1} is {2}'.format(project['Name'], source_environment['Name'], last_source_deployment_id))

    # Get deployment details
    uri = '{0}/api/{1}/deployments/{2}'.format(octopus_server_uri, space['Id'], last_source_deployment_id)
    last_source_deployment = get_octopus_resource(uri, headers)
    last_source_release_id = last_source_deployment['ReleaseId']
    print('The release Id for {0} is {1}'.format(last_source_deployment_id, last_source_release_id))

    can_promote = False
    print('I have all the Ids I need, I am going to find the most recent successful deployment to {0}'.format(destination_environment['Name']))
    uri = '{0}/api/tasks?environment={1}&project={2}&name=Deploy&states=Success&spaces={3}&includesystem=false'.format(octopus_server_uri, destination_environment['Id'], project['Id'], space['Id'])
    destination_task_list = get_octopus_resource(uri, headers)

    if len(destination_task_list) == 0:
        print('The destination has no releases, promoting')
        can_promote = True
    else:
        last_destination_deployment_task = destination_task_list[0]
        last_destination_deployment_id = last_destination_deployment_task['Arguments']['DeploymentId']
        print('The deployment Id of the last deployment for {0} to {1} is {2}'.format(project['Name'], destination_environment['Name'], last_destination_deployment_id))

        # Get deployment details
        uri = '{0}/api/{1}/deployments/{2}'.format(octopus_server_uri, space['Id'], last_destination_deployment_id)
        last_destination_deployment = get_octopus_resource(uri, headers)
        last_destination_release_id = last_destination_deployment['ReleaseId']
        print('The release Id for the last deployment to the destination is {0}'.format(last_destination_release_id))

        if last_destination_release_id != last_source_release_id:
            print('The releases on the source and destination do not match, promoting')
            can_promote = True
        else:
            print('The releases match, not promoting')

    if not can_promote:
        print('Nothing to promote for {0}'.format(project['Name']))
        continue

    # Create deployment object
    new_deployment = {
        'EnvironmentId': destination_environment['Id'],
        'ReleaseId': last_source_release_id
    }

    # Post deployment
    uri = '{0}/api/{1}/deployments'.format(octopus_server_uri, space['Id'])
    response = requests.post(uri, headers=headers, json=new_deployment)
    response.raise_for_status()
```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	projectNames := []string{"MyProject"}
	sourceEnvironmentName := "Production"
	destinationEnvironmentName := "Test"

	// Get space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get source and destination environments
	sourceEnvironment := GetEnvironment(apiURL, APIKey, space, sourceEnvironmentName)
	destinationEnvironment := GetEnvironment(apiURL, APIKey, space, destinationEnvironmentName)

	// Loop through projects
	for _, projectName := range projectNames {
		// Get the project
		project := GetProject(apiURL, APIKey, space, projectName)
		fmt.Printf("The project Id for project name %[1]s is %[2]s \n", project.Name, project.ID)

		fmt.Printf("I have all the Ids I need, I am going to find the most recent successful deployment to %[1]s \n", sourceEnvironment.Name)

		// Get task list
		taskQuery := octopusdeploy.TasksQuery{
			Environment: sourceEnvironment.ID,
			Project:     project.ID,
			States:      []string{"Success"},
			Spaces:      []string{space.ID},
		}

		sourceTaskList, err := client.Tasks.Get(taskQuery)
		if err != nil {
			log.Println(err)
		}

		if len(sourceTaskList.Items) == 0 {
			fmt.Printf("Unable to find a successful deployment for project %[1]s to %[2]s \n", project.Name, sourceEnvironment.Name)
			continue
		}

		latestSourceDeploymentTask := sourceTaskList.Items[0]
		latestSourceDeploymentId := latestSourceDeploymentTask.Arguments["DeploymentId"].(string)
		fmt.Printf("The Id of the last deployment for project %[1]s to %[2]s is %[3]s \n", project.Name, sourceEnvironment.Name, latestSourceDeploymentId)

		latestSourceDeployment, err := client.Deployments.GetByID(latestSourceDeploymentId)
		if err != nil {
			log.Println(err)
		}
		fmt.Printf("The release Id for %[1]s is %[2]s \n", latestSourceDeployment.ID, latestSourceDeployment.ReleaseID)

		canPromote := false
		fmt.Printf("I have all the Ids I need, I am going to find the most recent successful deployment to %[1]s \n", destinationEnvironment.Name)

		// Get destination task list
		taskQuery.Environment = destinationEnvironment.ID
		destinationTaskList, err := client.Tasks.Get(taskQuery)
		if err != nil {
			log.Println(err)
		}

		if len(destinationTaskList.Items) == 0 {
			fmt.Printf("The destination has no releases, promoting \n")
			canPromote = true
		} else {
			// Get the latest task
			latestDestinationDeploymentTask := destinationTaskList.Items[0]
			latestDestinationDeploymentId := latestDestinationDeploymentTask.Arguments["DeploymentId"].(string)
			fmt.Printf("The Id of the last deployment for project %[1]s to %[2]s is %[3]s \n", project.Name, destinationEnvironment.Name, latestDestinationDeploymentId)

			latestDestinationDeployment, err := client.Deployments.GetByID(latestDestinationDeploymentId)
			if err != nil {
				log.Println(err)
			}
			fmt.Printf("The release Id for %[1]s is %[2]s \n", latestDestinationDeployment.ID, latestDestinationDeployment.ReleaseID)

			if latestDestinationDeployment.ReleaseID != latestSourceDeployment.ReleaseID {
				fmt.Printf("The releases on the source and destination do not match, promoting \n")
				canPromote = true
			} else {
				fmt.Printf("The releases match, not promoting \n")
			}
		}

		if !canPromote {
			fmt.Printf("Nothing to promote for project %[1]s \n", project.Name)
			continue
		}

		deployment := octopusdeploy.NewDeployment(destinationEnvironment.ID, latestSourceDeployment.ReleaseID)
		client.Deployments.Add(deployment)
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}

func GetEnvironment(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, environmentName string) *octopusdeploy.Environment {
	// Get client for space
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Get environment
	environmentsQuery := octopusdeploy.EnvironmentsQuery{
		Name: environmentName,
	}

	environments, err := client.Environments.Get(environmentsQuery)
	if err != nil {
		log.Println(err)
	}

	// Loop through results
	for _, environment := range environments.Items {
		if environment.Name == environmentName {
			return environment
		}
	}
	return nil
}

func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project {
	// Create client
	client := octopusAuth(octopusURL, APIKey, space.ID)

	projectsQuery := octopusdeploy.ProjectsQuery{
		Name: projectName,
	}

	// Get specific project object
	projects, err := client.Projects.Get(projectsQuery)
	if err != nil {
		log.Println(err)
	}

	for _, project := range projects.Items {
		if project.Name == projectName {
			return project
		}
	}
	return nil
}
```
# Update release variable snapshot

Source: https://octopus.com/docs/octopus-rest-api/examples/releases/update-release-variable-snapshot.md

This script demonstrates how to programmatically update the variable snapshot for a release.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the project
- Name of the channel
- Version of the release

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$projectName = "MyProject"
$releaseVersion = "1.0.0.0"
$channelName = "Default"
$spaceName = "default"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get project
$project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName}

# Get channel
$channel = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/channels" -Headers $header).Items | Where-Object {$_.Name -eq $channelName}

# Get release
$release = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/releases" -Headers $header).Items | Where-Object {$_.Version -eq $releaseVersion -and $_.ChannelId -eq $channel.Id}

# Update the variable snapshot
Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/releases/$($release.Id)/snapshot-variables" -Headers $header
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$projectName = "MyProject"
$channelName = "default"
$releaseVersion = "1.0.0.0"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try
{
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get project
    $project = $repositoryForSpace.Projects.FindByName($projectName)

    # Get channel
    $channel = $repositoryForSpace.Channels.FindOne({param($c) $c.Name -eq $channelName -and $c.ProjectId -eq $project.Id})

    # Get the release
    $release = $repositoryForSpace.Releases.FindOne({param($r) $r.ChannelId -eq $channel.Id -and $r.ProjectId -eq $project.Id -and $r.Version -eq $releaseVersion})

    # Get new variable snapshot
    $repositoryForSpace.Releases.SnapshotVariables($release)
}
catch
{
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "default";
string projectName = "MyProject";
string channelName = "Default";
string releaseVersion = "1.0.0.0";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get project
    var project = repositoryForSpace.Projects.FindByName(projectName);

    // Get channel
    var channel = repositoryForSpace.Channels.FindOne(r => r.ProjectId == project.Id && r.Name == channelName);

    // Get release
    var release = repositoryForSpace.Releases.FindOne(r => r.ProjectId == project.Id && r.ChannelId == channel.Id && r.Version == releaseVersion);

    // Update variable snapshot
    repositoryForSpace.Releases.SnapshotVariables(release);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
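Python3 is the one language missing from the examples on this page. Below is a rough Python3 equivalent, a sketch rather than an official sample: it assumes the same REST endpoints used by the PowerShell (REST API) version above, and the `find_by_name` and `update_release_variable_snapshot` helper names are our own.

```python
import requests

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

def find_by_name(items, name):
    # Return the first resource whose Name matches exactly, or None
    return next((x for x in items if x['Name'] == name), None)

def get_json(uri):
    # GET a resource and return the parsed JSON body
    response = requests.get(uri, headers=headers)
    response.raise_for_status()
    return response.json()

def update_release_variable_snapshot(space_name, project_name, channel_name, release_version):
    # Get space
    space = find_by_name(get_json('{0}/api/spaces/all'.format(octopus_server_uri)), space_name)

    # Get project
    project = find_by_name(get_json('{0}/api/{1}/projects/all'.format(octopus_server_uri, space['Id'])), project_name)

    # Get channel
    channels = get_json('{0}/api/{1}/projects/{2}/channels'.format(octopus_server_uri, space['Id'], project['Id']))
    channel = find_by_name(channels['Items'], channel_name)

    # Get the release matching the version and channel
    releases = get_json('{0}/api/{1}/projects/{2}/releases'.format(octopus_server_uri, space['Id'], project['Id']))
    release = next((r for r in releases['Items']
                    if r['Version'] == release_version and r['ChannelId'] == channel['Id']), None)

    # Update the variable snapshot
    uri = '{0}/api/{1}/releases/{2}/snapshot-variables'.format(octopus_server_uri, space['Id'], release['Id'])
    response = requests.post(uri, headers=headers)
    response.raise_for_status()

# Example invocation (uncomment after filling in the URL and API key above):
# update_release_variable_snapshot('default', 'MyProject', 'Default', '1.0.0.0')
```

As with the other Python samples in this section, pagination is ignored here for brevity; for projects with many releases you would page through the `releases` endpoint with the `skip` parameter before matching on version.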
# Deployments Per Deployment Target Role Report Source: https://octopus.com/docs/octopus-rest-api/examples/reports/deployments-per-target-role-report.md The Octopus Web Portal lets you see which deployments have gone out to a specific deployment target, but it doesn't provide a list of deployments across all the deployment targets in a role. This script demonstrates how to generate such a report. :::figure ![Sample deployments per target role](/docs/img/octopus-rest-api/examples/reports/images/deployments-per-target-role-report.png) ::: **Please note:** The report is generated as a CSV file; formatting was added to the screenshot to make it easier to read. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Report Path - Space Name - Target Role - Days to Query
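Stripped of its helper functions, the report below rests on two REST calls: list the machines in a role, then list each machine's recent tasks. A minimal sketch of that core loop (the URL, API key, space ID, and role are placeholders):

```powershell
$octopusUrl = "https://your-octopus-url"                 # placeholder
$headers    = @{ "X-Octopus-ApiKey" = "API-YOUR-KEY" }   # placeholder
$spaceId    = "Spaces-1"                                 # placeholder
$targetRole = "hello-world"                              # placeholder

# List every deployment target tagged with the role
$targets = Invoke-RestMethod -Uri "$octopusUrl/api/$spaceId/machines?roles=$targetRole&skip=0&take=10000" -Headers $headers

foreach ($target in $targets.Items) {
    # Each task records its state, queue time, and (for deployments) a DeploymentId argument
    $tasks = Invoke-RestMethod -Uri "$octopusUrl/api/$spaceId/machines/$($target.Id)/tasks" -Headers $headers
    Write-Host "$($target.Name): $($tasks.Items.Count) recent tasks"
}
```

The full script below adds caching, error handling, date filtering, and CSV output on top of these same two endpoints.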
PowerShell (REST API) ```powershell $octopusUrl = "https://your-octopus-url" $octopusApiKey = "API-YOUR-KEY" $reportPath = "./Report.csv" $spaceName = "Default" $targetRole = "hello-world" $daysToQuery = 365 $cachedResults = @{} function Write-OctopusVerbose { param($message) Write-Host $message } function Write-OctopusInformation { param($message) Write-Host $message } function Write-OctopusSuccess { param($message) Write-Host $message } function Write-OctopusWarning { param($message) Write-Warning "$message" } function Write-OctopusCritical { param ($message) Write-Error "$message" } function Invoke-OctopusApi { param ( $octopusUrl, $endPoint, $spaceId, $apiKey, $method, $item, $ignoreCache ) $octopusUrlToUse = $OctopusUrl if ($OctopusUrl.EndsWith("/")) { $octopusUrlToUse = $OctopusUrl.Substring(0, $OctopusUrl.Length - 1) } if ([string]::IsNullOrWhiteSpace($SpaceId)) { $url = "$octopusUrlToUse/api/$EndPoint" } else { $url = "$octopusUrlToUse/api/$spaceId/$EndPoint" } try { if ($null -ne $item) { $body = $item | ConvertTo-Json -Depth 10 Write-OctopusVerbose $body Write-OctopusInformation "Invoking $method $url" return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -Body $body -ContentType 'application/json; charset=utf-8' } if (($null -eq $ignoreCache -or $ignoreCache -eq $false) -and $method.ToUpper().Trim() -eq "GET") { Write-OctopusVerbose "Checking to see if $url is already in the cache" if ($cachedResults.ContainsKey($url) -eq $true) { Write-OctopusVerbose "$url is already in the cache, returning the result" return $cachedResults[$url] } } else { Write-OctopusVerbose "Ignoring cache." 
} Write-OctopusVerbose "No data to post or put, calling bog standard Invoke-RestMethod for $url" $result = Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -ContentType 'application/json; charset=utf-8' if ($cachedResults.ContainsKey($url) -eq $true) { $cachedResults.Remove($url) } Write-OctopusVerbose "Adding $url to the cache" $cachedResults.add($url, $result) return $result } catch { if ($null -ne $_.Exception.Response) { if ($_.Exception.Response.StatusCode -eq 401) { Write-OctopusCritical "Unauthorized error returned from $url, please verify API key and try again" } elseif ($_.Exception.Response.statusCode -eq 403) { Write-OctopusCritical "Forbidden error returned from $url, please verify API key and try again" } else { Write-OctopusVerbose -Message "Error calling $url $($_.Exception.Message) StatusCode: $($_.Exception.Response.StatusCode )" } } else { Write-OctopusVerbose $_.Exception } } Throw "There was an error calling the Octopus API please check the log for more details" } function Get-OctopusItemList { param( $itemType, $endpoint, $spaceId, $octopusUrl, $octopusApiKey ) if ($null -ne $spaceId) { Write-OctopusVerbose "Pulling back all the $itemType in $spaceId" } else { Write-OctopusVerbose "Pulling back all the $itemType for the entire instance" } if ($endPoint -match "\?+") { $endpointWithParams = "$($endPoint)&skip=0&take=10000" } else { $endpointWithParams = "$($endPoint)?skip=0&take=10000" } $itemList = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint $endpointWithParams -spaceId $spaceId -apiKey $octopusApiKey -method "GET" if ($itemList -is [array]) { Write-OctopusVerbose "Found $($itemList.Length) $itemType." return ,$itemList } else { Write-OctopusVerbose "Found $($itemList.Items.Length) $itemType." 
return ,$itemList.Items } } function Get-OctopusItemByName { param ( $itemType, $itemName, $endPoint, $spaceId, $octopusUrl, $octopusApiKey ) $itemList = Get-OctopusItemList -endpoint "$($endpoint)?partialName=$([uri]::EscapeDataString($itemName))" -itemType $itemType -spaceId $spaceId -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $filteredItem = $itemList | Where-Object { $_.Name.ToLower().Trim() -eq $itemName.ToLower().Trim() } if ($null -eq $filteredItem) { Write-OctopusInformation "Unable to find the $itemType $itemName" exit 1 } return $filteredItem } function Test-OctopusObjectHasProperty { param( $objectToTest, $propertyName ) $hasProperty = Get-Member -InputObject $objectToTest -Name $propertyName -MemberType Properties if ($hasProperty) { Write-OctopusVerbose "$propertyName property found." return $true } else { Write-OctopusVerbose "$propertyName property missing." return $false } } $space = Get-OctopusItemByName -itemType "Space" -endPoint "spaces" -itemName $spaceName -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $environmentList = Get-OctopusItemList -itemType "Environments" -endpoint "environments" -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $projectList = Get-OctopusItemList -itemType "Projects" -endpoint "projects" -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $deploymentTargetList = Get-OctopusItemList -itemType "DeploymentTargets" -endpoint "machines?roles=$($targetRole)" -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $deploymentTargetsDeployments = @() $minDate = Get-Date $minDate = $minDate.AddDays(($daysToQuery * -1)) Write-Host "The minimum date allowed is: $minDate" foreach ($deploymentTarget in $deploymentTargetList) { $taskList = Get-OctopusItemList -itemType "Deployment Target Deployments" -endpoint "machines/$($deploymentTarget.Id)/tasks" -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey 
foreach ($task in $taskList) { if ($task.QueueTime -lt $minDate) { break } if ((Test-OctopusObjectHasProperty -propertyName "DeploymentId" -objectToTest $task.Arguments) -eq $false) { continue } $deployment = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint "deployments/$($task.Arguments.DeploymentId)" -spaceId $($space.Id) -apiKey $octopusApiKey -method "GET" $environment = $environmentList | Where-Object { $_.Id -eq $deployment.EnvironmentId } $project = $projectList | Where-Object { $_.Id -eq $deployment.ProjectId } $release = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint "releases/$($deployment.ReleaseId)" -spaceId $($space.Id) -apiKey $octopusApiKey -method "GET" $deploymentTargetsDeployments += @{ DeploymentTargetName = $deploymentTarget.Name DeploymentTargetId = $deploymentTarget.Id DeploymentState = $task.State Environment = $environment.Name Project = $project.Name ReleaseVersion = $release.Version QueuedTime = $task.QueueTime } } } if (Test-Path $reportPath) { Remove-Item $reportPath } New-Item $reportPath -ItemType File Add-Content -Path $reportPath -Value "Machine Name,Environment Name,Project Name,Release Version,Deployment State,Queue DateTime,Machine Id" foreach ($deployedToMachine in $deploymentTargetsDeployments) { Add-Content -Path $reportPath -Value "$($deployedToMachine.DeploymentTargetName),$($deployedToMachine.Environment),$($deployedToMachine.Project),$($deployedToMachine.ReleaseVersion),$($deployedToMachine.DeploymentState),$($deployedToMachine.QueuedTime),$($deployedToMachine.DeploymentTargetId)" } ```
# Environment permissions report Source: https://octopus.com/docs/octopus-rest-api/examples/reports/environment-permissions-report.md The Octopus Web Portal lets you see permissions from a single user's point of view. This script demonstrates how to generate a report for a specific permission across specific environments, for example: which users have permission to deploy to **Production**? The report looks for teams scoped to a role with the specified environment (Production) or with no environment scoping at all. If a user is on a team scoped to the `Deployment Creator` role with no environments, that user appears in the report with an environment scoping of **All**, because they can deploy to **Production**. :::figure ![Sample environment permissions report](/docs/img/octopus-rest-api/examples/reports/images/environment-permissions-example.png) ::: **Please note:** The report is generated as a CSV file; formatting was added to the screenshot to make it easier to read. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Report Path - Space Filter - Environment Filter - User Filter - Permission Name The filters let you choose which space(s), environment(s), and user(s) to generate the report for. They all behave the same way: - `all` returns results for every space/environment/user. - A wildcard pattern (using `*`) returns everything matching the wildcard search. - A specific name returns only exact matches. The filters also support comma-separated entries; setting the Environment Filter to `Test,Prod*` finds all environments named `Test` or starting with `Prod`.
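The filter matching described above maps onto PowerShell's `-like` wildcard operator, which is what the script's `New-OctopusFilteredList` function uses internally. A minimal sketch of the semantics (the environment names below are illustrative only):

```powershell
# Illustrative environment names; the real script pulls these from the Octopus API
$environments = @("Test", "Production", "Prod-EU", "Staging")

# Comma-separated filter string, split into individual patterns
$filters = "Test,Prod*" -split ","

$matched = $environments | Where-Object {
    $name = $_
    # Keep the item if any pattern is "all" or matches via wildcard comparison
    ($filters | Where-Object { $_ -eq "all" -or $name -like $_ }).Count -gt 0
}
# $matched contains Test, Production, and Prod-EU; Staging matches neither pattern
```

Because `-like` with no `*` is an exact (case-insensitive) comparison, a plain name like `Test` only matches that environment, while `Prod*` sweeps up everything with that prefix.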
PowerShell (REST API) ```powershell $octopusUrl = "https://your-octopus-url" $octopusApiKey = "API-YOUR-KEY" $reportPath = "./Report.csv" $spaceFilter = "Permissions" # Supports "all" for everything, wild cards "hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly $environmentFilter = "Production" # Supports "all" for everything, wild cards "hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly $userFilter = "all" # Supports "all" for everything, wild cards "hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly $permissionToCheck = "DeploymentCreate" $cachedResults = @{} function Write-OctopusVerbose { param($message) Write-Host $message } function Write-OctopusInformation { param($message) Write-Host $message } function Write-OctopusSuccess { param($message) Write-Host $message } function Write-OctopusWarning { param($message) Write-Warning "$message" } function Write-OctopusCritical { param ($message) Write-Error "$message" } function Invoke-OctopusApi { param ( $octopusUrl, $endPoint, $spaceId, $apiKey, $method, $item, $ignoreCache ) $octopusUrlToUse = $OctopusUrl if ($OctopusUrl.EndsWith("/")) { $octopusUrlToUse = $OctopusUrl.Substring(0, $OctopusUrl.Length - 1) } if ([string]::IsNullOrWhiteSpace($SpaceId)) { $url = "$octopusUrlToUse/api/$EndPoint" } else { $url = "$octopusUrlToUse/api/$spaceId/$EndPoint" } try { if ($null -ne $item) { $body = $item | ConvertTo-Json -Depth 10 Write-OctopusVerbose $body Write-OctopusInformation "Invoking $method $url" return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -Body $body -ContentType 
'application/json; charset=utf-8' } if (($null -eq $ignoreCache -or $ignoreCache -eq $false) -and $method.ToUpper().Trim() -eq "GET") { Write-OctopusVerbose "Checking to see if $url is already in the cache" if ($cachedResults.ContainsKey($url) -eq $true) { Write-OctopusVerbose "$url is already in the cache, returning the result" return $cachedResults[$url] } } else { Write-OctopusVerbose "Ignoring cache." } Write-OctopusVerbose "No data to post or put, calling bog standard Invoke-RestMethod for $url" $result = Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -ContentType 'application/json; charset=utf-8' if ($cachedResults.ContainsKey($url) -eq $true) { $cachedResults.Remove($url) } Write-OctopusVerbose "Adding $url to the cache" $cachedResults.add($url, $result) return $result } catch { if ($null -ne $_.Exception.Response) { if ($_.Exception.Response.StatusCode -eq 401) { Write-OctopusCritical "Unauthorized error returned from $url, please verify API key and try again" } elseif ($_.Exception.Response.statusCode -eq 403) { Write-OctopusCritical "Forbidden error returned from $url, please verify API key and try again" } else { Write-OctopusVerbose -Message "Error calling $url $($_.Exception.Message) StatusCode: $($_.Exception.Response.StatusCode )" } } else { Write-OctopusVerbose $_.Exception } } Throw "There was an error calling the Octopus API please check the log for more details" } function Get-OctopusItemList { param( $itemType, $endpoint, $spaceId, $octopusUrl, $octopusApiKey ) if ($null -ne $spaceId) { Write-OctopusVerbose "Pulling back all the $itemType in $spaceId" } else { Write-OctopusVerbose "Pulling back all the $itemType for the entire instance" } if ($endPoint -match "\?+") { $endpointWithParams = "$($endPoint)&skip=0&take=10000" } else { $endpointWithParams = "$($endPoint)?skip=0&take=10000" } $itemList = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint $endpointWithParams -spaceId $spaceId -apiKey 
$octopusApiKey -method "GET" if ($itemList -is [array]) { Write-OctopusVerbose "Found $($itemList.Length) $itemType." return $itemList } else { Write-OctopusVerbose "Found $($itemList.Items.Length) $itemType." return $itemList.Items } } function Test-OctopusObjectHasProperty { param( $objectToTest, $propertyName ) $hasProperty = Get-Member -InputObject $objectToTest -Name $propertyName -MemberType Properties if ($hasProperty) { Write-OctopusVerbose "$propertyName property found." return $true } else { Write-OctopusVerbose "$propertyName property missing." return $false } } function Get-UserPermission { param ( $space, $project, $userRole, $projectPermissionList, $permissionToCheck, $environmentList, $tenantList, $user, $scopedRole, $includeScope, $projectEnvironmentList ) if ($userRole.GrantedSpacePermissions -notcontains $permissionToCheck) { return $projectPermissionList } $newPermission = @{ DisplayName = $user.DisplayName UserId = $user.Id Environments = @() Tenants = @() IncludeScope = $includeScope } if ($includeScope -eq $true) { foreach ($environmentId in $scopedRole.EnvironmentIds) { if ($projectEnvironmentList -notcontains $environmentId) { Write-OctopusVerbose "The role is scoped to environment $environmentId, but the environment is not assigned to $($project.Name), excluding from this project's report" continue } $environment = $environmentList | Where-Object { $_.Id -eq $environmentId } $newPermission.Environments += @{ Id = $environment.Id Name = $environment.Name } } if ($scopedRole.EnvironmentIds.Length -gt 0 -and $newPermission.Environments.Length -le 0) { Write-OctopusVerbose "The role is scoped to environments, but none of the environments are assigned to $($project.Name). This user role does not apply to this project." 
return @($projectPermissionList) } foreach ($tenantId in $scopedRole.tenantIds) { $tenant = $tenantList | Where-Object { $_.Id -eq $tenantId } if ((Test-OctopusObjectHasProperty -objectToTest $tenant.ProjectEnvironments -propertyName $project.Id) -eq $false) { Write-OctopusVerbose "The role is scoped to tenant $($tenant.Name), but the tenant is not assigned to $($project.Name), excluding the tenant from this project's report." continue } $newPermission.Tenants += @{ Id = $tenant.Id Name = $tenant.Name } } if ($scopedRole.TenantIds.Length -gt 0 -and $newPermission.Tenants.Length -le 0) { Write-OctopusVerbose "The role is scoped to tenants, but none of the tenants are assigned to $($project.Name). This user role does not apply to this project." return @($projectPermissionList) } } $existingPermission = $projectPermissionList | Where-Object { $_.UserId -eq $newPermission.UserId } if ($null -eq $existingPermission) { Write-OctopusVerbose "This is the first time we've seen $($user.DisplayName) for this permission. Adding the permission to the list." $projectPermissionList += $newPermission return @($projectPermissionList) } if ($existingPermission.Environments.Length -eq 0 -and $existingPermission.Tenants.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has no scoping for environments or tenants for this project, they have the highest level, no need to improve it." 
return @($projectPermissionList) } if ($existingPermission.Environments.Length -gt 0 -and $newPermission.Environments.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has scoping to environments, but the new permission doesn't have any environment scoping, removing the scoping" $existingPermission.Environments = @() } elseif ($existingPermission.Environments.Length -gt 0 -and $newPermission.Environments.Length -gt 0) { foreach ($item in $newPermission.Environments) { $existingItem = $existingPermission.Environments | Where-Object { $_.Id -eq $item.Id } if ($null -eq $existingItem) { Write-OctopusVerbose "$($user.DisplayName) is not yet scoped to the environment $($item.Name), adding it." $existingPermission.Environments += $item } } } if ($existingPermission.Tenants.Length -gt 0 -and $newPermission.Tenants.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has scoping to tenants, but the new permission doesn't have any tenant scoping, removing the scoping" $existingPermission.Tenants = @() } elseif ($existingPermission.Tenants.Length -gt 0 -and $newPermission.Tenants.Length -gt 0) { foreach ($item in $newPermission.Tenants) { $existingItem = $existingPermission.Tenants | Where-Object { $_.Id -eq $item.Id } if ($null -eq $existingItem) { Write-OctopusVerbose "$($user.DisplayName) is not yet scoped to the tenant $($item.Name), adding it." 
$existingPermission.Tenants += $item } } } return @($projectPermissionList) } function Write-PermissionList { param ( $permissionName, $permissionList, $permission, $reportPath ) foreach ($permissionScope in $permissionList) { $permissionForCSV = @{ Space = $permission.SpaceName Project = $permission.Name PermissionName = $permissionName User = $permissionScope.DisplayName EnvironmentScope = "" TenantScope = "" } if ($permissionScope.IncludeScope -eq $false) { $permissionForCSV.EnvironmentScope = "N/A" $permissionForCSV.TenantScope = "N/A" } else { if ($permissionScope.Environments.Length -eq 0) { $permissionForCSV.EnvironmentScope = "All" } else { $permissionForCSV.EnvironmentScope = $($permissionScope.Environments.Name) -join ";" } if ($permissionScope.Tenants.Length -eq 0) { $permissionForCSV.TenantScope = "All" } else { $permissionForCSV.TenantScope = $($permissionScope.Tenants.Name) -join ";" } } $permissionAsString = """$($permissionForCSV.Space)"",""$($permissionForCSV.Project)"",""$($permissionForCSV.PermissionName)"",""$($permissionForCSV.User)"",""$($permissionForCSV.EnvironmentScope)"",""$($permissionForCSV.TenantScope)""" Add-Content -Path $reportPath -Value $permissionAsString } } function Get-EnvironmentsScopedToProject { param ( $project, $octopusApiKey, $octopusUrl, $spaceId ) $scopedEnvironmentList = @() $projectChannels = Get-OctopusItemList -itemType "Channels" -endpoint "projects/$($project.Id)/channels" -spaceId $spaceId -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($channel in $projectChannels) { $lifecycleId = $channel.LifecycleId if ($null -eq $lifecycleId) { $lifecycleId = $project.LifecycleId } $lifecyclePreview = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $octopusApiKey -endPoint "lifecycles/$lifecycleId/preview" -spaceId $spaceId -method "GET" foreach ($phase in $lifecyclePreview.Phases) { foreach ($environmentId in $phase.AutomaticDeploymentTargets) { if ($scopedEnvironmentList -notcontains $environmentId) { 
Write-OctopusVerbose "Adding $environmentId to $($project.Name) environment list" $scopedEnvironmentList += $environmentId } } foreach ($environmentId in $phase.OptionalDeploymentTargets) { if ($scopedEnvironmentList -notcontains $environmentId) { Write-OctopusVerbose "Adding $environmentId to $($project.Name) environment list" $scopedEnvironmentList += $environmentId } } } } return $scopedEnvironmentList } function New-OctopusFilteredList { param( $itemList, $itemType, $filters ) $filteredList = @() Write-OctopusSuccess "Creating filter list for $itemType with a filter of $filters" if ([string]::IsNullOrWhiteSpace($filters) -eq $false -and $null -ne $itemList) { $splitFilters = $filters -split "," foreach($item in $itemList) { foreach ($filter in $splitFilters) { Write-OctopusVerbose "Checking to see if $filter matches $($item.Name)" if ([string]::IsNullOrWhiteSpace($filter)) { continue } if (($filter).ToLower() -eq "all") { Write-OctopusVerbose "The filter is 'all' -> adding $($item.Name) to $itemType filtered list" $filteredList += $item } elseif ((Test-OctopusObjectHasProperty -propertyName "Name" -objectToTest $item) -and $item.Name -like $filter) { Write-OctopusVerbose "The filter $filter matches $($item.Name), adding $($item.Name) to $itemType filtered list" $filteredList += $item } elseif ((Test-OctopusObjectHasProperty -propertyName "DisplayName" -objectToTest $item) -and $item.DisplayName -like $filter) { Write-OctopusVerbose "The filter $filter matches $($item.DisplayName), adding $($item.DisplayName) to $itemType filtered list" $filteredList += $item } else { Write-OctopusVerbose "The item $($item.Name) does not match filter $filter" } } } } else { Write-OctopusWarning "The filter for $itemType was not set." 
} return $filteredList } $spaceList = Get-OctopusItemList -itemType "Spaces" -endpoint "spaces" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $spaceList = New-OctopusFilteredList -itemType "Spaces" -itemList $spaceList -filters $spaceFilter $userRolesList = Get-OctopusItemList -itemType "User Roles" -endpoint "userroles" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $userList = Get-OctopusItemList -itemType "Users" -endpoint "users" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $userList = New-OctopusFilteredList -itemType "Users" -itemList $userList -filters $userFilter $permissionsReport = @() foreach ($space in $spaceList) { $projectList = Get-OctopusItemList -itemType "Projects" -endpoint "projects" -spaceId $space.Id -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $environmentList = Get-OctopusItemList -itemType "Environments" -endpoint "environments" -spaceId $space.Id -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $environmentList = New-OctopusFilteredList -itemType "Environments" -itemList $environmentList -filters $environmentFilter $tenantList = Get-OctopusItemList -itemType "Tenants" -endpoint "tenants" -spaceId $space.Id -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($project in $projectList) { $projectPermission = @{ Name = $project.Name SpaceName = $space.Name Permissions = @() } $projectEnvironmentList = @(Get-EnvironmentsScopedToProject -octopusApiKey $octopusApiKey -octopusUrl $octopusUrl -spaceId $space.Id -project $project) foreach ($user in $userList) { $userTeamList = Get-OctopusItemList -itemType "User $($user.DisplayName) Teams" -endpoint "users/$($user.Id)/teams?spaces=$($space.Id)&includeSystem=True" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($userTeam in $userTeamList) { $scopedRolesList = Get-OctopusItemList -itemType "Team $($userTeam.Name) Scoped Roles" -endpoint "teams/$($userTeam.Id)/scopeduserroles" 
-spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($scopedRole in $scopedRolesList) { if ($scopedRole.SpaceId -ne $space.Id) { Write-OctopusVerbose "The scoped role is not for the current space, moving on to next role." continue } if ($scopedRole.ProjectIds.Length -gt 0 -and $scopedRole.ProjectIds -notcontains $project.Id -and $scopedRole.ProjectGroupIds.Length -eq 0) { Write-OctopusVerbose "The scoped role is associated with projects, but not $($project.Name), moving on to next role." continue } if ($scopedRole.ProjectGroupIds.Length -gt 0 -and $scopedRole.ProjectGroupIds -notcontains $project.ProjectGroupId -and $scopedRole.ProjectIds.Length -eq 0) { Write-OctopusVerbose "The scoped role is associated with projects groups, but not the one for $($project.Name), moving on to next role." continue } $userRole = $userRolesList | Where-Object {$_.Id -eq $scopedRole.UserRoleId} $projectPermission.Permissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.Permissions -permissionToCheck $permissionToCheck -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) } } } $permissionsReport += $projectPermission } } if (Test-Path $reportPath) { Remove-Item $reportPath } New-Item $reportPath -ItemType File Add-Content -Path $reportPath -Value "Space Name,Project Name,Permission Name,Display Name,Environment Scoping,Tenant Scoping" foreach ($permission in $permissionsReport) { Write-PermissionList -permissionName $permissionToCheck -permissionList $permission.Permissions -permission $permission -reportPath $reportPath } ```
PowerShell (Octopus.Client) ```powershell # Load assembly Add-Type -Path 'path\to\Octopus.Client.dll' $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $reportPath = "./Report.csv" $spaceFilter = "Permissions" # Supports "all" for everything, wild cards "hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly $environmentFilter = "Production" # Supports "all" for everything, wild cards "hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly $permissionToCheck = "DeploymentCreate" $cachedResults = @{} function Write-OctopusVerbose { param($message) Write-Host $message } function Write-OctopusInformation { param($message) Write-Host $message } function Write-OctopusSuccess { param($message) Write-Host $message } function Write-OctopusWarning { param($message) Write-Warning "$message" } function Write-OctopusCritical { param ($message) Write-Error "$message" } function Test-OctopusObjectHasProperty { param( $objectToTest, $propertyName ) $hasProperty = Get-Member -InputObject $objectToTest -Name $propertyName -MemberType Properties if ($hasProperty) { Write-OctopusVerbose "$propertyName property found." return $true } else { Write-OctopusVerbose "$propertyName property missing." 
return $false } } function Get-UserPermission { param ( $space, $project, $userRole, $projectPermissionList, $permissionToCheck, $environmentList, $tenantList, $user, $scopedRole, $includeScope, $projectEnvironmentList ) if ($userRole.GrantedSpacePermissions -notcontains $permissionToCheck) { return $projectPermissionList } $newPermission = @{ DisplayName = $user.DisplayName UserId = $user.Id Environments = @() Tenants = @() IncludeScope = $includeScope } if ($includeScope -eq $true) { foreach ($environmentId in $scopedRole.EnvironmentIds) { if ($projectEnvironmentList -notcontains $environmentId) { Write-OctopusVerbose "The role is scoped to environment $environmentId, but the environment is not assigned to $($project.Name), excluding from this project's report" continue } $environment = $environmentList | Where-Object { $_.Id -eq $environmentId } $newPermission.Environments += @{ Id = $environment.Id Name = $environment.Name } } if ($scopedRole.EnvironmentIds.Length -gt 0 -and $newPermission.Environments.Length -le 0) { Write-OctopusVerbose "The role is scoped to environments, but none of the environments are assigned to $($project.Name). This user role does not apply to this project." return @($projectPermissionList) } foreach ($tenantId in $scopedRole.tenantIds) { $tenant = $tenantList | Where-Object { $_.Id -eq $tenantId } if ((Test-OctopusObjectHasProperty -objectToTest $tenant.ProjectEnvironments -propertyName $project.Id) -eq $false) { Write-OctopusVerbose "The role is scoped to tenant $($tenant.Name), but the tenant is not assigned to $($project.Name), excluding the tenant from this project's report." continue } $newPermission.Tenants += @{ Id = $tenant.Id Name = $tenant.Name } } if ($scopedRole.TenantIds.Length -gt 0 -and $newPermission.Tenants.Length -le 0) { Write-OctopusVerbose "The role is scoped to tenants, but none of the tenants are assigned to $($project.Name). This user role does not apply to this project." 
return @($projectPermissionList) } } $existingPermission = $projectPermissionList | Where-Object { $_.UserId -eq $newPermission.UserId } if ($null -eq $existingPermission) { Write-OctopusVerbose "This is the first time we've seen $($user.DisplayName) for this permission. Adding the permission to the list." $projectPermissionList += $newPermission return @($projectPermissionList) } if ($existingPermission.Environments.Length -eq 0 -and $existingPermission.Tenants.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has no scoping for environments or tenants for this project, they have the highest level, no need to improve it." return @($projectPermissionList) } if ($existingPermission.Environments.Length -gt 0 -and $newPermission.Environments.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has scoping to environments, but the new permission doesn't have any environment scoping, removing the scoping" $existingPermission.Environments = @() } elseif ($existingPermission.Environments.Length -gt 0 -and $newPermission.Environments.Length -gt 0) { foreach ($item in $newPermission.Environments) { $existingItem = $existingPermission.Environments | Where-Object { $_.Id -eq $item.Id } if ($null -eq $existingItem) { Write-OctopusVerbose "$($user.DisplayName) is not yet scoped to the environment $($item.Name), adding it." 
$existingPermission.Environments += $item } } } if ($existingPermission.Tenants.Length -gt 0 -and $newPermission.Tenants.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has scoping to tenants, but the new permission doesn't have any tenant scoping, removing the scoping" $existingPermission.Tenants = @() } elseif ($existingPermission.Tenants.Length -gt 0 -and $newPermission.Tenants.Length -gt 0) { foreach ($item in $newPermission.Tenants) { $existingItem = $existingPermission.Tenants | Where-Object { $_.Id -eq $item.Id } if ($null -eq $existingItem) { Write-OctopusVerbose "$($user.DisplayName) is not yet scoped to the tenant $($item.Name), adding it." $existingPermission.Tenants += $item } } } return @($projectPermissionList) } function Write-PermissionList { param ( $permissionName, $permissionList, $permission, $reportPath ) foreach ($permissionScope in $permissionList) { $permissionForCSV = @{ Space = $permission.SpaceName Project = $permission.Name PermissionName = $permissionName User = $permissionScope.DisplayName EnvironmentScope = "" TenantScope = "" } if ($permissionScope.IncludeScope -eq $false) { $permissionForCSV.EnvironmentScope = "N/A" $permissionForCSV.TenantScope = "N/A" } else { if ($permissionScope.Environments.Length -eq 0) { $permissionForCSV.EnvironmentScope = "All" } else { $permissionForCSV.EnvironmentScope = $($permissionScope.Environments.Name) -join ";" } if ($permissionScope.Tenants.Length -eq 0) { $permissionForCSV.TenantScope = "All" } else { $permissionForCSV.TenantScope = $($permissionScope.Tenants.Name) -join ";" } } $permissionAsString = """$($permissionForCSV.Space)"",""$($permissionForCSV.Project)"",""$($permissionForCSV.PermissionName)"",""$($permissionForCSV.User)"",""$($permissionForCSV.EnvironmentScope)"",""$($permissionForCSV.TenantScope)""" Add-Content -Path $reportPath -Value $permissionAsString } } function Get-EnvironmentsScopedToProject { param ( $project, $octopusApiKey, $octopusUrl, $spaceId ) 
$scopedEnvironmentList = @() $projectChannels = $repositoryForSpace.Projects.GetAllChannels($project) foreach ($channel in $projectChannels) { $lifecycleId = $channel.LifecycleId if ($null -eq $lifecycleId) { $lifecycleId = $project.LifecycleId } $lifecyclePreview = $repositoryForSpace.Lifecycles.Get($lifeCycleId) if (($null -eq $lifecyclePreview.Phases) -or ($lifecyclePreview.Phases.Count -eq 0)) { # Lifecycle has no defined phases and uses environments, manually create the phases foreach ($environment in $repositoryForSpace.Environments.GetAll()) { $phase = New-Object Octopus.Client.Model.PhaseResource $phase.Name = $environment.Name $phase.OptionalDeploymentTargets.Add($environment.Id) $lifecyclePreview.Phases.Add($phase) } } foreach ($phase in $lifecyclePreview.Phases) { foreach ($environmentId in $phase.AutomaticDeploymentTargets) { if ($scopedEnvironmentList -notcontains $environmentId) { Write-OctopusVerbose "Adding $environmentId to $($project.Name) environment list" $scopedEnvironmentList += $environmentId } } foreach ($environmentId in $phase.OptionalDeploymentTargets) { if ($scopedEnvironmentList -notcontains $environmentId) { Write-OctopusVerbose "Adding $environmentId to $($project.Name) environment list" $scopedEnvironmentList += $environmentId } } } } return $scopedEnvironmentList } function New-OctopusFilteredList { param( $itemList, $itemType, $filters ) $filteredList = @() Write-OctopusSuccess "Creating filter list for $itemType with a filter of $filters" if ([string]::IsNullOrWhiteSpace($filters) -eq $false -and $null -ne $itemList) { $splitFilters = $filters -split "," foreach($item in $itemList) { foreach ($filter in $splitFilters) { Write-OctopusVerbose "Checking to see if $filter matches $($item.Name)" if ([string]::IsNullOrWhiteSpace($filter)) { continue } if (($filter).ToLower() -eq "all") { Write-OctopusVerbose "The filter is 'all' -> adding $($item.Name) to $itemType filtered list" $filteredList += $item } elseif 
((Test-OctopusObjectHasProperty -propertyName "Name" -objectToTest $item) -and $item.Name -like $filter) { Write-OctopusVerbose "The filter $filter matches $($item.Name), adding $($item.Name) to $itemType filtered list" $filteredList += $item } elseif ((Test-OctopusObjectHasProperty -propertyName "DisplayName" -objectToTest $item) -and $item.DisplayName -like $filter) { Write-OctopusVerbose "The filter $filter matches $($item.DisplayName), adding $($item.DisplayName) to $itemType filtered list" $filteredList += $item } else { Write-OctopusVerbose "The item $($item.Name) does not match filter $filter" } } } } else { Write-OctopusWarning "The filter for $itemType was not set." } return $filteredList } $endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey) $repository = New-Object Octopus.Client.OctopusRepository($endpoint) $client = New-Object Octopus.Client.OctopusClient($endpoint) $spaceList = $repository.Spaces.GetAll() $spaceList = New-OctopusFilteredList -itemType "Spaces" -itemList $spaceList -filters $spaceFilter $userRolesList = $repository.UserRoles.GetAll() $userList = $repository.Users.GetAll() $permissionsReport = @() foreach ($spaceName in $spaceList) { $space = $repository.Spaces.FindByName($spaceName.Name) $repositoryForSpace = $client.ForSpace($space) $projectList = $repositoryForSpace.Projects.GetAll() $environmentList = $repositoryForSpace.Environments.GetAll() $environmentList = New-OctopusFilteredList -itemType "Environments" -itemList $environmentList -filters $environmentFilter $tenantList = $repositoryForSpace.Tenants.GetAll() foreach ($project in $projectList) { $projectPermission = @{ Name = $project.Name SpaceName = $space.Name Permissions = @() } $projectEnvironmentList = @(Get-EnvironmentsScopedToProject -octopusApiKey $octopusApiKey -octopusUrl $octopusUrl -spaceId $space.Id -project $project) foreach ($user in $userList) { $allTeamList = $repository.UserTeams.Get($user) $userTeamList = @() foreach 
($teamItem in $allTeamList) { $team = $repository.Teams.Get($teamItem.Id) if (([string]::IsNullOrWhiteSpace($team.SpaceId)) -or ($team.SpaceId -eq $space.Id)) { $userTeamList += $teamItem } } foreach ($userTeam in $userTeamList) { $team = $repository.Teams.Get($userTeam.Id) $scopedRolesList = $repository.Teams.GetScopedUserRoles($team) foreach ($scopedRole in $scopedRolesList) { if ($scopedRole.SpaceId -ne $space.Id) { Write-OctopusVerbose "The scoped role is not for the current space, moving on to next role." continue } if ($scopedRole.ProjectIds.Count -gt 0 -and $scopedRole.ProjectIds -notcontains $project.Id -and $scopedRole.ProjectGroupIds.Count -eq 0) { Write-OctopusVerbose "The scoped role is associated with projects, but not $($project.Name), moving on to next role." continue } if ($scopedRole.ProjectGroupIds.Count -gt 0 -and $scopedRole.ProjectGroupIds -notcontains $project.ProjectGroupId -and $scopedRole.ProjectIds.Count -eq 0) { Write-OctopusVerbose "The scoped role is associated with project groups, but not the one for $($project.Name), moving on to next role."
continue } $userRole = $userRolesList | Where-Object {$_.Id -eq $scopedRole.UserRoleId} $projectPermission.Permissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.Permissions -permissionToCheck $permissionToCheck -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) } } } $permissionsReport += $projectPermission } } if (Test-Path $reportPath) { Remove-Item $reportPath } New-Item $reportPath -ItemType File Add-Content -Path $reportPath -Value "Space Name,Project Name,Permission Name,Display Name,Environment Scoping,Tenant Scoping" foreach ($permission in $permissionsReport) { Write-PermissionList -permissionName $permissionToCheck -permissionList $permission.Permissions -permission $permission -reportPath $reportPath } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; using System.Linq; using System.Text.RegularExpressions; class ProjectPermission { private System.Collections.Generic.List<Permission> _permissions = new System.Collections.Generic.List<Permission>(); public string Name { get; set; } public string SpaceName { get; set; } public System.Collections.Generic.List<Permission> Permissions { get { return _permissions; } set { _permissions = value; } } } class Permission { private System.Collections.Generic.List<EnvironmentResource> _environments = new System.Collections.Generic.List<EnvironmentResource>(); private System.Collections.Generic.List<TenantResource> _tenants = new System.Collections.Generic.List<TenantResource>(); public string DisplayName { get; set; } public string UserId { get; set; } public System.Collections.Generic.List<EnvironmentResource> Environments { get { return _environments; } set { _environments = value; } } public System.Collections.Generic.List<TenantResource> Tenants { get { return _tenants; } set { _tenants = value; } } public bool IncludeScope { get; set; } } static System.Collections.Generic.List<UserResource> FilterUserList(System.Collections.Generic.List<UserResource> Users, string Filter) { var filters = Filter.Split(",", StringSplitOptions.RemoveEmptyEntries); System.Collections.Generic.List<UserResource> filteredList = new System.Collections.Generic.List<UserResource>(); foreach (var userResource in Users) { // Loop through filters foreach (string filter in filters) { if (filter.ToLower() == "all") { // Add to list Console.WriteLine(string.Format("The filter is 'all' -> adding {0} to filtered list", userResource.Name)); filteredList.Add(userResource); } else if (Regex.IsMatch(userResource.Name, filter)) { Console.WriteLine(string.Format("The filter {0} matches {1}, adding {1} to filtered list", filter, userResource.Name)); filteredList.Add(userResource); } else { Console.WriteLine(string.Format("User {0} does not match filter {1}", userResource.Name, filter)); } } } return filteredList; } static
System.Collections.Generic.List<SpaceResource> FilterSpaceList(System.Collections.Generic.List<SpaceResource> Spaces, string Filter) { var filters = Filter.Split(",", StringSplitOptions.RemoveEmptyEntries); System.Collections.Generic.List<SpaceResource> filteredList = new System.Collections.Generic.List<SpaceResource>(); foreach (var spaceResource in Spaces) { // Loop through filters foreach (string filter in filters) { if (filter.ToLower() == "all") { // Add to list Console.WriteLine(string.Format("The filter is 'all' -> adding {0} to filtered list", spaceResource.Name)); filteredList.Add(spaceResource); } else if (Regex.IsMatch(spaceResource.Name, filter)) { Console.WriteLine(string.Format("The filter {0} matches {1}, adding {1} to filtered list", filter, spaceResource.Name)); filteredList.Add(spaceResource); } else { Console.WriteLine(string.Format("Space {0} does not match filter {1}", spaceResource.Name, filter)); } } } return filteredList; } static System.Collections.Generic.List<EnvironmentResource> FilterEnvironmentList(System.Collections.Generic.List<EnvironmentResource> Environments, string Filter) { var filters = Filter.Split(",", StringSplitOptions.RemoveEmptyEntries); System.Collections.Generic.List<EnvironmentResource> filteredList = new System.Collections.Generic.List<EnvironmentResource>(); foreach (var environmentResource in Environments) { // Loop through filters foreach (string filter in filters) { if (filter.ToLower() == "all") { // Add to list Console.WriteLine(string.Format("The filter is 'all' -> adding {0} to filtered list", environmentResource.Name)); filteredList.Add(environmentResource); } else if (Regex.IsMatch(environmentResource.Name, filter)) { Console.WriteLine(string.Format("The filter {0} matches {1}, adding {1} to filtered list", filter, environmentResource.Name)); filteredList.Add(environmentResource); } else { Console.WriteLine(string.Format("Environment {0} does not match filter {1}", environmentResource.Name, filter)); } } } return filteredList; } static System.Collections.Generic.List<string> GetEnvironmentsScopedToProject (ProjectResource Project, SpaceResource Space,
IOctopusSpaceRepository RepositoryForSpace) { System.Collections.Generic.List<string> scopedEnvironments = new System.Collections.Generic.List<string>(); var projectChannels = RepositoryForSpace.Projects.GetAllChannels(Project); foreach (var channel in projectChannels) { string lifeCycleId = channel.LifecycleId; if (string.IsNullOrEmpty(lifeCycleId)) { // Channel inherits lifecycle from project lifeCycleId = Project.LifecycleId; } var lifecyclePreview = RepositoryForSpace.Lifecycles.Get(lifeCycleId); if ((lifecyclePreview.Phases == null) || (lifecyclePreview.Phases.Count == 0)) { foreach (var environment in RepositoryForSpace.Environments.GetAll()) { PhaseResource phase = new PhaseResource(); phase.Name = environment.Name; phase.OptionalDeploymentTargets.Add(environment.Id); lifecyclePreview.Phases.Add(phase); } } foreach (var phase in lifecyclePreview.Phases) { foreach (var environmentId in phase.AutomaticDeploymentTargets) { if (!scopedEnvironments.Contains(environmentId)) { Console.WriteLine(string.Format("Adding {0} to {1} environment list", environmentId, Project.Name)); scopedEnvironments.Add(environmentId); } } foreach (var environmentId in phase.OptionalDeploymentTargets) { if (!scopedEnvironments.Contains(environmentId)) { Console.WriteLine(string.Format("Adding {0} to {1} environment list", environmentId, Project.Name)); scopedEnvironments.Add(environmentId); } } } } return scopedEnvironments; } static System.Collections.Generic.List<Permission> GetUserPermission (SpaceResource Space, ProjectResource Project, UserRoleResource UserRole, System.Collections.Generic.List<Permission> ProjectPermissionList, string PermissionToCheck, System.Collections.Generic.List<EnvironmentResource> EnvironmentList, System.Collections.Generic.List<TenantResource> TenantList, UserResource User, ScopedUserRoleResource ScopedRole, bool IncludeScope, System.Collections.Generic.List<string> ProjectEnvironmentList) { Octopus.Client.Model.Permission octopusPermission = (Octopus.Client.Model.Permission)Enum.Parse(typeof(Octopus.Client.Model.Permission), PermissionToCheck); if
(!UserRole.GrantedSpacePermissions.Contains(octopusPermission)) { return ProjectPermissionList; } var newPermission = new Permission(); newPermission.DisplayName = User.DisplayName; newPermission.UserId = User.Id; newPermission.IncludeScope = IncludeScope; if (IncludeScope) { foreach (var environmentId in ScopedRole.EnvironmentIds) { if(!ProjectEnvironmentList.Contains(environmentId)) { Console.WriteLine(string.Format("The role is scoped to environment {0}, but the environment is not assigned to project {1}, excluding from this project report", environmentId, Project.Name)); continue; } var environment = EnvironmentList.FirstOrDefault(e => e.Id == environmentId); if (environment != null) { newPermission.Environments.Add(environment); } } if (ScopedRole.EnvironmentIds.Count > 0 && newPermission.Environments.Count <= 0) { Console.WriteLine(string.Format("The role is scoped to environments, but none of the environments are assigned to {0}. This user role does not apply to this project.", Project.Name)); return ProjectPermissionList; } foreach (var tenantId in ScopedRole.TenantIds) { var tenant = TenantList.FirstOrDefault(t => t.Id == tenantId); if (tenant != null) { if (!tenant.ProjectEnvironments.ContainsKey(Project.Id)) { Console.WriteLine(string.Format("The role is scoped to tenant {0}, but the tenant is not assigned to {1}, excluding the tenant from this report", tenant.Name, Project.Name)); continue; } newPermission.Tenants.Add(tenant); } } if (ScopedRole.TenantIds.Count > 0 && newPermission.Tenants.Count <= 0) { Console.WriteLine(string.Format("The role is scoped to tenants, but none of the tenants are assigned to {0}. This user role does not apply to this project.", Project.Name)); return ProjectPermissionList; } } var existingPermission = ProjectPermissionList.FirstOrDefault(p => p.UserId == newPermission.UserId); if (existingPermission == null) { Console.WriteLine(string.Format("This is the first time we've seen {0} for this permission.
Adding the permission to the list.", User.DisplayName)); ProjectPermissionList.Add(newPermission); return ProjectPermissionList; } if (existingPermission.Environments.Count == 0 && existingPermission.Tenants.Count == 0) { Console.WriteLine(string.Format("{0} has no scoping for environments or tenants for this project, they have the highest permission, no need to improve it", User.DisplayName)); return ProjectPermissionList; } if (existingPermission.Environments.Count > 0 && newPermission.Environments.Count == 0) { Console.WriteLine(string.Format("{0} has scoping to environments, but the new permission doesn't have any environment scoping, removing the scoping", User.DisplayName)); existingPermission.Environments = new System.Collections.Generic.List<EnvironmentResource>(); } else if (existingPermission.Environments.Count > 0 && newPermission.Environments.Count > 0) { foreach (var item in newPermission.Environments) { var existingItem = existingPermission.Environments.FirstOrDefault(e => e.Id == item.Id); if (existingItem == null) { Console.WriteLine(string.Format("{0} is not yet scoped to the environment {1}, adding it", User.DisplayName, item.Name)); existingPermission.Environments.Add(item); } } } if (existingPermission.Tenants.Count > 0 && newPermission.Tenants.Count == 0) { Console.WriteLine(string.Format("{0} has scoping to tenants, but the new permission doesn't have any tenant scoping, removing the scoping", User.DisplayName)); existingPermission.Tenants = new System.Collections.Generic.List<TenantResource>(); } else if (existingPermission.Tenants.Count > 0 && newPermission.Tenants.Count > 0) { foreach (var item in newPermission.Tenants) { var existingItem = existingPermission.Tenants.FirstOrDefault(t => t.Id == item.Id); if (existingItem == null) { Console.WriteLine(string.Format("{0} is not yet scoped to the tenant {1}, adding it", User.DisplayName, item.Name)); existingPermission.Tenants.Add(item); } } } return ProjectPermissionList; } static void WritePermissionList (string PermissionName, System.Collections.Generic.List<Permission> ProjectPermissionList, ProjectPermission Permission, string ReportPath) { using
(System.IO.StreamWriter csvFile = new System.IO.StreamWriter(ReportPath, true)) { //csvFile.WriteLine("Space Name,Project Name,Permission Name,Display Name,Environment Scoping,Tenant Scoping"); // 0 1 2 3 4 5 foreach (var permissionScope in ProjectPermissionList) { System.Collections.Generic.List<string> row = new System.Collections.Generic.List<string>(); row.Add(Permission.SpaceName); row.Add(Permission.Name); row.Add(PermissionName); row.Add(permissionScope.DisplayName); if (permissionScope.IncludeScope == false) { row.Add("N/A"); row.Add("N/A"); } else { if (permissionScope.Environments.Count == 0) { row.Add("All"); } else { row.Add(string.Join(";", permissionScope.Environments.Select(e => e.Name))); } if (permissionScope.Tenants.Count == 0) { row.Add("All"); } else { row.Add(string.Join(";", permissionScope.Tenants.Select(t => t.Name))); } } csvFile.WriteLine(string.Format("{0},{1},{2},{3},{4},{5}", row.ToArray())); } } } var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; string spaceFilter = "all"; string environmentFilter = "Development"; string permissionToCheck = "DeploymentCreate"; string reportPath = "path:\\to\\csv.file"; string userFilter = "all"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); var spaceList = repository.Spaces.FindAll(); spaceList = FilterSpaceList(spaceList, spaceFilter); var userRoleList = repository.UserRoles.FindAll(); var userList = repository.Users.FindAll(); userList = FilterUserList(userList, userFilter); var permissionsReport = new System.Collections.Generic.List<ProjectPermission>(); foreach (var spaceName in spaceList) { var space = repository.Spaces.FindByName(spaceName.Name); var repositoryForSpace = client.ForSpace(space); var projectList = repositoryForSpace.Projects.GetAll(); var environmentList = repositoryForSpace.Environments.GetAll(); environmentList =
FilterEnvironmentList(environmentList, environmentFilter); var tenantList = repositoryForSpace.Tenants.GetAll(); foreach (var project in projectList) { var projectPermission = new ProjectPermission(); projectPermission.Name = project.Name; projectPermission.SpaceName = space.Name; var projectEnvironmentList = GetEnvironmentsScopedToProject(project, space, repositoryForSpace); foreach (var user in userList) { var allTeamList = repository.UserTeams.Get(user); // Keep only teams that are system-wide or belong to the current space var userTeamList = allTeamList.Where(teamItem => { var team = repository.Teams.Get(teamItem.Id); return string.IsNullOrEmpty(team.SpaceId) || team.SpaceId == space.Id; }).ToList(); foreach (var userTeam in userTeamList) { var team = repository.Teams.Get(userTeam.Id); var scopedUserRolesList = repository.Teams.GetScopedUserRoles(team); foreach (var scopedRole in scopedUserRolesList) { if (scopedRole.SpaceId != space.Id) { Console.WriteLine("The scoped role is not for the current space, moving on to next role"); continue; } if ((scopedRole.ProjectIds.Count > 0) && (!scopedRole.ProjectIds.Contains(project.Id) && scopedRole.ProjectGroupIds.Count == 0)) { Console.WriteLine(string.Format("The scoped role is associated with projects, but not {0}, moving on to next role", project.Name)); continue; } if ((scopedRole.ProjectGroupIds.Count > 0) && (!scopedRole.ProjectGroupIds.Contains(project.ProjectGroupId) && scopedRole.ProjectIds.Count == 0)) { Console.WriteLine(string.Format("The scoped role is associated with project groups, but not the one for {0}, moving on to next role", project.Name)); continue; } var userRole = userRoleList.FirstOrDefault(r => r.Id == scopedRole.UserRoleId); projectPermission.Permissions = GetUserPermission(space, project, userRole, projectPermission.Permissions, permissionToCheck, environmentList, tenantList, user, scopedRole, true, projectEnvironmentList); } } } permissionsReport.Add(projectPermission); } } if
(!System.IO.Directory.Exists(System.IO.Path.GetDirectoryName(reportPath))) { string directoryPath = System.IO.Path.GetDirectoryName(reportPath); System.IO.Directory.CreateDirectory(directoryPath); } if (System.IO.File.Exists(reportPath)) { System.IO.File.Delete(reportPath); } using (System.IO.StreamWriter csvFile = new System.IO.StreamWriter(reportPath)) { csvFile.WriteLine("Space Name,Project Name,Permission Name,Display Name,Environment Scoping,Tenant Scoping"); } foreach (var permission in permissionsReport) { WritePermissionList(permissionToCheck, permission.Permissions, permission, reportPath); } ```
Python3 ```python import json import requests from requests.api import get, head import re import os def get_octopus_resource(uri, headers, skip_count = 0): items = [] skip_querystring = "" if '?' in uri: skip_querystring = '&skip=' else: skip_querystring = '?skip=' response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers) response.raise_for_status() # Get results of API call results = json.loads(response.content.decode('utf-8')) # Store results if hasattr(results, 'keys') and 'Items' in results.keys(): items += results['Items'] # Check to see if there are more results if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']): skip_count += results['ItemsPerPage'] items += get_octopus_resource(uri, headers, skip_count) else: return results # return results return items def new_octopus_filtered_list(item_list, filters): filtered_list = [] # Split string filter_list = filters.split(',') for item in item_list: for filter in filter_list: print ('Checking to see if {0} matches {1}'.format(filter, item['Name'])) if filter == 'all': print ('The filter is all -> adding {0} to filtered list'.format(item['Name'])) filtered_list.append(item) elif re.match(filter, item['Name'], re.IGNORECASE) != None: print ('The filter {0} matches {1}, adding {1} to filtered list'.format(filter, item['Name'])) filtered_list.append(item) else: print ('The item {0} does not match filter {1}'.format(item['Name'], filter)) return filtered_list def get_environments_scoped_to_project (octopus_server_uri, headers, project, space): scoped_environment_list = [] # Get project channels uri = '{0}/api/{1}/projects/{2}/channels'.format(octopus_server_uri, space['Id'], project['Id']) channels = get_octopus_resource(uri, headers) # Loop through channels for channel in channels: lifecycleId = channel['LifecycleId'] if None == lifecycleId: # Channel inherits lifecycle from project lifecycleId = project['LifecycleId'] # Get lifecycle preview - using the 
preview returns implied phases if the lifecycle doesn't have any phases defined uri = '{0}/api/{1}/lifecycles/{2}/preview'.format(octopus_server_uri, space['Id'], lifecycleId) lifecycle_preview = get_octopus_resource(uri, headers) # Loop through phases for phase in lifecycle_preview['Phases']: for environmentId in phase['AutomaticDeploymentTargets']: if environmentId not in scoped_environment_list: print ('Adding {0} to {1} environment list'.format(environmentId, project['Name'])) scoped_environment_list.append(environmentId) for environmentId in phase['OptionalDeploymentTargets']: if environmentId not in scoped_environment_list: print ('Adding {0} to {1} environment list'.format(environmentId, project['Name'])) scoped_environment_list.append(environmentId) return scoped_environment_list def get_user_permission (space, project, user_role, project_permission_list, permission_to_check, environment_list, tenant_list, user, scoped_role, include_scope, project_environment_list): if permission_to_check not in user_role['GrantedSpacePermissions']: return project_permission_list new_permission = { 'DisplayName': user['DisplayName'], 'UserId': user['Id'], 'Environments': [], 'Tenants': [], 'IncludeScope': include_scope } if include_scope == True: for environmentId in scoped_role['EnvironmentIds']: if environmentId not in project_environment_list: print ('The role is scoped to environment {0}, but the environment is not assigned to {1}, excluding from report'.format(environmentId, project['Name'])) continue environment = next((x for x in environment_list if x['Id'] == environmentId), None) if environment != None: new_permission['Environments'] += [{ 'Id': environment['Id'], 'Name': environment['Name'] }] if len(scoped_role['EnvironmentIds']) > 0 and len(new_permission['Environments']) <= 0: print ('The role is scoped to environments, but none of the environments are assigned to {0}.
This user role does not apply to this project.'.format(project['Name'])) return project_permission_list for tenantId in scoped_role['TenantIds']: tenant = next((x for x in tenant_list if x['Id'] == tenantId), None) if tenant != None: new_permission['Tenants'] += [{ 'Id': tenant['Id'], 'Name': tenant['Name'] }] if len(scoped_role['TenantIds']) > 0 and len(new_permission['Tenants']) <= 0: print('The role is scoped to tenants, but none of the tenants are assigned to {0}. This user role does not apply to this project.'.format(project['Name'])) return project_permission_list existing_permission = next((x for x in project_permission_list if x['UserId'] == new_permission['UserId']), None) if existing_permission == None: print ('This is the first time we\'ve seen {0} for this permission. Adding the permission to the list'.format(user['DisplayName'])) project_permission_list.append(new_permission) return project_permission_list if len(existing_permission['Environments']) == 0 and len(existing_permission['Tenants']) == 0: print ('{0} has no scoping for environments or tenants for this project, they have the highest level, no need to improve it.'.format(user['DisplayName'])) return project_permission_list if len(existing_permission['Environments']) > 0 and len(new_permission['Environments']) == 0: print('{0} has scoping to environments, but the new permission does not have any environment scoping, removing the scoping'.format(user['DisplayName'])) existing_permission['Environments'] = [] elif len(existing_permission['Environments']) > 0 and len(new_permission['Environments']) > 0: for item in new_permission['Environments']: existing_item = next((x for x in existing_permission['Environments'] if x['Id'] == item['Id']), None) if existing_item == None: print ('{0} is not yet scoped to the environment {1}, adding it'.format(user['DisplayName'], item['Name'])) existing_permission['Environments'].append(item) if len(existing_permission['Tenants']) > 0 and len(new_permission['Tenants']) == 0: print ('{0} has
scoping to tenants, but the new permission does not have any tenant scoping, removing the scoping'.format(user['DisplayName'])) existing_permission['Tenants'] = [] elif len(existing_permission['Tenants']) > 0 and len(new_permission['Tenants']) > 0: for item in new_permission['Tenants']: existing_item = next((x for x in existing_permission['Tenants'] if x['Id'] == item['Id']), None) if existing_item == None: print ('{0} is not yet scoped to the tenant {1}, adding it.'.format(user['DisplayName'], item['Name'])) existing_permission['Tenants'].append(item) return project_permission_list def write_permission_list (permission_name, permission_list, permission, report_path): for permission_scope in permission_list: row = { 'Space': permission['SpaceName'], 'Project': permission['Name'], 'PermissionName': permission_name, 'User': permission_scope['DisplayName'], 'EnvironmentScope': '', 'TenantScope':'' } if permission_scope['IncludeScope'] == False: row['EnvironmentScope'] = "N/A" row['TenantScope'] = "N/A" else: if len(permission_scope['Environments']) == 0: row['EnvironmentScope'] = "All" else: scopedList = "" for scopedEnvironment in permission_scope['Environments']: scopedList += scopedEnvironment['Name'] + ';' row['EnvironmentScope'] = scopedList if len(permission_scope['Tenants']) == 0: row['TenantScope'] = "All" else: scopedList = "" for scopedTenant in permission_scope['Tenants']: scopedList += scopedTenant['Name'] + ';' row['TenantScope'] = scopedList report = open(report_path, 'a') report.write('\n'.join(["{0},{1},{2},{3},{4},{5}".format(row['Space'], row['Project'], row['PermissionName'], row['User'], row['EnvironmentScope'], row['TenantScope'])]) + '\n') report.close() octopus_server_uri = 'https://your-octopus-url' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} space_filter = "all" environment_filter = "Development" permission_to_check = "DeploymentCreate" report_path = "path:\\to\\report.file" user_filter = "all" # Supports "all" for everything, wild cards
"hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly # Get spaces uri = '{0}/api/spaces'.format(octopus_server_uri) spaces = get_octopus_resource(uri, headers) spaces = new_octopus_filtered_list(spaces, space_filter) # Get user role list uri = '{0}/api/userroles'.format(octopus_server_uri) user_roles_list = get_octopus_resource(uri, headers) # Get user list uri = '{0}/api/users'.format(octopus_server_uri) user_list = get_octopus_resource(uri, headers) user_list = new_octopus_filtered_list(user_list, user_filter) permissions_report = [] for space in spaces: # Get project list uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id']) projects = get_octopus_resource(uri, headers) # Get environments uri = '{0}/api/{1}/environments'.format(octopus_server_uri, space['Id']) environments = get_octopus_resource(uri, headers) environments = new_octopus_filtered_list(environments, environment_filter) # Get tenants uri = '{0}/api/{1}/tenants'.format(octopus_server_uri, space['Id']) tenants = get_octopus_resource(uri, headers) # Loop through projects for project in projects: # Create permission hash table project_permission = { 'Name': project['Name'], 'SpaceName': space['Name'], 'Permissions': [] } # Get environments scoped to project project_environment_list = get_environments_scoped_to_project(octopus_server_uri, headers, project, space) # Loop through users for user in user_list: # Get user team list uri = '{0}/api/users/{1}/teams?spaces={2}&includeSystem=True'.format(octopus_server_uri, user['Id'], space['Id']) user_team_list = get_octopus_resource(uri, headers) for user_team in user_team_list: # Get the scoped roles uri = '{0}/api/teams/{1}/scopeduserroles'.format(octopus_server_uri, user_team['Id']) scoped_roles_list = get_octopus_resource(uri, headers) for scoped_role in scoped_roles_list: if scoped_role['SpaceId'] != 
space['Id']: print ('The scoped role is not for the current space, moving on to next role') continue if len(scoped_role['ProjectIds']) > 0 and project['Id'] not in scoped_role['ProjectIds'] and len(scoped_role['ProjectGroupIds']) == 0: print ('The scoped role is associated with projects, but not {0}, moving on to next role'.format(project['Name'])) continue if len(scoped_role['ProjectGroupIds']) > 0 and project['ProjectGroupId'] not in scoped_role['ProjectGroupIds'] and len(scoped_role['ProjectIds']) == 0: print ('The scoped role is associated with project groups, but not one for {0}, moving on to next role'.format(project['Name'])) continue user_role = next((x for x in user_roles_list if x['Id'] == scoped_role['UserRoleId']), None) project_permission['Permissions'] = get_user_permission(space, project, user_role, project_permission['Permissions'], permission_to_check, environments, tenants, user, scoped_role, True, project_environment_list) permissions_report.append(project_permission) if os.path.exists(report_path): os.remove(report_path) # Write header report = open(report_path, 'w') report.write('\n'.join(["Space Name,Project Name,Permission Name,Display Name,Environment Scoping,Tenant Scoping"]) + '\n') report.close() # Loop through the report for permission in permissions_report: write_permission_list(permission_to_check, permission['Permissions'], permission, report_path) ```
Go ```go package main import ( "bufio" "fmt" "log" "net/url" "os" "regexp" "strings" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) type ProjectPermission struct { Name string SpaceName string Permissions []Permission } type Permission struct { DisplayName string UserId string Environments []PermissionEnvironment Tenants []PermissionTenant IncludeScope bool } type PermissionEnvironment struct { Id string Name string } type PermissionTenant struct { Id string Name string } func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceFilter := "all" environmentFilter := "Development" permissionToCheck := "DeploymentCreate" reportPath := "path:\\to\\Report.csv" userFilter := "all" // Create client object client := octopusAuth(apiURL, APIKey, "") // Get spaces spaces, err := client.Spaces.GetAll() if err != nil { log.Println(err) } // Filter spaces spaces = FilterSpaces(spaces, spaceFilter) // Get all user roles userRoles, err := client.UserRoles.GetAll() if err != nil { log.Println(err) } // Get all users users, err := client.Users.GetAll() if err != nil { log.Println(err) } // Filter users users = FilterUsers(users, userFilter) permissionsReport := []ProjectPermission{} // Loop through spaces for s := 0; s < len(spaces); s++ { spaceClient := octopusAuth(apiURL, APIKey, spaces[s].ID) // Get projects for space projects, err := spaceClient.Projects.GetAll() if err != nil { log.Println(err) } // Get environments for space environments, err := spaceClient.Environments.GetAll() if err != nil { log.Println(err) } environments = FilterEnvironments(environments, environmentFilter) // Get tenants for space tenants, err := spaceClient.Tenants.GetAll() if err != nil { log.Println(err) } // Loop through projects for p := 0; p < len(projects); p++ { // Create new permission object projectPermission := ProjectPermission{ Name: projects[p].Name, SpaceName: spaces[s].Name, } // Get environment scoped 
to project projectEnvironmentList := GetEnvironmentsScopedToProject(spaceClient, projects[p], spaces[s]) // Loop through users for u := 0; u < len(users); u++ { // Get user team list userTeams, err := spaceClient.Users.GetTeams(users[u]) if err != nil { log.Println(err) } for t := 0; t < len(*userTeams); t++ { userTeam := *userTeams scopedRolesList, err := client.Teams.GetScopedUserRolesByID(userTeam[t].ID) if err != nil { log.Println(err) } // Loop through scoped roles for r := 0; r < len(scopedRolesList.Items); r++ { if scopedRolesList.Items[r].SpaceID != spaces[s].ID { fmt.Println("The scoped role is not for the current space, moving on to next role.") continue } if len(scopedRolesList.Items[r].ProjectIDs) > 0 && !contains(scopedRolesList.Items[r].ProjectIDs, projects[p].ID) && len(scopedRolesList.Items[r].ProjectGroupIDs) == 0 { fmt.Println("The scoped role is associated with projects, but not " + projects[p].Name) continue } if len(scopedRolesList.Items[r].ProjectGroupIDs) > 0 && !contains(scopedRolesList.Items[r].ProjectGroupIDs, projects[p].ProjectGroupID) && len(scopedRolesList.Items[r].ProjectIDs) == 0 { fmt.Println("The scoped role is associated with project groups, but not " + projects[p].Name) continue } userRole := GetUserRole(userRoles, scopedRolesList.Items[r].UserRoleID) projectPermission.Permissions = GetUserPermission(spaces[s], projects[p], userRole, projectPermission.Permissions, permissionToCheck, environments, tenants, users[u], scopedRolesList.Items[r], true, projectEnvironmentList) } } } // Add to report permissionsReport = append(permissionsReport, projectPermission) } } if FileExists(reportPath) { os.Remove(reportPath) } // Write report header file, err := os.OpenFile(reportPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600) if err != nil { log.Println(err) } dataWriter := bufio.NewWriter(file) dataWriter.WriteString("Space Name,Project Name,Permission Name,Display Name,Environment Scoping,Tenant Scoping" + "\n") dataWriter.Flush() file.Close() for i
:= 0; i < len(permissionsReport); i++ { WritePermissionList(permissionToCheck, permissionsReport[i].Permissions, permissionsReport[i], reportPath) } } func GetEnvironmentsScopedToProject(client *octopusdeploy.Client, project *octopusdeploy.Project, space *octopusdeploy.Space) []string { scopedEnvironmentList := []string{} // Get channels for project channels := GetChannels(client, project) // Loop through channels for i := 0; i < len(channels); i++ { lifecycleId := channels[i].LifecycleID // Check for nil if lifecycleId == "" { // Channel inherits lifecycle from project lifecycleId = project.LifecycleID } // Get the lifecycle lifecycle, err := client.Lifecycles.GetByID(lifecycleId) if err != nil { log.Println(err) } // Check phases if (lifecycle.Phases == nil) || (len(lifecycle.Phases) == 0) { // There are no defined phases, create them manually from the environment list environments, err := client.Environments.GetAll() if err != nil { log.Println(err) } // Loop through environments and add phases for e := 0; e < len(environments); e++ { phase := octopusdeploy.Phase{} phase.OptionalDeploymentTargets = append(phase.OptionalDeploymentTargets, environments[e].ID) lifecycle.Phases = append(lifecycle.Phases, phase) } } // Loop through phases for p := 0; p < len(lifecycle.Phases); p++ { for e := 0; e < len(lifecycle.Phases[p].AutomaticDeploymentTargets); e++ { if !contains(scopedEnvironmentList, lifecycle.Phases[p].AutomaticDeploymentTargets[e]) { fmt.Println("Adding " + lifecycle.Phases[p].AutomaticDeploymentTargets[e] + " to " + project.Name + " environment list") scopedEnvironmentList = append(scopedEnvironmentList, lifecycle.Phases[p].AutomaticDeploymentTargets[e]) } } for e := 0; e < len(lifecycle.Phases[p].OptionalDeploymentTargets); e++ { if !contains(scopedEnvironmentList, lifecycle.Phases[p].OptionalDeploymentTargets[e]) { fmt.Println("Adding " + lifecycle.Phases[p].OptionalDeploymentTargets[e] + " to " + project.Name + " environment list") scopedEnvironmentList 
= append(scopedEnvironmentList, lifecycle.Phases[p].OptionalDeploymentTargets[e]) } } } } return scopedEnvironmentList } func GetUserRole(userRoles []*octopusdeploy.UserRole, userRoleId string) *octopusdeploy.UserRole { for i := 0; i < len(userRoles); i++ { if userRoles[i].ID == userRoleId { return userRoles[i] } } return nil } func GetChannels(client *octopusdeploy.Client, project *octopusdeploy.Project) []*octopusdeploy.Channel { channelQuery := octopusdeploy.ChannelsQuery{ Skip: 0, } results := []*octopusdeploy.Channel{} for { // Call for results channels, err := client.Channels.Get(channelQuery) if err != nil { log.Println(err) } // Check returned number of items if len(channels.Items) == 0 { break } // append items to results results = append(results, channels.Items...) // Update query channelQuery.Skip += len(channels.Items) } return results } func FilterSpaces(spaces []*octopusdeploy.Space, filter string) []*octopusdeploy.Space { filteredList := []*octopusdeploy.Space{} // Split filter filters := strings.Split(filter, ",") for i := 0; i < len(spaces); i++ { for j := 0; j < len(filters); j++ { fmt.Println("Checking to see if " + filters[j] + " matches " + spaces[i].Name) match, err := regexp.MatchString(filters[j], spaces[i].Name) if err != nil { log.Println(err) } if filters[j] == "all" { fmt.Println("The filter is all -> adding " + spaces[i].Name + " to filtered list") filteredList = append(filteredList, spaces[i]) } else if match { fmt.Println("The filter " + filters[j] + " matches " + spaces[i].Name + " adding " + spaces[i].Name + " to filtered list") filteredList = append(filteredList, spaces[i]) } else { fmt.Println("The item " + spaces[i].Name + " does not match filter " + filters[j]) } } } return filteredList } func FilterUsers(users []*octopusdeploy.User, filter string) []*octopusdeploy.User { filteredList := []*octopusdeploy.User{} // Split filter filters := strings.Split(filter, ",") for i := 0; i < len(users); i++ { for j := 0; j < len(filters);
j++ { fmt.Println("Checking to see if " + filters[j] + " matches " + users[i].Name) match, err := regexp.MatchString(filters[j], users[i].Name) if err != nil { log.Println(err) } if filters[j] == "all" { fmt.Println("The filter is all -> adding " + users[i].Name + " to filtered list") filteredList = append(filteredList, users[i]) } else if match { fmt.Println("The filter " + filters[j] + " matches " + users[i].Name + " adding " + users[i].Name + " to filtered list") filteredList = append(filteredList, users[i]) } else { fmt.Println("The item " + users[i].Name + " does not match filter " + filters[j]) } } } return filteredList } func FilterEnvironments(environments []*octopusdeploy.Environment, filter string) []*octopusdeploy.Environment { filteredList := []*octopusdeploy.Environment{} // Split filter filters := strings.Split(filter, ",") for i := 0; i < len(environments); i++ { for j := 0; j < len(filters); j++ { fmt.Println("Checking to see if " + filters[j] + " matches " + environments[i].Name) match, err := regexp.MatchString(filters[j], environments[i].Name) if err != nil { log.Println(err) } if filters[j] == "all" { fmt.Println("The filter is all -> adding " + environments[i].Name + " to filtered list") filteredList = append(filteredList, environments[i]) } else if match { fmt.Println("The filter " + filters[j] + " matches " + environments[i].Name + " adding " + environments[i].Name + " to filtered list") filteredList = append(filteredList, environments[i]) } else { fmt.Println("The item " + environments[i].Name + " does not match filter " + filters[j]) } } } return filteredList } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name:
spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } func contains(s []string, str string) bool { for _, v := range s { if v == str { return true } } return false } func GetUserPermission(space *octopusdeploy.Space, project *octopusdeploy.Project, userRole *octopusdeploy.UserRole, permissions []Permission, permissionToCheck string, environmentList []*octopusdeploy.Environment, tenantList []*octopusdeploy.Tenant, user *octopusdeploy.User, scopedRole *octopusdeploy.ScopedUserRole, includeScope bool, projectEnvironmentList []string) []Permission { if !contains(userRole.GrantedSpacePermissions, permissionToCheck) { return permissions } newPermission := Permission{ DisplayName: user.DisplayName, UserId: user.ID, IncludeScope: includeScope, } if includeScope { for i := 0; i < len(scopedRole.EnvironmentIDs); i++ { if !contains(projectEnvironmentList, scopedRole.EnvironmentIDs[i]) { fmt.Println("The role is scoped to environment " + scopedRole.EnvironmentIDs[i] + ", but the environment is not assigned to " + project.Name) continue } for j := 0; j < len(environmentList); j++ { if environmentList[j].ID == scopedRole.EnvironmentIDs[i] { newPermissionEnvironment := PermissionEnvironment{} newPermissionEnvironment.Id = environmentList[j].ID newPermissionEnvironment.Name = environmentList[j].Name newPermission.Environments = 
append(newPermission.Environments, newPermissionEnvironment) break } } } if len(scopedRole.EnvironmentIDs) > 0 && len(newPermission.Environments) == 0 { fmt.Println("The role is scoped to environments but the project is not assigned to them. This user role does not apply.") return permissions } for i := 0; i < len(scopedRole.TenantIDs); i++ { for j := 0; j < len(tenantList); j++ { if tenantList[j].ID == scopedRole.TenantIDs[i] { newPermissionTenant := PermissionTenant{} newPermissionTenant.Id = tenantList[j].ID newPermissionTenant.Name = tenantList[j].Name newPermission.Tenants = append(newPermission.Tenants, newPermissionTenant) break } } } if len(scopedRole.TenantIDs) > 0 && len(newPermission.Tenants) == 0 { fmt.Println("The role is scoped to tenants but the project is not assigned to them. This user role does not apply.") return permissions } } // Take a pointer to the matching entry so scope merges below persist in the slice var existingPermission *Permission permissionFound := false for i := 0; i < len(permissions); i++ { if permissions[i].UserId == newPermission.UserId { existingPermission = &permissions[i] permissionFound = true break } } if !permissionFound { fmt.Println("This is the first time we've seen " + user.DisplayName + " for this permission.
Adding the permission to the list.") permissions = append(permissions, newPermission) return permissions } if len(existingPermission.Environments) == 0 && len(existingPermission.Tenants) == 0 { fmt.Println(user.DisplayName + "has no scoping for environments or tenants for this project, they have the highest level, no need to improve it") return permissions } if len(existingPermission.Environments) > 0 && len(newPermission.Environments) == 0 { fmt.Println(user.DisplayName + " has scoping to environments, but the new permission does not have any environment scoping, removing the scoping") existingPermission.Environments = nil } else if len(existingPermission.Environments) > 0 && len(newPermission.Environments) > 0 { for i := 0; i < len(newPermission.Environments); i++ { itemFound := false for j := 0; j < len(existingPermission.Environments); j++ { if existingPermission.Environments[j].Id == newPermission.Environments[i].Id { itemFound = true break } } if !itemFound { fmt.Println(user.DisplayName + " is not yet scoped to the environment " + newPermission.Environments[i].Name + " adding it") existingPermission.Environments = append(existingPermission.Environments, newPermission.Environments[i]) } } } if len(existingPermission.Tenants) > 0 && len(newPermission.Tenants) == 0 { fmt.Println(user.DisplayName + " has scoping to tenants, but the new permission does not have any tenant scoping, removing the scoping") existingPermission.Tenants = nil } else if len(existingPermission.Tenants) > 0 && len(newPermission.Tenants) > 0 { for i := 0; i < len(newPermission.Tenants); i++ { itemFound := false for j := 0; j < len(existingPermission.Tenants); j++ { if existingPermission.Tenants[j].Id == newPermission.Tenants[i].Id { itemFound = true break } if !itemFound { fmt.Println(user.DisplayName + " is not yet scoped to the tenant " + newPermission.Tenants[i].Name + " adding it") existingPermission.Tenants = append(existingPermission.Tenants, newPermission.Tenants[i]) } } } } return 
permissions } func WritePermissionList(permissionName string, permissions []Permission, projectPermission ProjectPermission, reportPath string) { for i := 0; i < len(permissions); i++ { row := make(map[string]string) row["Space"] = projectPermission.SpaceName row["Project"] = projectPermission.Name row["PermissionName"] = permissionName row["User"] = permissions[i].DisplayName if !permissions[i].IncludeScope { row["EnvironmentScope"] = "N/A" row["TenantScope"] = "N/A" } else { if len(permissions[i].Environments) == 0 { row["EnvironmentScope"] = "All" } else { scopedList := "" for e := 0; e < len(permissions[i].Environments); e++ { scopedList += permissions[i].Environments[e].Name + ";" } row["EnvironmentScope"] = scopedList } if len(permissions[i].Tenants) == 0 { row["TenantScope"] = "All" } else { scopedList := "" for t := 0; t < len(permissions[i].Tenants); t++ { scopedList += permissions[i].Tenants[t].Name + ";" } row["TenantScope"] = scopedList } // Write report row file, err := os.OpenFile(reportPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600) if err != nil { log.Println(err) } dataWriter := bufio.NewWriter(file) //dataWriter.WriteString("Space Name,Project Name,Permission Name,Display Name,Environment Scoping,Tenant Scoping" + "\n") dataWriter.WriteString(row["Space"] + "," + row["Project"] + "," + row["PermissionName"] + "," + row["User"] + "," + row["EnvironmentScope"] + "," + row["TenantScope"] + "\n") dataWriter.Flush() file.Close() } } } func FileExists(filename string) bool { info, err := os.Stat(filename) if os.IsNotExist(err) { return false } return !info.IsDir() } ```
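The `GetUserPermission` function above folds each new role grant into the running permission list with a "most permissive wins" rule: an empty scope list means access to every environment, and two scoped grants are merged by union. A standalone sketch of that rule (the environment IDs are hypothetical, and this helper is not part of the script itself):

```go
package main

import "fmt"

// mergeEnvScopes combines the environment IDs granted by two role scopings.
// An empty slice means "all environments", so if either grant is unscoped
// the merged result is unscoped too; otherwise the union of both is kept.
func mergeEnvScopes(existing, incoming []string) []string {
	if len(existing) == 0 || len(incoming) == 0 {
		return nil // unscoped wins: access to every environment
	}
	merged := append([]string{}, existing...)
	for _, id := range incoming {
		found := false
		for _, e := range merged {
			if e == id {
				found = true
				break
			}
		}
		if !found {
			merged = append(merged, id)
		}
	}
	return merged
}

func main() {
	// One team grants Production, another grants Staging: report both.
	fmt.Println(mergeEnvScopes([]string{"Environments-1"}, []string{"Environments-2"}))
	// The second team is unscoped: the user can deploy anywhere.
	fmt.Println(mergeEnvScopes([]string{"Environments-1"}, nil))
}
```

The same rule is applied independently to tenant scoping in both the Go and PowerShell versions of the script.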
# Project permissions report Source: https://octopus.com/docs/octopus-rest-api/examples/reports/project-permissions-report.md The Octopus Web Portal lets you view permissions from a user's point of view. This script demonstrates how to generate a report of permissions from a project's point of view. :::figure ![Sample project permissions report](/docs/img/octopus-rest-api/examples/reports/images/project-permissions-example.png) ::: **Please note:** The report is generated as a CSV file; formatting was added to the screenshot to make it easier to read. A user can be assigned to multiple teams, each with its own role scoping. This script determines the "most permissive" role scoping and displays that. For example: - User A is assigned to Team B, which has permission to deploy to **Production**. - User A is assigned to Team C, which has permission to deploy to any environment. The report will show the user has permission to deploy to any environment. The report also combines environment and tenant scoping. For example: - User A is assigned to Team B, which has permission to deploy to **Production**. - User A is assigned to Team C, which has permission to deploy to **Staging**. The report will show the user has permission to deploy to **Staging;Production**. Finally, if a user is scoped to an environment or tenant that is _not_ associated with the project, that scoping is excluded from the report. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Report Path - Space Filter - Project Filter - User Filter The filters let you choose which space(s), project(s), and user(s) to include in the report. They all work the same way: - `all` returns all spaces/projects/users. - A wildcard such as `Bob*` returns all spaces/projects/users matching the wildcard search. - A specific name returns only exact matches. The filters support comma-separated entries.
Setting the User Filter to `Test,Bob*` will find all users with the display name of `Test` or that start with `Bob`.
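The filter behavior described above can be sketched as a small matcher. This is an illustration only: the PowerShell script matches with the `-like` operator and the Go script with `regexp`, but for `all`, trailing-`*` wildcards, and exact names the effective behavior is the same.

```go
package main

import (
	"fmt"
	"strings"
)

// matchesFilter reports whether a display name passes a comma-separated
// filter expression: "all" matches everything, a trailing "*" performs a
// prefix match, and any other entry must match the name exactly.
func matchesFilter(name, filter string) bool {
	for _, f := range strings.Split(filter, ",") {
		f = strings.TrimSpace(f)
		switch {
		case strings.EqualFold(f, "all"):
			return true
		case strings.HasSuffix(f, "*"):
			if strings.HasPrefix(name, strings.TrimSuffix(f, "*")) {
				return true
			}
		case f == name:
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(matchesFilter("Bob Walker", "Test,Bob*")) // true: matches the Bob* wildcard
	fmt.Println(matchesFilter("Alice", "Test,Bob*"))      // false: no entry matches
	fmt.Println(matchesFilter("Alice", "all"))            // true: "all" matches everything
}
```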
PowerShell (REST API) ```powershell $octopusUrl = "https://your-octopus-url" $octopusApiKey = "API-YOUR-KEY" # User associated with API key must be system manager or higher to view all users $reportPath = "./Report.csv" $spaceFilter = "All" # Supports "all" for everything, wild cards "hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly $projectFilter = "Hello World" # Supports "all" for everything, wild cards "hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly $userFilter = "bob.walker" # Supports "all" for everything, wild cards "hello*" will pull back everything that starts with hello, or specific names. Comma separated "Hello*,Testing" will pull back everything that starts with Hello and matches Testing exactly $cachedResults = @{} function Write-OctopusVerbose { param($message) Write-Host $message } function Write-OctopusInformation { param($message) Write-Host $message } function Write-OctopusSuccess { param($message) Write-Host $message } function Write-OctopusWarning { param($message) Write-Warning "$message" } function Write-OctopusCritical { param ($message) Write-Error "$message" } function Invoke-OctopusApi { param ( $octopusUrl, $endPoint, $spaceId, $apiKey, $method, $item, $ignoreCache ) $octopusUrlToUse = $OctopusUrl if ($OctopusUrl.EndsWith("/")) { $octopusUrlToUse = $OctopusUrl.Substring(0, $OctopusUrl.Length - 1) } if ([string]::IsNullOrWhiteSpace($SpaceId)) { $url = "$octopusUrlToUse/api/$EndPoint" } else { $url = "$octopusUrlToUse/api/$spaceId/$EndPoint" } try { if ($null -ne $item) { $body = $item | ConvertTo-Json -Depth 10 Write-OctopusVerbose $body Write-OctopusInformation "Invoking $method $url" return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = 
"$ApiKey" } -Body $body -ContentType 'application/json; charset=utf-8' } if (($null -eq $ignoreCache -or $ignoreCache -eq $false) -and $method.ToUpper().Trim() -eq "GET") { Write-OctopusVerbose "Checking to see if $url is already in the cache" if ($cachedResults.ContainsKey($url) -eq $true) { Write-OctopusVerbose "$url is already in the cache, returning the result" return $cachedResults[$url] } } else { Write-OctopusVerbose "Ignoring cache." } Write-OctopusVerbose "No data to post or put, calling bog standard Invoke-RestMethod for $url" $result = Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -ContentType 'application/json; charset=utf-8' if ($cachedResults.ContainsKey($url) -eq $true) { $cachedResults.Remove($url) } Write-OctopusVerbose "Adding $url to the cache" $cachedResults.add($url, $result) return $result } catch { if ($null -ne $_.Exception.Response) { if ($_.Exception.Response.StatusCode -eq 401) { Write-OctopusCritical "Unauthorized error returned from $url, please verify API key and try again" } elseif ($_.Exception.Response.statusCode -eq 403) { Write-OctopusCritical "Forbidden error returned from $url, please verify API key and try again" } else { Write-OctopusVerbose -Message "Error calling $url $($_.Exception.Message) StatusCode: $($_.Exception.Response.StatusCode )" } } else { Write-OctopusVerbose $_.Exception } } Throw "There was an error calling the Octopus API please check the log for more details" } function Get-OctopusItemList { param( $itemType, $endpoint, $spaceId, $octopusUrl, $octopusApiKey ) if ($null -ne $spaceId) { Write-OctopusVerbose "Pulling back all the $itemType in $spaceId" } else { Write-OctopusVerbose "Pulling back all the $itemType for the entire instance" } if ($endPoint -match "\?+") { $endpointWithParams = "$($endPoint)&skip=0&take=10000" } else { $endpointWithParams = "$($endPoint)?skip=0&take=10000" } $itemList = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint $endpointWithParams 
-spaceId $spaceId -apiKey $octopusApiKey -method "GET" if ($itemList -is [array]) { Write-OctopusVerbose "Found $($itemList.Length) $itemType." return $itemList } else { Write-OctopusVerbose "Found $($itemList.Items.Length) $itemType." return $itemList.Items } } function Test-OctopusObjectHasProperty { param( $objectToTest, $propertyName ) $hasProperty = Get-Member -InputObject $objectToTest -Name $propertyName -MemberType Properties if ($hasProperty) { Write-OctopusVerbose "$propertyName property found." return $true } else { Write-OctopusVerbose "$propertyName property missing." return $false } } function Get-UserPermission { param ( $space, $project, $userRole, $projectPermissionList, $permissionToCheck, $environmentList, $tenantList, $user, $scopedRole, $includeScope, $projectEnvironmentList ) if ($userRole.GrantedSpacePermissions -notcontains $permissionToCheck) { return $projectPermissionList } $newPermission = @{ DisplayName = $user.DisplayName UserId = $user.Id Environments = @() Tenants = @() IncludeScope = $includeScope } if ($includeScope -eq $true) { foreach ($environmentId in $scopedRole.EnvironmentIds) { if ($projectEnvironmentList -notcontains $environmentId) { Write-OctopusVerbose "The role is scoped to environment $environmentId, but the environment is not assigned to $($project.Name), excluding from this project's report" continue } $environment = $environmentList | Where-Object { $_.Id -eq $environmentId } $newPermission.Environments += @{ Id = $environment.Id Name = $environment.Name } } foreach ($tenantId in $scopedRole.tenantIds) { $tenant = $tenantList | Where-Object { $_.Id -eq $tenantId } if ((Test-OctopusObjectHasProperty -objectToTest $tenant.ProjectEnvironments -propertyName $project.Id) -eq $false) { Write-OctopusVerbose "The role is scoped to tenant $($tenant.Name), but the tenant is not assigned to $($project.Name), excluding the tenant from this project's report." 
continue } $newPermission.Tenants += @{ Id = $tenant.Id Name = $tenant.Name } } } $existingPermission = $projectPermissionList | Where-Object { $_.UserId -eq $newPermission.UserId } if ($null -eq $existingPermission) { Write-OctopusVerbose "$($user.DisplayName) is not assigned to this project adding this permission" $projectPermissionList += $newPermission return @($projectPermissionList) } if ($existingPermission.Environments.Length -eq 0 -and $existingPermission.Tenants.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has no scoping for environments or tenants for this project, they have the highest level, no need to improve it." return @($projectPermissionList) } if ($existingPermission.Environments.Length -gt 0 -and $newPermission.Environments.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has scoping to environments, but the new permission doesn't have any environment scoping, removing the scoping" $existingPermission.Environments = @() } elseif ($existingPermission.Environments.Length -gt 0 -and $newPermission.Environments.Length -gt 0) { foreach ($item in $newPermission.Environments) { $existingItem = $existingPermission.Environments | Where-Object { $_.Id -eq $item.Id } if ($null -eq $existingItem) { Write-OctopusVerbose "$($user.DisplayName) is not yet scoped to the environment $($item.Name), adding it." 
$existingPermission.Environments += $item } } } if ($existingPermission.Tenants.Length -gt 0 -and $newPermission.Tenants.Length -eq 0) { Write-OctopusVerbose "$($user.DisplayName) has scoping to tenants, but the new permission doesn't have any tenant scoping, removing the scoping" $existingPermission.Tenants = @() } elseif ($existingPermission.Tenants.Length -gt 0 -and $newPermission.Tenants.Length -gt 0) { foreach ($item in $newPermission.Tenants) { $existingItem = $existingPermission.Tenants | Where-Object { $_.Id -eq $item.Id } if ($null -eq $existingItem) { Write-OctopusVerbose "$($user.DisplayName) is not yet scoped to the tenant $($item.Name), adding it." $existingPermission.Tenants += $item } } } return @($projectPermissionList) } function Write-PermissionList { param ( $permissionName, $permissionList, $permission, $reportPath ) foreach ($permissionScope in $permissionList) { $permissionForCSV = @{ Space = $permission.SpaceName Project = $permission.Name PermissionName = $permissionName User = $permissionScope.DisplayName EnvironmentScope = "" TenantScope = "" } if ($permissionScope.IncludeScope -eq $false) { $permissionForCSV.EnvironmentScope = "N/A" $permissionForCSV.TenantScope = "N/A" } else { if ($permissionScope.Environments.Length -eq 0) { $permissionForCSV.EnvironmentScope = "All" } else { $permissionForCSV.EnvironmentScope = $($permissionScope.Environments | Select-Object -ExpandProperty Name) -join ";" } if ($permissionScope.Tenants.Length -eq 0) { $permissionForCSV.TenantScope = "All" } else { $permissionForCSV.TenantScope = $($permissionScope.Tenants | Select-Object -ExpandProperty Name) -join ";" } } $permissionAsString = """$($permissionForCSV.Space)"",""$($permissionForCSV.Project)"",""$($permissionForCSV.PermissionName)"",""$($permissionForCSV.User)"",""$($permissionForCSV.EnvironmentScope)"",""$($permissionForCSV.TenantScope)""" Add-Content -Path $reportPath -Value $permissionAsString } } function Get-EnvironmentsScopedToProject { param
( $project, $octopusApiKey, $octopusUrl, $spaceId ) $scopedEnvironmentList = @() $projectChannels = Get-OctopusItemList -itemType "Channels" -endpoint "projects/$($project.Id)/channels" -spaceId $spaceId -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($channel in $projectChannels) { $lifecycleId = $channel.LifecycleId if ($null -eq $lifecycleId) { $lifecycleId = $project.LifecycleId } $lifecyclePreview = Invoke-OctopusApi -octopusUrl $octopusUrl -apiKey $octopusApiKey -endPoint "lifecycles/$($lifecycleId)/preview" -spaceId $spaceId -method "GET" foreach ($phase in $lifecyclePreview.Phases) { foreach ($environmentId in $phase.AutomaticDeploymentTargets) { if ($scopedEnvironmentList -notcontains $environmentId) { Write-OctopusVerbose "Adding $environmentId to $($project.Name) environment list" $scopedEnvironmentList += $environmentId } } foreach ($environmentId in $phase.OptionalDeploymentTargets) { if ($scopedEnvironmentList -notcontains $environmentId) { Write-OctopusVerbose "Adding $environmentId to $($project.Name) environment list" $scopedEnvironmentList += $environmentId } } } } return $scopedEnvironmentList } function New-OctopusFilteredList { param( $itemList, $itemType, $filters ) $filteredList = @() Write-OctopusSuccess "Creating filter list for $itemType with a filter of $filters" if ([string]::IsNullOrWhiteSpace($filters) -eq $false -and $null -ne $itemList) { $splitFilters = $filters -split "," foreach($item in $itemList) { foreach ($filter in $splitFilters) { Write-OctopusVerbose "Checking to see if $filter matches $($item.Name)" if ([string]::IsNullOrWhiteSpace($filter)) { continue } if (($filter).ToLower() -eq "all") { Write-OctopusVerbose "The filter is 'all' -> adding $($item.Name) to $itemType filtered list" $filteredList += $item } elseif ((Test-OctopusObjectHasProperty -propertyName "Name" -objectToTest $item) -and $item.Name -like $filter) { Write-OctopusVerbose "The filter $filter matches $($item.Name), adding
$($item.Name) to $itemType filtered list" $filteredList += $item } elseif ((Test-OctopusObjectHasProperty -propertyName "DisplayName" -objectToTest $item) -and $item.DisplayName -like $filter) { Write-OctopusVerbose "The filter $filter matches $($item.DisplayName), adding $($item.DisplayName) to $itemType filtered list" $filteredList += $item } else { Write-OctopusVerbose "The item $($item.Name) does not match filter $filter" } } } } else { Write-OctopusWarning "The filter for $itemType was not set." } return $filteredList } $spaceList = Get-OctopusItemList -itemType "Spaces" -endpoint "spaces" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $spaceList = New-OctopusFilteredList -itemType "Spaces" -itemList $spaceList -filters $spaceFilter $userRolesList = Get-OctopusItemList -itemType "User Roles" -endpoint "userroles" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $userList = Get-OctopusItemList -itemType "Users" -endpoint "users" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $userList = New-OctopusFilteredList -itemType "Users" -itemList $userList -filters $userFilter $permissionsReport = @() foreach ($space in $spaceList) { $projectList = Get-OctopusItemList -itemType "Projects" -endpoint "projects" -spaceId $space.Id -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $projectList = New-OctopusFilteredList -itemType "Projects" -itemList $projectList -filters $projectFilter $environmentList = Get-OctopusItemList -itemType "Environments" -endpoint "environments" -spaceId $space.Id -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $tenantList = Get-OctopusItemList -itemType "Tenants" -endpoint "tenants" -spaceId $space.Id -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($project in $projectList) { $projectPermission = @{ Name = $project.Name SpaceName = $space.Name ViewPermissions = @() EditPermissions = @() ViewDeploymentProcessPermissions = @() EditProcessPermissions = @() 
ViewVariablesPermissions = @() EditVariablesPermissions = @() EditVariableUnscopedPermissions = @() ViewVariableUnscopedPermissions = @() CreateReleasePermissions = @() ReleaseViewPermissions = @() DeployReleasePermissions = @() ViewDeploymentPermissions = @() ArtifactViewPermissions = @() RunbookViewPermissions = @() RunbookEditPermissions = @() RunbookRunViewPermissions = @() RunbookRunCreatePermissions = @() ManualInterventionViewPermissions = @() ManualInterventionApprovePermissions = @() TenantViewPermissions = @() TenantEditPermissions = @() TenantDeletePermissions = @() TenantCreatePermissions = @() LibraryVariableSetCreatePermissions = @() LibraryVariableSetViewPermissions = @() LibraryVariableSetDeletePermissions = @() LibraryVariableSetEditPermissions = @() } $projectEnvironmentList = @(Get-EnvironmentsScopedToProject -octopusApiKey $octopusApiKey -octopusUrl $octopusUrl -spaceId $space.Id -project $project) foreach ($user in $userList) { $userTeamList = Get-OctopusItemList -itemType "User $($user.DisplayName) Teams" -endpoint "users/$($user.Id)/teams?spaces=$($space.Id)&includeSystem=True" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($userTeam in $userTeamList) { $scopedRolesList = Get-OctopusItemList -itemType "Team $($userTeam.Name) Scoped Roles" -endpoint "teams/$($userTeam.Id)/scopeduserroles" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($scopedRole in $scopedRolesList) { if ($scopedRole.SpaceId -ne $space.Id) { Write-OctopusVerbose "The scoped role is not for the current space, moving on to next role." continue } if ($scopedRole.ProjectIds.Length -gt 0 -and $scopedRole.ProjectIds -notcontains $project.Id -and $scopedRole.ProjectGroupIds.Length -eq 0) { Write-OctopusVerbose "The scoped role is associated with projects, but not $($project.Name), moving on to next role." 
continue } if ($scopedRole.ProjectGroupIds.Length -gt 0 -and $scopedRole.ProjectGroupIds -notcontains $project.ProjectGroupId -and $scopedRole.ProjectIds.Length -eq 0) { Write-OctopusVerbose "The scoped role is associated with projects groups, but not the one for $($project.Name), moving on to next role." continue } $userRole = $userRolesList | Where-Object {$_.Id -eq $scopedRole.UserRoleId} $projectPermission.ViewPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.ViewPermissions -permissionToCheck "ProjectView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.EditPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.EditPermissions -permissionToCheck "ProjectEdit" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.ViewDeploymentProcessPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.ViewDeploymentProcessPermissions -permissionToCheck "ProcessView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.EditProcessPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.EditProcessPermissions -permissionToCheck "ProcessEdit" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.ViewVariablesPermissions = @(Get-UserPermission -space $space -project 
$project -userRole $userRole -projectPermissionList $projectPermission.ViewVariablesPermissions -permissionToCheck "VariableView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.EditVariablesPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.EditVariablesPermissions -permissionToCheck "VariableEdit" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.EditVariableUnscopedPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.EditVariableUnscopedPermissions -permissionToCheck "VariableEditUnscoped" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.ViewVariableUnscopedPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.ViewVariableUnscopedPermissions -permissionToCheck "VariableViewUnscoped" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.LibraryVariableSetCreatePermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.LibraryVariableSetCreatePermissions -permissionToCheck "LibraryVariableSetCreate" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.LibraryVariableSetEditPermissions = @(Get-UserPermission -space $space
-project $project -userRole $userRole -projectPermissionList $projectPermission.LibraryVariableSetEditPermissions -permissionToCheck "LibraryVariableSetEdit" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.LibraryVariableSetDeletePermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.LibraryVariableSetDeletePermissions -permissionToCheck "LibraryVariableSetDelete" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.LibraryVariableSetViewPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.LibraryVariableSetViewPermissions -permissionToCheck "LibraryVariableSetView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.CreateReleasePermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.CreateReleasePermissions -permissionToCheck "ReleaseCreate" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.ReleaseViewPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.ReleaseViewPermissions -permissionToCheck "ReleaseView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.DeployReleasePermissions = @(Get-UserPermission -space $space 
-project $project -userRole $userRole -projectPermissionList $projectPermission.DeployReleasePermissions -permissionToCheck "DeploymentCreate" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.ViewDeploymentPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.ViewDeploymentPermissions -permissionToCheck "DeploymentView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.ArtifactViewPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.ArtifactViewPermissions -permissionToCheck "ArtifactView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.ManualInterventionViewPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.ManualInterventionViewPermissions -permissionToCheck "InterruptionView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.ManualInterventionApprovePermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.ManualInterventionApprovePermissions -permissionToCheck "InterruptionViewSubmitResponsible" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.RunbookViewPermissions = @(Get-UserPermission -space $space -project 
$project -userRole $userRole -projectPermissionList $projectPermission.RunbookViewPermissions -permissionToCheck "RunbookView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.RunbookEditPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.RunbookEditPermissions -permissionToCheck "RunbookEdit" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $false -projectEnvironmentList $projectEnvironmentList) $projectPermission.RunbookRunViewPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.RunbookRunViewPermissions -permissionToCheck "RunbookRunView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.RunbookRunCreatePermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.RunbookRunCreatePermissions -permissionToCheck "RunbookRunCreate" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.TenantCreatePermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.TenantCreatePermissions -permissionToCheck "TenantCreate" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.TenantEditPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList 
$projectPermission.TenantEditPermissions -permissionToCheck "TenantEdit" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.TenantDeletePermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.TenantDeletePermissions -permissionToCheck "TenantDelete" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) $projectPermission.TenantViewPermissions = @(Get-UserPermission -space $space -project $project -userRole $userRole -projectPermissionList $projectPermission.TenantViewPermissions -permissionToCheck "TenantView" -environmentList $environmentList -tenantList $tenantList -user $user -scopedRole $scopedRole -includeScope $true -projectEnvironmentList $projectEnvironmentList) } } } $permissionsReport += $projectPermission } } if (Test-Path $reportPath) { Remove-Item $reportPath } New-Item $reportPath -ItemType File Add-Content -Path $reportPath -Value "Space Name,Project Name,Permission Name,Display Name,Environment Scoping,Tenant Scoping" foreach ($permission in $permissionsReport) { Write-PermissionList -permissionName "View Project" -permissionList $permission.ViewPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Edit Project" -permissionList $permission.EditPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "View Variables" -permissionList $permission.ViewVariablesPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Edit Variables" -permissionList $permission.EditVariablesPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "View Variable Unscoped" -permissionList 
$permission.ViewVariableUnscopedPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Edit Variable Unscoped" -permissionList $permission.EditVariableUnscopedPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Library Variable Set View" -permissionList $permission.LibraryVariableSetViewPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Library Variable Set Edit" -permissionList $permission.LibraryVariableSetEditPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Library Variable Set Create" -permissionList $permission.LibraryVariableSetCreatePermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Library Variable Set Delete" -permissionList $permission.LibraryVariableSetDeletePermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Tenant View" -permissionList $permission.TenantViewPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Tenant Edit" -permissionList $permission.TenantEditPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Tenant Create" -permissionList $permission.TenantCreatePermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Tenant Delete" -permissionList $permission.TenantDeletePermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Runbook View" -permissionList $permission.RunbookViewPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Runbook Edit" -permissionList $permission.RunbookEditPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Runbook Run View" -permissionList $permission.RunbookRunViewPermissions -permission $permission
-reportPath $reportPath Write-PermissionList -permissionName "Runbook Run Create" -permissionList $permission.RunbookRunCreatePermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "View Deployment Process" -permissionList $permission.ViewDeploymentProcessPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Edit Deployment Process" -permissionList $permission.EditProcessPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "View Release" -permissionList $permission.ReleaseViewPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Create Release" -permissionList $permission.CreateReleasePermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Deployment View" -permissionList $permission.ViewDeploymentPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Deploy Release" -permissionList $permission.DeployReleasePermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Artifact View" -permissionList $permission.ArtifactViewPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Manual Intervention View" -permissionList $permission.ManualInterventionViewPermissions -permission $permission -reportPath $reportPath Write-PermissionList -permissionName "Manual Intervention Approve (when assigned to team)" -permissionList $permission.ManualInterventionApprovePermissions -permission $permission -reportPath $reportPath } ```
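The `New-OctopusFilteredList` function in the script above matches each item's `Name` (or `DisplayName`) against a comma-separated list of PowerShell `-like` wildcard filters. As a minimal sketch of that matching behavior in Python (the function and sample data below are illustrative, not part of the original script), the same semantics can be expressed with `fnmatch`:

```python
from fnmatch import fnmatch

def filter_items(items, filters):
    """Sketch of the script's New-OctopusFilteredList behavior: keep any
    item whose Name (or DisplayName) matches one of the comma-separated
    wildcard filters, case-insensitively like PowerShell's -like."""
    filtered = []
    patterns = [f.strip().lower() for f in filters.split(",") if f.strip()]
    for pattern in patterns:
        for item in items:
            name = (item.get("Name") or item.get("DisplayName") or "").lower()
            if fnmatch(name, pattern) and item not in filtered:
                filtered.append(item)
    return filtered

# Hypothetical space list shaped like the API response
spaces = [{"Name": "Default"}, {"Name": "Sandbox"}, {"Name": "Production"}]
print(filter_items(spaces, "Def*,Prod*"))  # keeps Default and Production
```

Passing `*` as the filter keeps every item, which mirrors running the report against all spaces, users, or projects.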
# Project Release Deployment Targets Report

Source: https://octopus.com/docs/octopus-rest-api/examples/reports/project-release-deployment-targets-report.md

The Octopus Web Portal allows you to see which deployments have gone out to a specific deployment target, but it doesn't provide a list of deployment targets for each deployment. This script demonstrates how to generate a report of the deployment targets for a specific release version of a project.

:::figure
![Sample project release deployment target report](/docs/img/octopus-rest-api/examples/reports/images/project-release-deployment-target-report.png)
:::

**Please note:** The report is generated as a CSV file; formatting was added to the screenshot to make it easier to read.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Report Path
- Space Name
- Project Name
- Release Version
PowerShell (REST API) ```powershell $octopusUrl = "https://your-octopus-url" $octopusApiKey = "API-YOUR-KEY" $reportPath = "./Report.csv" $spaceName = "Default" $projectName = "Hello World" $releaseVersion = "3.0.6" $cachedResults = @{} function Write-OctopusVerbose { param($message) Write-Host $message } function Write-OctopusInformation { param($message) Write-Host $message } function Write-OctopusSuccess { param($message) Write-Host $message } function Write-OctopusWarning { param($message) Write-Warning "$message" } function Write-OctopusCritical { param ($message) Write-Error "$message" } function Invoke-OctopusApi { param ( $octopusUrl, $endPoint, $spaceId, $apiKey, $method, $item, $ignoreCache ) $octopusUrlToUse = $OctopusUrl if ($OctopusUrl.EndsWith("/")) { $octopusUrlToUse = $OctopusUrl.Substring(0, $OctopusUrl.Length - 1) } if ([string]::IsNullOrWhiteSpace($SpaceId)) { $url = "$octopusUrlToUse/api/$EndPoint" } else { $url = "$octopusUrlToUse/api/$spaceId/$EndPoint" } try { if ($null -ne $item) { $body = $item | ConvertTo-Json -Depth 10 Write-OctopusVerbose $body Write-OctopusInformation "Invoking $method $url" return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -Body $body -ContentType 'application/json; charset=utf-8' } if (($null -eq $ignoreCache -or $ignoreCache -eq $false) -and $method.ToUpper().Trim() -eq "GET") { Write-OctopusVerbose "Checking to see if $url is already in the cache" if ($cachedResults.ContainsKey($url) -eq $true) { Write-OctopusVerbose "$url is already in the cache, returning the result" return $cachedResults[$url] } } else { Write-OctopusVerbose "Ignoring cache." 
} Write-OctopusVerbose "No data to post or put, calling bog standard Invoke-RestMethod for $url" $result = Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -ContentType 'application/json; charset=utf-8' if ($cachedResults.ContainsKey($url) -eq $true) { $cachedResults.Remove($url) } Write-OctopusVerbose "Adding $url to the cache" $cachedResults.add($url, $result) return $result } catch { if ($null -ne $_.Exception.Response) { if ($_.Exception.Response.StatusCode -eq 401) { Write-OctopusCritical "Unauthorized error returned from $url, please verify API key and try again" } elseif ($_.Exception.Response.statusCode -eq 403) { Write-OctopusCritical "Forbidden error returned from $url, please verify API key and try again" } else { Write-OctopusVerbose -Message "Error calling $url $($_.Exception.Message) StatusCode: $($_.Exception.Response.StatusCode )" } } else { Write-OctopusVerbose $_.Exception } } Throw "There was an error calling the Octopus API please check the log for more details" } function Get-OctopusItemList { param( $itemType, $endpoint, $spaceId, $octopusUrl, $octopusApiKey ) if ($null -ne $spaceId) { Write-OctopusVerbose "Pulling back all the $itemType in $spaceId" } else { Write-OctopusVerbose "Pulling back all the $itemType for the entire instance" } if ($endPoint -match "\?+") { $endpointWithParams = "$($endPoint)&skip=0&take=10000" } else { $endpointWithParams = "$($endPoint)?skip=0&take=10000" } $itemList = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint $endpointWithParams -spaceId $spaceId -apiKey $octopusApiKey -method "GET" if ($itemList -is [array]) { Write-OctopusVerbose "Found $($itemList.Length) $itemType." return ,$itemList } else { Write-OctopusVerbose "Found $($itemList.Items.Length) $itemType." 
return ,$itemList.Items } } function Get-OctopusItemByName { param ( $itemType, $itemName, $endPoint, $spaceId, $octopusUrl, $octopusApiKey ) $itemList = Get-OctopusItemList -endpoint "$($endpoint)?partialName=$([uri]::EscapeDataString($itemName))" -itemType $itemType -spaceId $spaceId -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $filteredItem = $itemList | Where-Object { $_.Name.ToLower().Trim() -eq $itemName.ToLower().Trim() } if ($null -eq $filteredItem) { Write-OctopusInformation "Unable to find the $itemType $itemName" exit 1 } return $filteredItem } $space = Get-OctopusItemByName -itemType "Space" -endPoint "spaces" -itemName $spaceName -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $project = Get-OctopusItemByName -itemType "Project" -endPoint "projects" -itemName $projectName -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $environmentList = Get-OctopusItemList -itemType "Environments" -endpoint "environments" -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $deploymentTargetList = Get-OctopusItemList -itemType "DeploymentTargets" -endpoint "machines" -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $releaseList = Get-OctopusItemList -itemType "Releases" -endpoint "projects/$($project.Id)/releases?searchByVersion=$($releaseVersion)" -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey $release = $releaseList | Where-Object { $_.Version -eq $releaseVersion } if ($null -eq $release) { Write-OctopusInformation "Unable to find the release $releaseVersion for $projectName" exit 1 } $deployedToMachineList = @() $deploymentList = Get-OctopusItemList -itemType "Deployments" -endpoint "releases/$($release.Id)/deployments" -spaceId $($space.Id) -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey foreach ($deployment in $deploymentList) { if ($deployment.FailureEncountered -eq $true) { Write-OctopusInformation "The deployment 
$($deployment.Name) encountered a failure, assuming it wasn't successfully deployed. Moving onto next deployment." continue } $environment = $environmentList | Where-Object { $_.Id -eq $deployment.EnvironmentId } $deployedToMachine = @{ DeploymentName = $deployment.Name Environment = $environment DeployedToMachines = @() } foreach ($machineId in $deployment.DeployedToMachineIds) { $machine = $deploymentTargetList | Where-Object { $_.Id -eq $machineId } $deployedToMachine.DeployedToMachines += $machine } $deployedToMachineList += $deployedToMachine } if (Test-Path $reportPath) { Remove-Item $reportPath } New-Item $reportPath -ItemType File Add-Content -Path $reportPath -Value "Deployment Name,Environment Name,Machine Name,Machine Id" Foreach ($deployedToMachine in $deployedToMachineList) { foreach ($machine in $deployedToMachine.DeployedToMachines) { Add-Content -Path $reportPath -Value "$($deployedToMachine.DeploymentName),$($deployedToMachine.Environment.Name),$($machine.Name),$($machine.Id)" } } ```
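The heart of the report is the join performed at the end of the script: each successful deployment's `DeployedToMachineIds` is resolved to machine and environment names to produce one CSV row per machine. The following is a small Python sketch of that join using hypothetical sample data shaped like the API responses (the function name and data are illustrative only):

```python
def build_report_rows(deployments, environments, machines):
    """Join each successful deployment's DeployedToMachineIds to machine
    and environment names, mirroring the CSV rows the script writes."""
    env_by_id = {e["Id"]: e for e in environments}
    machine_by_id = {m["Id"]: m for m in machines}
    rows = []
    for deployment in deployments:
        if deployment.get("FailureEncountered"):
            continue  # the script skips deployments that encountered a failure
        env = env_by_id[deployment["EnvironmentId"]]
        for machine_id in deployment["DeployedToMachineIds"]:
            machine = machine_by_id[machine_id]
            rows.append(f'{deployment["Name"]},{env["Name"]},{machine["Name"]},{machine["Id"]}')
    return rows

# Hypothetical sample data
deployments = [
    {"Name": "Deploy to Dev", "EnvironmentId": "Environments-1",
     "FailureEncountered": False, "DeployedToMachineIds": ["Machines-1"]},
    {"Name": "Deploy to Test", "EnvironmentId": "Environments-2",
     "FailureEncountered": True, "DeployedToMachineIds": ["Machines-2"]},
]
environments = [{"Id": "Environments-1", "Name": "Development"},
                {"Id": "Environments-2", "Name": "Test"}]
machines = [{"Id": "Machines-1", "Name": "web-01"},
            {"Id": "Machines-2", "Name": "web-02"}]
print(build_report_rows(deployments, environments, machines))
```

Note that the failed "Deploy to Test" deployment produces no rows, matching the script's behavior of treating failed deployments as not successfully deployed.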
# Add a script step to a runbook

Source: https://octopus.com/docs/octopus-rest-api/examples/runbooks/add-script-step-to-runbook.md

This script demonstrates how to programmatically add a simple PowerShell script step to a runbook.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the runbook
- Source PowerShell script
- *Optional* Target role to run the script against

:::div{.hint}
**Note:** The source script provided to Octopus must be properly escaped.
:::

## Script
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "default"; var runbookName = "MyRunbook"; var stepName = "My new step"; var role = "target-role"; var scriptToRun = "Write-Host \"Hello World\" "; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get runbook var runbook = repositoryForSpace.Runbooks.FindOne(n => n.Name == runbookName); var processId = runbook.RunbookProcessId; var runbookProcess = repositoryForSpace.RunbookProcesses.Get(processId); // Check for existing step. if (runbookProcess.Steps.Any(s => s.Name == stepName)) { Console.WriteLine("Existing step present with same name, please check and try again"); return; } // Create PowerShell script step var step = new Octopus.Client.Model.DeploymentStepResource { Name = stepName, Condition = DeploymentStepCondition.Success, PackageRequirement = DeploymentStepPackageRequirement.LetOctopusDecide, StartTrigger = DeploymentStepStartTrigger.StartAfterPrevious }; var stepAction = new DeploymentActionResource { ActionType = "Octopus.Script", Condition = DeploymentActionCondition.Success, Name = stepName }; var runOnServer = false; if(!string.IsNullOrWhiteSpace(role)) { runOnServer = true; } // Add step action properties stepAction.Properties.Add("Octopus.Action.RunOnServer", new Octopus.Client.Model.PropertyValueResource(runOnServer.ToString())); stepAction.Properties.Add("Octopus.Action.Script.ScriptSource", new Octopus.Client.Model.PropertyValueResource("Inline")); 
stepAction.Properties.Add("Octopus.Action.Script.ScriptBody", new Octopus.Client.Model.PropertyValueResource(scriptToRun)); stepAction.Properties.Add("Octopus.Action.Script.Syntax", new Octopus.Client.Model.PropertyValueResource("PowerShell")); // optional target role if(!string.IsNullOrWhiteSpace(role)) { step.Properties.Add("Octopus.Action.TargetRoles", new Octopus.Client.Model.PropertyValueResource(role)); } // Add step to Actions step.Actions.Add(stepAction); // Add PowerShell script step to process runbookProcess.Steps.Add(step); // Update runbook process repositoryForSpace.RunbookProcesses.Modify(runbookProcess); } catch (Exception ex) { Console.WriteLine(ex.Message); Console.ReadLine(); return; } ```
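The C# example above builds the step as an `Octopus.Client` resource, but the same structure can be expressed as the plain JSON payload that ends up in the runbook process. The sketch below (an illustrative helper, not an official API) builds the equivalent step dictionary, using the same property names the C# code sets and mirroring its `RunOnServer` and target-role logic:

```python
def build_script_step(step_name, script_body, role=None):
    """Build a runbook process step payload mirroring the C# example's
    DeploymentStepResource/DeploymentActionResource construction."""
    # The C# sample sets RunOnServer to true only when a role is supplied;
    # this mirrors that behavior rather than changing it.
    run_on_server = bool(role)
    step = {
        "Name": step_name,
        "Condition": "Success",
        "PackageRequirement": "LetOctopusDecide",
        "StartTrigger": "StartAfterPrevious",
        "Properties": {},
        "Actions": [{
            "ActionType": "Octopus.Script",
            "Condition": "Success",
            "Name": step_name,
            "Properties": {
                "Octopus.Action.RunOnServer": str(run_on_server),
                "Octopus.Action.Script.ScriptSource": "Inline",
                "Octopus.Action.Script.ScriptBody": script_body,
                "Octopus.Action.Script.Syntax": "PowerShell",
            },
        }],
    }
    if role:
        # Target roles live on the step, not the action, as in the C# code
        step["Properties"]["Octopus.Action.TargetRoles"] = role
    return step

step = build_script_step("My new step", 'Write-Host "Hello World"', role="target-role")
```

The resulting dictionary could be appended to the `Steps` array of a runbook process fetched via the REST API and sent back with a `PUT`, which is what `RunbookProcesses.Modify` does on your behalf in the C# version.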
# Create and publish a new runbook snapshot

Source: https://octopus.com/docs/octopus-rest-api/examples/runbooks/create-and-publish-runbook.md

This script demonstrates how to programmatically create a new runbook snapshot and publish it for use by runbook consumers. If the runbook references any packages from the [Octopus built-in repository](/docs/packaging-applications/package-repositories/built-in-repository), the latest package versions will be included in the snapshot.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to work with
- Name of the project with the runbook
- Name of the runbook

## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "Default" $projectName = "MyProject" $runbookName = "MyRunbook" # Get space $spaces = Invoke-RestMethod -Uri "$octopusURL/api/spaces?partialName=$([uri]::EscapeDataString($spaceName))&skip=0&take=100" -Headers $header $space = $spaces.Items | Where-Object { $_.Name -eq $spaceName } # Get project $projects = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/projects?partialName=$([uri]::EscapeDataString($projectName))&skip=0&take=100" -Headers $header $project = $projects.Items | Where-Object { $_.Name -eq $projectName } # Get runbook $runbooks = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/runbooks?partialName=$([uri]::EscapeDataString($runbookName))&skip=0&take=100" -Headers $header $runbook = $runbooks.Items | Where-Object { $_.Name -eq $runbookName } # Get a runbook snapshot template $runbookSnapshotTemplate = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/runbookProcesses/$($runbook.RunbookProcessId)/runbookSnapshotTemplate" -Headers $header # Create a runbook snapshot $body = @{ ProjectId = $project.Id RunbookId = $runbook.Id Name = $runbookSnapshotTemplate.NextNameIncrement Notes = $null SelectedPackages = @() } # Include latest built-in feed packages foreach($package in $runbookSnapshotTemplate.Packages) { # Get latest package version $packages = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/feeds/$($package.FeedId)/packages/versions?packageId=$($package.PackageId)" -Headers $header $latestPackage = $packages.Items | Select-Object -First 1 $package = @{ ActionName = $package.ActionName Version = $latestPackage.Version PackageReferenceName = $package.PackageReferenceName } $body.SelectedPackages += $package } $body = $body | ConvertTo-Json -Depth 10 $runbookPublishedSnapshot = 
Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/runbookSnapshots?publish=true" -Body $body -Headers $header # Re-get runbook $runbook = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/runbooks/$($runbook.Id)" -Headers $header # Publish the snapshot $runbook.PublishedRunbookSnapshotId = $runbookPublishedSnapshot.Id Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/runbooks/$($runbook.Id)" -Body ($runbook | ConvertTo-Json -Depth 10) -Headers $header Write-Host "Published runbook snapshot: $($runbookPublishedSnapshot.Id) ($($runbookPublishedSnapshot.Name))" ```
PowerShell (Octopus.Client) ```powershell # Load assembly Add-Type -Path 'path:\to\Octopus.Client.dll' $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "Default" $projectName = "MyProject" $runbook = "MyRunbook" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey) $repository = New-Object Octopus.Client.OctopusRepository($endpoint) $client = New-Object Octopus.Client.OctopusClient($endpoint) # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project $project = $repositoryForSpace.Projects.FindByName($projectName) # Get runbook $runbook = $repositoryForSpace.Runbooks.FindByName($project, $runbook) # Get runbook snapshot template $runbookSnapshotTemplate = $repositoryForSpace.Runbooks.GetRunbookSnapshotTemplate($runbook) # Create a runbook snapshot $runbookSnapshot = New-Object Octopus.Client.Model.RunbookSnapshotResource $runbookSnapshot.ProjectId = $project.Id $runbookSnapshot.RunbookId = $runbook.Id $runbookSnapshot.Name = $runbookSnapshotTemplate.NextNameIncrement $runbookSnapshot.SpaceId = $space.Id # Add packages foreach ($package in $runbookSnapshotTemplate.Packages) { # Get the feed $feed = $repositoryForSpace.Feeds.Get($package.FeedId) $latestPackage = [Linq.Enumerable]::FirstOrDefault($repositoryForSpace.Feeds.GetVersions($feed, @($package.Id))) # Create selected package object $selectedPackage = New-Object Octopus.Client.Model.SelectedPackage -Property @{ ActionName = $package.ActionName Version = $latestPackage.Version PackageReferenceName = $package.PackageReferenceName } # Add to selected packages $runbookSnapshot.SelectedPackages.Add($selectedPackage) } #Publish snapshot $runbookSnapshot = $repositoryForSpace.RunbookSnapshots.Create($runbookSnapshot) $runbook.PublishedRunbookSnapshotId = $runbookSnapshot.Id $repositoryForSpace.Runbooks.Modify($runbook) ```
C# ```csharp #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; using System; using System.Linq; // If using .net Core, be sure to add the NuGet package of System.Security.Permissions var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "Default"; var projectName = "MyProject"; var runbookName = "MyRunbook"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); // Get space var space = repository.Spaces.FindByName(spaceName); var spaceRepository = client.ForSpace(space); // Get project var project = spaceRepository.Projects.FindByName(projectName); // Get runbook var runbook = spaceRepository.Runbooks.FindByName(project, runbookName); // Get runbook snapshot template var runbookSnapshotTemplate = spaceRepository.Runbooks.GetRunbookSnapshotTemplate(runbook); // Create runbook snapshot var runbookSnapshot = new Octopus.Client.Model.RunbookSnapshotResource(project.Id); runbookSnapshot.RunbookId = runbook.Id; runbookSnapshot.Name = runbookSnapshotTemplate.NextNameIncrement; runbookSnapshot.SpaceId = space.Id; // Add any referenced packages foreach (var package in runbookSnapshotTemplate.Packages) { // Get the feed var feed = spaceRepository.Feeds.Get(package.FeedId); var latestPackage = spaceRepository.Feeds.GetVersions(feed, (new string[] { package.PackageId })); // Create new selected package object var selectedPackage = new Octopus.Client.Model.SelectedPackage(package.ActionName, package.PackageReferenceName, latestPackage[0].Version); // Add to runbook snapshot runbookSnapshot.SelectedPackages.Add(selectedPackage); } // Publish snapshot runbookSnapshot = spaceRepository.RunbookSnapshots.Create(runbookSnapshot); runbook.PublishedRunbookSnapshotId = runbookSnapshot.Id; spaceRepository.Runbooks.Modify(runbook); ```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = '&skip=' if '?' in uri else '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Non-paginated resources (such as the runbook snapshot template) are returned as-is
    if 'Items' not in results:
        return results

    # Store results
    items += results['Items']

    # Check to see if there are more results
    if len(results['Items']) > 0 and len(results['Items']) == results['ItemsPerPage']:
        skip_count += results['ItemsPerPage']
        items += get_octopus_resource(uri, headers, skip_count)

    # Return results
    return items

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = 'Default'
project_name = 'MyProject'
runbook_name = 'MyRunbook'

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get project
uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)
project = next((x for x in projects if x['Name'] == project_name), None)

# Get project runbooks
uri = '{0}/api/{1}/projects/{2}/runbooks'.format(octopus_server_uri, space['Id'], project['Id'])
runbooks = get_octopus_resource(uri, headers)
runbook = next((x for x in runbooks if x['Name'] == runbook_name), None)

# Get runbook snapshot template
uri = '{0}/api/{1}/runbookprocesses/{2}/runbooksnapshottemplate'.format(octopus_server_uri, space['Id'], runbook['RunbookProcessId'])
runbook_snapshot_template = get_octopus_resource(uri, headers)

# Create runbook snapshot
runbook_snapshot_json = {
    'ProjectId': project['Id'],
    'RunbookId': runbook['Id'],
    'Name': runbook_snapshot_template['NextNameIncrement'],
    'SelectedPackages': []
}

# Include any referenced packages
for package in runbook_snapshot_template['Packages']:
    uri = '{0}/api/{1}/feeds/{2}/packages/versions?packageId={3}'.format(octopus_server_uri, space['Id'], package['FeedId'], package['PackageId'])
    packages = get_octopus_resource(uri, headers)
    latest_package = packages[0]  # get the latest one
    selected_package = {
        'ActionName': package['ActionName'],
        'Version': latest_package['Version'],
        'PackageReferenceName': package['PackageReferenceName']
    }

    runbook_snapshot_json['SelectedPackages'].append(selected_package)

# Create snapshot
uri = '{0}/api/{1}/runbooksnapshots'.format(octopus_server_uri, space['Id'])
response = requests.post(uri, headers=headers, json=runbook_snapshot_json)
response.raise_for_status()

# Get results of API call
runbook_snapshot = json.loads(response.content.decode('utf-8'))

# Update the runbook object
runbook['PublishedRunbookSnapshotId'] = runbook_snapshot['Id']
uri = '{0}/api/{1}/runbooks/{2}'.format(octopus_server_uri, space['Id'], runbook['Id'])
response = requests.put(uri, headers=headers, json=runbook)
response.raise_for_status()
```
Go

```go
package main

import (
	"bytes"
	"encoding/json"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

type runbooksnapshot struct {
	ProjectID        string
	RunbookID        string
	Name             string
	Notes            string
	SelectedPackages []octopusdeploy.SelectedPackage
}

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YourAPI"
	spaceName := "Default"
	projectName := "MyProject"
	runbookName := "MyRunbook"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get project
	project := GetProject(apiURL, APIKey, space, projectName)

	// Get runbook
	runbook := GetRunbook(client, project, runbookName)

	// Get runbook snapshot template
	runbookSnapshotTemplate := GetRunbookSnapshotTemplate(apiURL, APIKey, space, runbook)

	// Create runbook snapshot object
	runbookSnapshot := runbooksnapshot{
		ProjectID: project.ID,
		RunbookID: runbook.ID,
		Name:      runbookSnapshotTemplate["NextNameIncrement"].(string),
	}

	runbookPackages := runbookSnapshotTemplate["Packages"].([]interface{})
	for i := 0; i < len(runbookPackages); i++ {
		runbookPackage := runbookPackages[i].(map[string]interface{})
		version := GetPackageVersion(apiURL, APIKey, space, runbookPackage["FeedId"].(string), runbookPackage["PackageId"].(string))
		selectedPackage := octopusdeploy.SelectedPackage{
			ActionName:           runbookPackage["ActionName"].(string),
			Version:              version,
			PackageReferenceName: runbookPackage["PackageReferenceName"].(string),
		}
		runbookSnapshot.SelectedPackages = append(runbookSnapshot.SelectedPackages, selectedPackage)
	}

	// Create new snapshot
	runbookSnapshotId := CreateRunbookSnapshot(apiURL, APIKey, space, runbookSnapshot)

	// Publish snapshot
	runbook.PublishedRunbookSnapshotID = runbookSnapshotId
	client.Runbooks.Update(runbook)
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}

func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project {
	// Create client
	client := octopusAuth(octopusURL, APIKey, space.ID)
	projectsQuery := octopusdeploy.ProjectsQuery{
		Name: projectName,
	}

	// Get specific project object
	projects, err := client.Projects.Get(projectsQuery)
	if err != nil {
		log.Println(err)
	}

	for _, project := range projects.Items {
		if project.Name == projectName {
			return project
		}
	}
	return nil
}

func GetRunbook(client *octopusdeploy.Client, project *octopusdeploy.Project, runbookName string) *octopusdeploy.Runbook {
	// Get runbook
	runbooks, err := client.Runbooks.GetAll()
	if err != nil {
		log.Println(err)
	}

	for i := 0; i < len(runbooks); i++ {
		if runbooks[i].ProjectID == project.ID && runbooks[i].Name == runbookName {
			return runbooks[i]
		}
	}
	return nil
}

func GetRunbookSnapshotTemplate(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, runbook *octopusdeploy.Runbook) map[string]interface{} {
	// Query API for the runbook snapshot template
	templateApi := octopusURL.String() + "/api/" + space.ID + "/runbookprocesses/" + runbook.RunbookProcessID + "/runbooksnapshottemplate"

	// Create http client
	httpClient := &http.Client{}

	// Perform request
	request, _ := http.NewRequest("GET", templateApi, nil)
	request.Header.Set("X-Octopus-ApiKey", APIKey)
	response, err := httpClient.Do(request)
	if err != nil {
		log.Println(err)
	}
	defer response.Body.Close()

	responseData, err := ioutil.ReadAll(response.Body)
	if err != nil {
		log.Println(err)
	}

	var f interface{}
	jsonErr := json.Unmarshal(responseData, &f)
	if jsonErr != nil {
		log.Println(jsonErr)
	}
	template := f.(map[string]interface{})

	// Return the template
	return template
}

func GetPackageVersion(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, feedId string, packageId string) string {
	packageApi := octopusURL.String() + "/api/" + space.ID + "/feeds/" + feedId + "/packages/versions?packageId=" + packageId

	// Create http client
	httpClient := &http.Client{}

	// Perform request
	request, _ := http.NewRequest("GET", packageApi, nil)
	request.Header.Set("X-Octopus-ApiKey", APIKey)
	response, err := httpClient.Do(request)
	if err != nil {
		log.Println(err)
	}
	defer response.Body.Close()

	responseData, err := ioutil.ReadAll(response.Body)
	if err != nil {
		log.Println(err)
	}

	var f interface{}
	jsonErr := json.Unmarshal(responseData, &f)
	if jsonErr != nil {
		log.Println(jsonErr)
	}

	// Map the returned data
	packageItems := f.(map[string]interface{})

	// Returns the list of items, translate it to a map
	returnedItems := packageItems["Items"].([]interface{})

	// We only need the most recent version
	mostRecentPackageVersion := returnedItems[0].(map[string]interface{})
	return mostRecentPackageVersion["Version"].(string)
}

func CreateRunbookSnapshot(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, snapshot runbooksnapshot) string {
	snapshotApi := octopusURL.String() + "/api/" + space.ID + "/runbooksnapshots"

	// Create http client
	httpClient := &http.Client{}

	// Marshal the snapshot object
	snapshotJson, err := json.Marshal(snapshot)
	if err != nil {
		log.Println(err)
	}

	// Make post request
	request, err := http.NewRequest("POST", snapshotApi, bytes.NewBuffer(snapshotJson))
	if err != nil {
		log.Println(err)
	}
	request.Header.Set("X-Octopus-ApiKey", APIKey)
	request.Header.Set("Content-Type", "application/json")

	// Execute post and get response
	response, err := httpClient.Do(request)
	if err != nil {
		log.Println(err)
	}
	defer response.Body.Close()

	responseData, err := ioutil.ReadAll(response.Body)
	if err != nil {
		log.Println(err)
	}

	var f interface{}
	jsonErr := json.Unmarshal(responseData, &f)
	if jsonErr != nil {
		log.Println(jsonErr)
	}
	runbookMap := f.(map[string]interface{})
	return runbookMap["Id"].(string)
}
```
# Create a runbook

Source: https://octopus.com/docs/octopus-rest-api/examples/runbooks/create-runbook.md

This script demonstrates how to programmatically create a runbook.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the project
- Name of the runbook to create

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "default"
$projectName = "MyProject"
$runbookName = "MyRunbook"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get project
$project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName}

# Create json payload
$jsonPayload = @{
    Name = $runbookName
    ProjectId = $project.Id
    EnvironmentScope = "All"
    RunRetentionPolicy = @{
        QuantityToKeep = 100
        ShouldKeepForever = $false
    }
}

# Create the runbook
Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/runbooks" -Body ($jsonPayload | ConvertTo-Json -Depth 10) -Headers $header
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$projectName = "MyProject"
$runbookName = "MyRunbook"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try
{
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get project
    $project = $repositoryForSpace.Projects.FindByName($projectName)

    # Create runbook retention object
    $runbookRetentionPolicy = New-Object Octopus.Client.Model.RunbookRetentionPeriod
    $runbookRetentionPolicy.QuantityToKeep = 100
    $runbookRetentionPolicy.ShouldKeepForever = $false

    # Create runbook object
    $runbook = New-Object Octopus.Client.Model.RunbookResource
    $runbook.Name = $runbookName
    $runbook.ProjectId = $project.Id
    $runbook.RunRetentionPolicy = $runbookRetentionPolicy

    # Save
    $repositoryForSpace.Runbooks.Create($runbook)
}
catch
{
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using System;
using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string spaceName = "default";
string projectName = "MyProject";
string runbookName = "MyRunbook";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get project
    var project = repositoryForSpace.Projects.FindByName(projectName);

    // Create runbook retention object
    var runbookRetentionPolicy = new Octopus.Client.Model.RunbookRetentionPeriod();
    runbookRetentionPolicy.QuantityToKeep = 100;
    runbookRetentionPolicy.ShouldKeepForever = false;

    // Create runbook object
    var runbook = new Octopus.Client.Model.RunbookResource();
    runbook.Name = runbookName;
    runbook.ProjectId = project.Id;
    runbook.RunRetentionPolicy = runbookRetentionPolicy;

    // Save
    repositoryForSpace.Runbooks.Create(runbook);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Console.ReadLine();
    return;
}
```
Python3

```python
import json
import requests

octopus_server_uri = 'https://your-octopus-url/api'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

def get_octopus_resource(uri):
    response = requests.get(uri, headers=headers)
    response.raise_for_status()
    return json.loads(response.content.decode('utf-8'))

def get_by_name(uri, name):
    resources = get_octopus_resource(uri)
    return next((x for x in resources if x['Name'] == name), None)

space_name = 'Default'
project_name = 'Your Project Name'
runbook_name = 'Your new runbook name'

space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name)
project = get_by_name('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id']), project_name)

runbook = {
    'Id': None,
    'Name': runbook_name,
    'ProjectId': project['Id'],
    'EnvironmentScope': 'All',
    'RunRetentionPolicy': {
        'QuantityToKeep': 100,
        'ShouldKeepForever': False
    }
}

uri = '{0}/{1}/runbooks'.format(octopus_server_uri, space['Id'])
response = requests.post(uri, headers=headers, json=runbook)
response.raise_for_status()
```
Go

```go
package main

import (
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	projectName := "MyProject"
	runbookName := "MyRunbook"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get project
	project := GetProject(apiURL, APIKey, space, projectName)

	// Create new runbook
	runbook := octopusdeploy.NewRunbook(runbookName, project.ID)
	runbook.EnvironmentScope = "All"
	runbook.RunRetentionPolicy.QuantityToKeep = 100
	runbook.RunRetentionPolicy.ShouldKeepForever = false

	client.Runbooks.Add(runbook)
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}

func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project {
	// Create client
	client := octopusAuth(octopusURL, APIKey, space.ID)
	projectsQuery := octopusdeploy.ProjectsQuery{
		Name: projectName,
	}

	// Get specific project object
	projects, err := client.Projects.Get(projectsQuery)
	if err != nil {
		log.Println(err)
	}

	for _, project := range projects.Items {
		if project.Name == projectName {
			return project
		}
	}
	return nil
}
```
Java

```java
import com.octopus.openapi.model.RunbookEnvironmentScope;
import com.octopus.sdk.Repository;
import com.octopus.sdk.domain.Project;
import com.octopus.sdk.domain.Runbook;
import com.octopus.sdk.domain.Space;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;
import com.octopus.sdk.model.runbook.RunbookResource;
import com.octopus.sdk.model.runbook.RunbookRetentionPeriod;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;
import java.util.Optional;

public class CreateRunbook {

  static final String octopusServerUrl = "http://localhost:8065";
  // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();
    final Repository repo = new Repository(client);

    final Optional<Space> space = repo.spaces().getByName("TheSpaceName");
    if (!space.isPresent()) {
      System.out.println("No space named 'TheSpaceName' exists on server");
      return;
    }

    final Optional<Project> project = space.get().projects().getByName("TheProjectName");
    if (!project.isPresent()) {
      throw new IllegalArgumentException("No project named 'TheProjectName' exists on server");
    }

    final RunbookResource newRunbook = new RunbookResource("TheRunbook");
    final RunbookRetentionPeriod retentionPolicy = new RunbookRetentionPeriod();
    retentionPolicy.setQuantityToKeep(100);
    retentionPolicy.setShouldKeepForever(false);

    newRunbook.projectId(project.get().getProperties().getId());
    newRunbook.environmentScope(RunbookEnvironmentScope.ALL);
    newRunbook.runRetentionPolicy(retentionPolicy);

    final Runbook runbook = space.get().runbooks().create(newRunbook);
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    final OctopusClient client = OctopusClientFactory.createClient(connectData);
    return client;
  }
}
```
# Create a new scheduled runbook trigger

Source: https://octopus.com/docs/octopus-rest-api/examples/runbooks/create-scheduled-runbook-trigger.md

This script demonstrates how to programmatically create a new [scheduled runbook trigger](/docs/runbooks/scheduled-runbook-trigger). The trigger will run once a day at the time specified, on the days specified, in the timezone chosen (the default is `GMT Standard Time`).

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to work with
- Name of the project with the runbook
- Name of the runbook
- Name of the scheduled trigger
- Description of the scheduled trigger
- List of environments to run the runbook in
- Timezone for the schedule
- List of the days of week to run the trigger on
- The time to run the trigger each day, provided in the format `yyyy-MM-ddTHH:mm:ss.fffZ`. For example, `2021-07-22T09:00:00.000Z`.

## Script
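All of the scripts below take the daily start time as a hard-coded string in the `yyyy-MM-ddTHH:mm:ss.fffZ` format described above. If you would rather generate that value programmatically, a minimal Python sketch (the helper name is ours, not part of the documented scripts):

```python
from datetime import datetime

def octopus_start_time(dt: datetime) -> str:
    """Format a datetime as yyyy-MM-ddTHH:mm:ss.fffZ (milliseconds plus a trailing Z)."""
    # %f yields 6 microsecond digits; keep the first 3 for milliseconds
    return dt.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'

print(octopus_start_time(datetime(2021, 7, 22, 9, 0, 0)))  # 2021-07-22T09:00:00.000Z
```

The resulting string can be assigned directly to the trigger start-time variable in any of the scripts below.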
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"
$projectName = "MyProject"
$runbookName = "MyRunbook"

# Specify runbook trigger name
$runbookTriggerName = "RunbookTriggerName"

# Specify runbook trigger description
$runbookTriggerDescription = "RunbookTriggerDescription"

# Specify which environments the runbook should run in
$runbookEnvironmentNames = @("Development")

# What timezone do you want the trigger scheduled for
$runbookTriggerTimezone = "GMT Standard Time"

# Remove any days you don't want to run the trigger on
$runbookTriggerDaysOfWeekToRun = @("Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday")

# Specify the start time to run the runbook each day in the format yyyy-MM-ddTHH:mm:ss.fffZ
# See https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings?view=netframework-4.8
$runbookTriggerStartTime = "2021-07-22T09:00:00.000Z"

# Script variables
$runbookEnvironmentIds = @()

# Get space
$spaces = Invoke-RestMethod -Uri "$octopusURL/api/spaces?partialName=$([uri]::EscapeDataString($spaceName))&skip=0&take=100" -Headers $header
$space = $spaces.Items | Where-Object { $_.Name -eq $spaceName }

# Get project
$projects = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/projects?partialName=$([uri]::EscapeDataString($projectName))&skip=0&take=100" -Headers $header
$project = $projects.Items | Where-Object { $_.Name -eq $projectName }

# Get runbook
$runbooks = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/runbooks?partialName=$([uri]::EscapeDataString($runbookName))&skip=0&take=100" -Headers $header
$runbook = $runbooks.Items | Where-Object { $_.Name -eq $runbookName }

# Get environments for runbook trigger
foreach($runbookEnvironmentName in $runbookEnvironmentNames) {
    $environments = Invoke-RestMethod -Uri "$octopusURL/api/$($space.Id)/environments?partialName=$([uri]::EscapeDataString($runbookEnvironmentName))&skip=0&take=100" -Headers $header
    $environment = $environments.Items | Where-Object { $_.Name -eq $runbookEnvironmentName } | Select-Object -First 1
    $runbookEnvironmentIds += $environment.Id
}

# Create a runbook trigger
$body = @{
    ProjectId = $project.Id;
    Name = $runbookTriggerName;
    Description = $runbookTriggerDescription;
    IsDisabled = $False;
    Filter = @{
        Timezone = $runbookTriggerTimezone;
        FilterType = "OnceDailySchedule";
        DaysOfWeek = @($runbookTriggerDaysOfWeekToRun);
        StartTime = $runbookTriggerStartTime;
    };
    Action = @{
        ActionType = "RunRunbook";
        RunbookId = $runbook.Id;
        EnvironmentIds = @($runbookEnvironmentIds);
    };
}

# Convert body to JSON
$body = $body | ConvertTo-Json -Depth 10

# Create runbook scheduled trigger
$runbookScheduledTrigger = Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/projecttriggers" -Body $body -Headers $header
Write-Host "Created runbook trigger: $($runbookScheduledTrigger.Id) ($runbookTriggerName)"
```
PowerShell (Octopus.Client)

```powershell
# You can get this dll from your Octopus Server/Tentacle installation directory or from
# https://www.nuget.org/packages/Octopus.Client/

# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "Default"
$projectName = "MyProject"
$runbookName = "MyRunbook"

# Specify runbook trigger name
$runbookTriggerName = "RunbookTriggerName"

# Specify runbook trigger description
$runbookTriggerDescription = "RunbookTriggerDescription"

# Specify which environments the runbook should run in
$runbookEnvironmentNames = @("Development")

# What timezone do you want the trigger scheduled for
$runbookTriggerTimezone = "GMT Standard Time"

# Remove any days you don't want to run the trigger on
$runbookTriggerDaysOfWeekToRun = [Octopus.Client.Model.DaysOfWeek]::Monday -bor [Octopus.Client.Model.DaysOfWeek]::Tuesday -bor [Octopus.Client.Model.DaysOfWeek]::Wednesday -bor [Octopus.Client.Model.DaysOfWeek]::Thursday -bor [Octopus.Client.Model.DaysOfWeek]::Friday -bor [Octopus.Client.Model.DaysOfWeek]::Saturday -bor [Octopus.Client.Model.DaysOfWeek]::Sunday

# Specify the start time to run the runbook each day in the format yyyy-MM-ddTHH:mm:ss.fffZ
# See https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings?view=netframework-4.8
$runbookTriggerStartTime = "2021-07-22T09:00:00.000Z"

# Script variables
$runbookEnvironmentIds = @()

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

# Get space
$space = $repository.Spaces.FindByName($spaceName)
$repositoryForSpace = $client.ForSpace($space)

# Get project
$project = $repositoryForSpace.Projects.FindByName($projectName)

# Get runbook
$runbook = $repositoryForSpace.Runbooks.FindByName($project, $runbookName)

foreach($environmentName in $runbookEnvironmentNames) {
    $environment = $repositoryForSpace.Environments.FindByName($environmentName)
    $runbookEnvironmentIds += $environment.Id
}

$runbookScheduledTrigger = New-Object Octopus.Client.Model.ProjectTriggerResource

$runbookScheduledTriggerFilter = New-Object Octopus.Client.Model.Triggers.ScheduledTriggers.OnceDailyScheduledTriggerFilterResource
$runbookScheduledTriggerFilter.Timezone = $runbookTriggerTimezone
$runbookScheduledTriggerFilter.StartTime = (Get-Date -Date $runbookTriggerStartTime)
$runbookScheduledTriggerFilter.DaysOfWeek = $runbookTriggerDaysOfWeekToRun

$runbookScheduledTriggerAction = New-Object Octopus.Client.Model.Triggers.RunRunbookActionResource
$runbookScheduledTriggerAction.RunbookId = $runbook.Id
$runbookScheduledTriggerAction.EnvironmentIds = New-Object Octopus.Client.Model.ReferenceCollection($runbookEnvironmentIds)

$runbookScheduledTrigger.ProjectId = $project.Id
$runbookScheduledTrigger.Name = $runbookTriggerName
$runbookScheduledTrigger.Description = $runbookTriggerDescription
$runbookScheduledTrigger.IsDisabled = $False
$runbookScheduledTrigger.Filter = $runbookScheduledTriggerFilter
$runbookScheduledTrigger.Action = $runbookScheduledTriggerAction

$createdRunbookTrigger = $repositoryForSpace.ProjectTriggers.Create($runbookScheduledTrigger)
Write-Host "Created runbook trigger: $($createdRunbookTrigger.Id) ($runbookTriggerName)"
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using System;
using System.Collections.Generic;
using Octopus.Client;
using Octopus.Client.Model;
using Octopus.Client.Model.Triggers;
using Octopus.Client.Model.Triggers.ScheduledTriggers;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";

// Define working variables
string spaceName = "default";
string projectName = "MyProject";
string runbookName = "MyRunbook";

// Specify runbook trigger name
string runbookTriggerName = "RunbookTriggerName";

// Specify runbook trigger description
string runbookTriggerDescription = "RunbookTriggerDescription";

// Specify which environments the runbook should run in
List<string> runbookEnvironmentNames = new List<string>() { "Development" };

// What timezone do you want the trigger scheduled for
string runbookTriggerTimezone = "GMT Standard Time";

// Remove any days you don't want to run the trigger on
// Bitwise operator to add all days by default
Octopus.Client.Model.DaysOfWeek runbookTriggerDaysOfWeekToRun = DaysOfWeek.Monday | DaysOfWeek.Tuesday | DaysOfWeek.Wednesday | DaysOfWeek.Thursday | DaysOfWeek.Friday | DaysOfWeek.Saturday | DaysOfWeek.Sunday;

// Specify the start time to run the runbook each day in the format yyyy-MM-ddTHH:mm:ss.fffZ
// See https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings?view=netframework-4.8
string runbookTriggerStartTime = "2021-07-22T09:00:00.000Z";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get project
    var project = repositoryForSpace.Projects.FindByName(projectName);

    // Get runbook
    var runbook = repositoryForSpace.Runbooks.FindByName(project, runbookName);

    // Get environments for runbook trigger
    List<string> environmentIds = new List<string>();
    foreach (var environmentName in runbookEnvironmentNames)
    {
        var environment = repositoryForSpace.Environments.FindByName(environmentName);
        environmentIds.Add(environment.Id);
    }

    // Create scheduled trigger
    ProjectTriggerResource runbookScheduledTrigger = new ProjectTriggerResource
    {
        ProjectId = project.Id,
        Name = runbookTriggerName,
        Description = runbookTriggerDescription,
        IsDisabled = false,
        Filter = new OnceDailyScheduledTriggerFilterResource()
        {
            Timezone = runbookTriggerTimezone,
            StartTime = DateTime.Parse(runbookTriggerStartTime),
            DaysOfWeek = runbookTriggerDaysOfWeekToRun
        },
        Action = new Octopus.Client.Model.Triggers.RunRunbookActionResource
        {
            RunbookId = runbook.Id,
            EnvironmentIds = new ReferenceCollection(environmentIds)
        }
    };

    // Create runbook scheduled trigger
    var createdRunbookTrigger = repositoryForSpace.ProjectTriggers.Create(runbookScheduledTrigger);
    Console.WriteLine("Created runbook trigger: {0} ({1})", createdRunbookTrigger.Id, runbookTriggerName);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Console.ReadLine();
    return;
}
```
Python3

```python
import json
import requests

octopus_server_uri = 'https://your-octopus-url/api'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

def get_octopus_resource(uri):
    response = requests.get(uri, headers=headers)
    response.raise_for_status()
    return json.loads(response.content.decode('utf-8'))

def get_by_name(uri, name):
    resources = get_octopus_resource(uri)
    return next((x for x in resources if x['Name'] == name), None)

def get_item_by_name(uri, name):
    resources = get_octopus_resource(uri)
    return next((x for x in resources['Items'] if x['Name'] == name), None)

# Define variables
space_name = 'Default'
project_name = 'Your Project Name'
runbook_name = 'Your runbook name'
runbook_trigger_name = 'Your runbook trigger name'
runbook_trigger_description = 'Your runbook trigger description'
runbook_trigger_environments = ['Development', 'Test']
runbook_trigger_timezone = 'GMT Standard Time'
runbook_trigger_schedule_days_of_week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
runbook_trigger_schedule_start_time = '2021-07-22T09:00:00.000Z'

space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name)
project = get_by_name('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id']), project_name)
runbook = get_item_by_name('{0}/{1}/projects/{2}/runbooks'.format(octopus_server_uri, space['Id'], project['Id']), runbook_name)

environments = get_octopus_resource('{0}/{1}/environments/all'.format(octopus_server_uri, space['Id']))
runbook_environment_ids = [environment['Id'] for environment in environments if environment['Name'] in runbook_trigger_environments]

scheduled_runbook_trigger = {
    'ProjectId': project['Id'],
    'Name': runbook_trigger_name,
    'Description': runbook_trigger_description,
    'IsDisabled': False,
    'Filter': {
        'Timezone': runbook_trigger_timezone,
        'FilterType': 'OnceDailySchedule',
        'DaysOfWeek': runbook_trigger_schedule_days_of_week,
        'StartTime': runbook_trigger_schedule_start_time
    },
    'Action': {
        'ActionType': 'RunRunbook',
        'RunbookId': runbook['Id'],
        'EnvironmentIds': runbook_environment_ids
    }
}

uri = '{0}/{1}/projecttriggers'.format(octopus_server_uri, space['Id'])
response = requests.post(uri, headers=headers, json=scheduled_runbook_trigger)
response.raise_for_status()
```
# Publish a runbook snapshot

Source: https://octopus.com/docs/octopus-rest-api/examples/runbooks/publish-runbook.md

This script demonstrates how to programmatically publish an *existing* runbook snapshot. To learn how to create a new snapshot and publish it, see [this example](/docs/octopus-rest-api/examples/runbooks/create-and-publish-runbook).

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to work with
- Name of the project with the runbook
- Name of the runbook
- Name of the snapshot to publish

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "default"
$projectName = "MyProject"
$runbookName = "MyRunbook"
$snapshotName = "Snapshot XXXXX"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get project
$project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName}

# Get runbook
$runbook = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/runbooks" -Headers $header).Items | Where-Object {$_.Name -eq $runbookName}

# Get the runbook snapshot
$runbookSnapshot = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/runbookSnapshots" -Headers $header).Items | Where-Object {$_.Name -eq $snapshotName}

# Publish the snapshot
$runbook.PublishedRunbookSnapshotId = $runbookSnapshot.Id
Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/runbooks/$($runbook.Id)" -Body ($runbook | ConvertTo-Json -Depth 10) -Headers $header
```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "path\to\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $projectName = "MyProject" $runbookName = "MyRunbook" $snapshotName = "Snapshot XXXXX" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project $project = $repositoryForSpace.Projects.FindByName($projectName) # Get runbook $runbook = $repositoryForSpace.Runbooks.FindByName($project, $runbookName) # Get the runbook snapshot $runbookSnapshot = $repositoryForSpace.RunbookSnapshots.FindOne({param($r) $r.Name -eq $snapshotName -and $r.ProjectId -eq $project.Id}) # Publish snapshot $runbook.PublishedRunbookSnapshotId = $runbookSnapshot.Id $repositoryForSpace.Runbooks.Modify($runbook) } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; string spaceName = "default"; string projectName = "MyProject"; string runbookName = "MyRunbook"; string snapshotName = "Snapshot XXXXX"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get project var project = repositoryForSpace.Projects.FindByName(projectName); // Get runbook var runbook = repositoryForSpace.Runbooks.FindByName(project, runbookName); // Get runbook snapshot var runbookSnapshot = repositoryForSpace.RunbookSnapshots.FindOne(rs => rs.ProjectId == project.Id && rs.Name == snapshotName); // Publish the snapshot runbook.PublishedRunbookSnapshotId = runbookSnapshot.Id; repositoryForSpace.Runbooks.Modify(runbook); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3 ```python import json import requests octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} def get_octopus_resource(uri): response = requests.get(uri, headers=headers) response.raise_for_status() return json.loads(response.content.decode('utf-8')) def get_by_name(uri, name): resources = get_octopus_resource(uri) return next((x for x in resources if x['Name'] == name), None) def get_item_by_name(uri, name): resources = get_octopus_resource(uri) return next((x for x in resources['Items'] if x['Name'] == name), None) space_name = 'Default' project_name = 'Your project' runbook_name = 'Your runbook' snapshot_name = 'Snapshot XXXXX' space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name) project = get_by_name('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id']), project_name) runbook = get_item_by_name('{0}/{1}/projects/{2}/runbooks'.format(octopus_server_uri, space['Id'], project['Id']), runbook_name) snapshot = get_item_by_name('{0}/{1}/projects/{2}/runbookSnapshots/'.format(octopus_server_uri, space['Id'], project['Id']), snapshot_name) runbook['PublishedRunbookSnapshotId'] = snapshot['Id'] uri = '{0}/{1}/runbooks/{2}'.format(octopus_server_uri, space['Id'], runbook['Id']) response = requests.put(uri, headers=headers, json=runbook) response.raise_for_status() ```
Go ```go package main import ( "encoding/json" "log" "net/url" "net/http" "io/ioutil" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" projectName := "MyProject" runbookName := "MyRunbook" snapshotName := "Snapshot XXXXX" // Get the space object space := GetSpace(apiURL, APIKey, spaceName) // Get client for space client := octopusAuth(apiURL, APIKey, space.ID) // Get project project := GetProject(apiURL, APIKey, space, projectName) // Get runbook runbook := GetRunbook(client, project, runbookName) // Get runbook snapshot runbookSnapshotId := GetRunbookSnapshot(apiURL, APIKey, space, runbook, snapshotName) // Update the runbook runbook.PublishedRunbookSnapshotID = runbookSnapshotId client.Runbooks.Update(runbook) } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetRunbook(client *octopusdeploy.Client, project *octopusdeploy.Project, runbookName string) *octopusdeploy.Runbook { // Get runbook runbooks, err := client.Runbooks.GetAll() if err != nil { log.Println(err) } for i := 0; i < len(runbooks); i++ { if runbooks[i].ProjectID == project.ID && runbooks[i].Name == runbookName { return runbooks[i] } } return nil } func GetRunbookSnapshot(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, runbook *octopusdeploy.Runbook, snapshotName string) 
string { snapshotApi := octopusURL.String() + "/api/" + space.ID + "/runbooks/" + runbook.ID + "/runbooksnapshots" // Create http client httpClient := &http.Client{} // Make post request request, err := http.NewRequest("GET", snapshotApi, nil) request.Header.Set("X-Octopus-ApiKey", APIKey) request.Header.Set("Content-Type", "application/json") // Execute post and get response response, err := httpClient.Do(request) responseData, err := ioutil.ReadAll(response.Body) var f interface{} jsonErr := json.Unmarshal(responseData, &f) if jsonErr != nil { log.Println(err) } runbookSnapshotMap := f.(map[string]interface{}) runbookSnapshotItems := runbookSnapshotMap["Items"].([]interface{}) for _, snapshot := range runbookSnapshotItems { entry := snapshot.(map[string]interface{}) if entry["Name"].(string) == snapshotName { return entry["Id"].(string) } } return "" } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } ```
# Run a published runbook Source: https://octopus.com/docs/octopus-rest-api/examples/runbooks/run-runbook.md This script demonstrates how to programmatically run a published runbook. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space - Name of the project - Name of the runbook - Array of environment names - *Optional* tenant name ## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url/api" $octopusAPIKey = "API-YOUR-KEY" $headers = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "Default" $projectName = "MyProject" $runbookName = "MyRunbook" $environmentNames = @("Development", "Staging") # Get space $spaces = Invoke-WebRequest -Uri "$octopusURL/spaces/all" -Headers $headers -ErrorVariable octoError | ConvertFrom-Json $space = $spaces | Where-Object { $_.Name -eq $spaceName } Write-Host "Using Space named $($space.Name) with id $($space.Id)" # Create space specific url $octopusSpaceUrl = "$octopusURL/$($space.Id)" # Create the runbook run body $createRunbookRunCommandV1 = @{ SpaceId = $space.Id SpaceIdOrName = $spaceName ProjectName = $projectName RunbookName = $runbookName EnvironmentNames = $environmentNames } | ConvertTo-Json # Run runbook Invoke-RestMethod -Method POST -Uri "$octopusSpaceUrl/runbook-runs/create/v1" -Body $createRunbookRunCommandV1 -Headers $headers ```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "c:\octopus.client\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $projectName = "MyProject" $runbookName = "MyRunbook" $environmentNames = @("Test", "Production") # Optional Tenant $tenantName = "" $tenantId = $null $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project $project = $repositoryForSpace.Projects.FindByName($projectName) # Get runbook $runbook = $repositoryForSpace.Runbooks.FindMany({param($r) $r.Name -eq $runbookName}) | Where-Object {$_.ProjectId -eq $project.Id} # Get environments $environments = $repositoryForSpace.Environments.GetAll() | Where-Object {$environmentNames -contains $_.Name} # Optionally get tenant if (![string]::IsNullOrEmpty($tenantName)) { $tenant = $repositoryForSpace.Tenants.FindByName($tenantName) $tenantId = $tenant.Id } # Loop through environments foreach ($environment in $environments) { # Create a new runbook run object $runbookRun = New-Object Octopus.Client.Model.RunbookRunResource $runbookRun.EnvironmentId = $environment.Id $runbookRun.ProjectId = $project.Id $runbookRun.RunbookSnapshotId = $runbook.PublishedRunbookSnapshotId $runbookRun.RunbookId = $runbook.Id $runbookRun.TenantId = $tenantId # Execute runbook $repositoryForSpace.RunbookRuns.Create($runbookRun) } } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; string spaceName = "default"; string projectName = "MyProject"; string runbookName = "MyRunbook"; string[] environmentNames = { "Development", "Production" }; // Optional tenantName string tenantName = ""; string tenantId = null; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get project var project = repositoryForSpace.Projects.FindByName(projectName); // Get runbook var runbook = repositoryForSpace.Runbooks.FindMany(n => n.Name == runbookName && n.ProjectId == project.Id)[0]; // Optional - tenant if (!string.IsNullOrWhiteSpace(tenantName)) { var tenant = repositoryForSpace.Tenants.FindByName(tenantName); tenantId = tenant.Id; } // Get environments foreach (var environmentName in environmentNames) { // Get environment var environment = repositoryForSpace.Environments.FindByName(environmentName); // Create runbook run object Octopus.Client.Model.RunbookRunResource runbookRun = new RunbookRunResource(); runbookRun.EnvironmentId = environment.Id; runbookRun.RunbookId = runbook.Id; runbookRun.ProjectId = project.Id; runbookRun.RunbookSnapshotId = runbook.PublishedRunbookSnapshotId; runbookRun.TenantId = tenantId; // Execute runbook repositoryForSpace.RunbookRuns.Create(runbookRun); } } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3 ```python import json import requests octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} def get_octopus_resource(uri): response = requests.get(uri, headers=headers) response.raise_for_status() return json.loads(response.content.decode('utf-8')) def get_by_name(uri, name): resources = get_octopus_resource(uri) return next((x for x in resources if x['Name'] == name), None) space_name = 'Default' project_name = 'Your Project' runbook_name = 'Your Runbook' environment_names = ['Development', 'Test'] environments = [] # Optional tenant Name tenant_name = '' tenantId = None space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name) project = get_by_name('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id']), project_name) runbook = get_by_name('{0}/{1}/runbooks/all'.format(octopus_server_uri, space['Id']), runbook_name) if tenant_name: tenant = get_by_name('{0}/{1}/tenants/all'.format(octopus_server_uri, space['Id']), tenant_name) tenantId = tenant['Id'] environments = get_octopus_resource( '{0}/{1}/environments/all'.format(octopus_server_uri, space['Id'])) environments = [e['Id'] for e in environments if e['Name'] in environment_names] for environmentId in environments: print('Running runbook {0} in {1}'.format(runbook_name, environmentId)) uri = '{0}/{1}/runbookRuns'.format(octopus_server_uri, space['Id']) runbook_run = { 'RunbookId': runbook['Id'], 'RunbookSnapshotId': runbook['PublishedRunbookSnapshotId'], 'EnvironmentId': environmentId, 'TenantId': tenantId, 'SkipActions': None, 'SpecificMachineIds': None, 'ExcludedMachineIds': None } response = requests.post(uri, headers=headers, json=runbook_run) response.raise_for_status() ```
TypeScript ```typescript import { Client, ClientConfiguration, CreateRunbookRunCommandV1, RunbookRunRepository } from '@octopusdeploy/api-client' const configuration: ClientConfiguration = { userAgentApp: 'CustomTypeScript', instanceURL: 'https://your-octopus-url/', apiKey: 'API-YOUR-KEY' }; const client = await Client.create(configuration); const spaceName = 'Your space name'; const command: CreateRunbookRunCommandV1 = { spaceName: spaceName, ProjectName: 'Your project name', RunbookName: 'Your runbook name', EnvironmentNames: [ 'Dev' ] }; const repository = new RunbookRunRepository(client, spaceName) const runbookRunResponse = await repository.create(command) ```
# Run a runbook with prompted variables Source: https://octopus.com/docs/octopus-rest-api/examples/runbooks/run-runbook-with-prompted-variables.md This script demonstrates how to programmatically run a runbook when the runbook has prompted variables. It will also wait for the runbook run to complete. ## Usage Provide values for the following: - Runbook Base URL - Runbook API Key - Name of the space - Name of the project - Name of the runbook - Name of the environment - Wait for finish - Use guided failure mode - Use a published snapshot only - Cancel in seconds - Prompted variables ### Prompted variable format In the PowerShell script the prompted variables should be provided in the format `Name::Value` with a new line separating them: ``` PromptedVariableName::My Super Awesome Value OtherPromptedVariable::Other Super Awesome Value ``` ## Script
PowerShell (REST API) ```powershell [Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12 $runbookBaseUrl = "" ## The base url, IE https://samples.octopus.app $runbookApiKey = "" ## The API KEY $runbookSpaceName = "Default" ## The name of the space the runbook is located in $runbookProjectName = "Sample Project" ## the name of the project the runbook is located in $runbookRunName = "Sample Name" ## the name of the runbook $runbookEnvironmentName = "" ## The environment name to run the runbook in $runbookTenantName = "" ## Optional - the name of the tenant to run the runbook for $runbookWaitForFinish = $true ## set to either $true or $false $runbookUseGuidedFailure = $false ## set to either $true or $false $runbookUsePublishedSnapshot = $true ## set to either $true or $false $runbookCancelInSeconds = 1800 ## 1800 seconds is 30 minutes $runbookPromptedVariables = "" ## format is "VariableName::VariableValue" function FindMatchingItemByName { param ( [string] $EndPoint, [string] $NameToLookFor, [string] $ItemType, [string] $APIKey, [string] $PullFirstItem ) $fullUrl = "$($EndPoint)?partialName=$NameToLookFor&skip=0&take=10000" Write-Host "Attempting to find $ItemType $NameToLookFor by hitting $fullUrl" $header = New-Object "System.Collections.Generic.Dictionary[[String],[String]]" $header.Add("X-Octopus-ApiKey", $APIKey) $itemList = Invoke-RestMethod $fullUrl -Headers $header $foundItem = $null foreach ($item in $itemList.Items) { if ($item.Name -eq $NameToLookFor -or $PullFirstItem) { Write-Host "$ItemType matching $NameToLookFor found" $foundItem = $item break } } if ($foundItem -eq $null) { Write-Host "$ItemType $NameToLookFor not found, exiting with error" exit 1 } return $foundItem } Write-Host "Runbook Name $runbookRunName" Write-Host "Runbook Base Url: $runbookBaseUrl" Write-Host "Runbook Space Name: $runbookSpaceName" Write-Host "Runbook Project Name: $runbookProjectName" Write-Host 
"Runbook Environment Name: $runbookEnvironmentName" Write-Host "Runbook Tenant Name: $runbookTenantName" Write-Host "Wait for Finish: $runbookWaitForFinish" Write-Host "Use Guided Failure: $runbookUseGuidedFailure" Write-Host "Cancel run in seconds: $runbookCancelInSeconds" Write-Host "Prompted Variables: $runbookPromptedVariables" $header = New-Object "System.Collections.Generic.Dictionary[[String],[String]]" $header.Add("X-Octopus-ApiKey", $runbookApiKey) $spaceToUse = FindMatchingItemByName -EndPoint "$runbookBaseUrl/api/spaces" -NameToLookFor $runbookSpaceName -ItemType "Space" -APIKey $runbookApiKey -PullFirstItem $false $runbookSpaceId = $spaceToUse.Id $environmentToUse = FindMatchingItemByName -EndPoint "$runbookBaseUrl/api/$runbookSpaceId/environments" -NameToLookFor $runbookEnvironmentName -ItemType "Environment" -APIKey $runbookApiKey -PullFirstItem $false $environmentIdToUse = $environmentToUse.Id $tenantIdToUse = $null if ([string]::IsNullOrWhiteSpace($runbookTenantName) -eq $false) { $tenantToUse = FindMatchingItemByName -EndPoint "$runbookBaseUrl/api/$runbookSpaceId/tenants" -NameToLookFor $runbookTenantName -ItemType "Tenant" -APIKey $runbookApiKey -PullFirstItem $false $tenantIdToUse = $tenantToUse.Id } $projectToUse = FindMatchingItemByName -EndPoint "$runbookBaseUrl/api/$runbookSpaceId/projects" -NameToLookFor $runbookProjectName -ItemType "Environment" -APIKey $runbookApiKey -PullFirstItem $false $projectIdToUse = $projectToUse.Id $runbookToRun = FindMatchingItemByName -EndPoint "$runbookBaseUrl/api/$runbookSpaceId/projects/$projectIdToUse/runbooks" -NameToLookFor $runbookRunName -ItemType "Runbook" -APIKey $runbookApiKey -PullFirstItem $false $runbookIdToRun = $runbookToRun.Id $runbookProjectId = $runbookToRun.ProjectId $runbookSnapShotIdToUse = $runbookToRun.PublishedRunbookSnapshotId if ($runbookSnapShotIdToUse -eq $null -and $runbookUsePublishedSnapshot -eq $true) { Write-Host "Use Published Snapshot was set; yet the runbook doesn't have a 
published snapshot. Exiting" Exit 1 } if ($runbookUsePublishedSnapshot -eq $false) { $snapShotToUse = FindMatchingItemByName -EndPoint "$runbookBaseUrl/api/$runbookSpaceId/runbooks/$runbookIdToRun/runbookSnapshots" -NameToLookFor "" -ItemType "Snapshot" -APIKey $runbookApiKey -PullFirstItem $true $runbookSnapShotIdToUse = $snapShotToUse.Id } $projectResponse = Invoke-RestMethod "$runbookBaseUrl/api/$runbookSpaceId/projects/$runbookProjectId" -Headers $header $projectNameForUrl = $projectResponse.Slug $runbookFormValues = @{} if ([string]::IsNullOrWhiteSpace($runbookPromptedVariables) -eq $false) { $runBookPreviewUrl = "$runbookBaseUrl/api/$runbookSpaceId/runbooks/$runbookIdToRun/runbookRuns/preview/$environmentIdToUse" Write-Host "Prompted variables were supplied, hitting the preview endpoint $runbookPreviewUrl" $runBookPreview = Invoke-RestMethod $runbookPreviewUrl -Headers $header $promptedValueList = @(($runbookPromptedVariables -Split "`n").Trim()) Write-Host $promptedValueList.Length foreach($element in $runbookPreview.Form.Elements) { $nameToSearchFor = $element.Control.Name $uniqueName = $element.Name $isRequired = $element.Control.Required $promptedVariableFound = $false Write-Host "Looking for the prompted variable value for $nameToSearchFor" foreach ($promptedValue in $promptedValueList) { $splitValue = $promptedValue -Split "::" Write-Host "Comparing $nameToSearchFor with provided prompted variable $($splitValue[0])" if ($splitValue.Length -gt 1) { if ($nameToSearchFor -eq $splitValue[0]) { Write-Host "Found the prompted variable value $nameToSearchFor" $runbookFormValues[$uniqueName] = $splitValue[1] $promptedVariableFound = $true break } } } if ($promptedVariableFound -eq $false -and $isRequired -eq $true) { Write-Host "Unable to find a value for the required prompted variable $nameToSearchFor, exiting" Exit 1 } } } $runbookBody = @{ RunbookId = $runbookIdToRun; RunbookSnapShotId = $runbookSnapShotIdToUse; FrozenRunbookProcessId = $null; EnvironmentId 
= $environmentIdToUse; TenantId = $tenantIdToUse; SkipActions = @(); QueueTime = $null; QueueTimeExpiry = $null; FormValues = $runbookFormValues; ForcePackageDownload = $false; ForcePackageRedeployment = $true; UseGuidedFailure = $runbookUseGuidedFailure; SpecificMachineIds = @(); ExcludedMachineIds = @() } $runbookBodyAsJson = $runbookBody | ConvertTo-Json $runbookPostUrl = "$runbookBaseUrl/api/$runbookSpaceId/runbookRuns" Write-Host "Kicking off runbook run by posting to $runbookPostUrl" $runBookResponse = Invoke-RestMethod $runbookPostUrl -Method POST -Headers $header -Body $runbookBodyAsJson $runbookServerTaskId = $runBookResponse.TaskId $runbookRunId = $runbookResponse.Id Write-Host "Runbook was successfully invoked, you can access the launched runbook [here]($runbookBaseUrl/app#/$runbookSpaceId/projects/$projectNameForUrl/operations/runbooks/$runbookIdToRun/snapshots/$runbookSnapShotIdToUse/runs/$runbookRunId)" if ($runbookWaitForFinish -eq $true) { Write-Host "The setting to wait for completion was set, waiting until task has finished" $startTime = Get-Date $currentTime = Get-Date $dateDifference = $currentTime - $startTime $taskStatusUrl = "$runbookBaseUrl/api/tasks/$runbookServerTaskId" $numberOfWaits = 0 $runbookSuccessful = $null While ($dateDifference.TotalSeconds -lt $runbookCancelInSeconds) { Write-Host "Waiting 5 seconds to check status" Start-Sleep -Seconds 5 $taskStatusResponse = Invoke-RestMethod $taskStatusUrl -Headers $header $taskStatusResponseState = $taskStatusResponse.State if ($taskStatusResponseState -eq "Success") { Write-Host "The task has finished with a status of Success" $runbookSuccessful = $true break } elseif($taskStatusResponseState -eq "Failed" -or $taskStatusResponseState -eq "Canceled") { Write-Host "The task has finished with a status of $taskStatusResponseState status, stopping the run/deployment" $runbookSuccessful = $false break } Write-Host "The task state is currently $taskStatusResponseState" $startTime = 
$taskStatusResponse.StartTime if ($startTime -eq $null) { Write-Host "The task is still queued, let's wait a bit longer" $startTime = Get-Date } $startTime = [DateTime]$startTime $currentTime = Get-Date $dateDifference = $currentTime - $startTime } if ($null -eq $runbookSuccessful) { Write-Host "The cancel timeout has been reached, cancelling the runbook run" $cancelResponse = Invoke-RestMethod "$runbookBaseUrl/api/tasks/$runbookServerTaskId/cancel" -Headers $header -Method Post Write-Host "Exiting with an error code of 1 because we reached the timeout" Exit 1 } } ```
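The wait-for-finish loop in the script above polls the task status endpoint every few seconds until it reports a terminal state or the cancel timeout elapses. The pattern can be reduced to a small, language-agnostic sketch; here the `fetch_state` callable stands in for a GET of `/api/tasks/{taskId}` (the helper name and parameters are illustrative, not part of the Octopus API):

```python
import time

# States in which an Octopus server task will not progress any further.
TERMINAL_STATES = {"Success", "Failed", "Canceled"}

def wait_for_task(fetch_state, timeout_seconds=1800, poll_interval=5, sleep=time.sleep):
    """Poll fetch_state() until a terminal state or the timeout is reached.

    Returns the final state string, or None if the timeout elapsed first
    (the caller should then cancel the task, as the script above does).
    """
    waited = 0
    while waited < timeout_seconds:
        state = fetch_state()
        if state in TERMINAL_STATES:
            return state
        sleep(poll_interval)
        waited += poll_interval
    return None

# Simulated run: the task is queued, executes, then succeeds.
states = iter(["Queued", "Executing", "Executing", "Success"])
result = wait_for_task(lambda: next(states), timeout_seconds=60, sleep=lambda s: None)
print(result)  # Success
```

Injecting the sleep function keeps the loop testable; a real script would keep the default `time.sleep` and make an authenticated request inside `fetch_state`.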
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; string spaceName = "Default"; string environmentName = "Development"; string runbookName = "Runbook name"; // Leave blank if you'd like to use the published snapshot string runbookSnapshotId = ""; Dictionary<string, string> promptedVariables = new Dictionary<string, string>(); // Enter multiple values using the .Add() method // promptedVariables.Add("prompted-variable1", "variable1-value") // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get runbook var runbook = repositoryForSpace.Runbooks.FindOne(n => n.Name == runbookName); // Get environment var environment = repositoryForSpace.Environments.FindByName(environmentName); // Use published snapshot if no id provided if (string.IsNullOrWhiteSpace(runbookSnapshotId)) { runbookSnapshotId = runbook.PublishedRunbookSnapshotId; } var runbookTemplate = repositoryForSpace.Runbooks.GetRunbookRunTemplate(runbook); var deploymentPromotionTarget = runbookTemplate.PromoteTo.FirstOrDefault(p => p.Name == environmentName); var runbookPreview = repositoryForSpace.Runbooks.GetPreview(deploymentPromotionTarget); var formValues = new Dictionary<string, string>(); // Associate variable values for the runbook foreach (var variableName in promptedVariables.Keys) { var element = runbookPreview.Form.Elements.FirstOrDefault(e => (e.Control as Octopus.Client.Model.Forms.VariableValue).Name == variableName); if (element != null) { var runbookPromptedVariableId = element.Name; var runbookPromptedVariableValue = promptedVariables[variableName];
formValues.Add(runbookPromptedVariableId, runbookPromptedVariableValue); } } // Create runbook run object Octopus.Client.Model.RunbookRunResource runbookRun = new RunbookRunResource(); runbookRun.EnvironmentId = environment.Id; runbookRun.RunbookId = runbook.Id; runbookRun.ProjectId = runbook.ProjectId; runbookRun.RunbookSnapshotId = runbookSnapshotId; runbookRun.FormValues = formValues; // Execute runbook repositoryForSpace.RunbookRuns.Create(runbookRun); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
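The `Name::Value` convention described in the usage section above can be parsed into the form-values dictionary before matching each name against the preview form elements. A minimal Python sketch (the helper name and sample variable names are illustrative):

```python
def parse_prompted_variables(raw: str) -> dict:
    """Parse newline-separated 'Name::Value' pairs into a dict.

    Splits on the first '::' only, so a value may itself contain '::'.
    Blank lines are ignored; lines without '::' are skipped.
    """
    values = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        name, sep, value = line.partition("::")
        if sep:  # only keep well-formed pairs
            values[name] = value
    return values

raw = ("PromptedVariableName::My Super Awesome Value\n"
       "OtherPromptedVariable::Other Super Awesome Value")
print(parse_prompted_variables(raw))
```

Splitting on the first `::` (rather than every occurrence) mirrors what the PowerShell script effectively relies on when it indexes `$splitValue[0]` and `$splitValue[1]`.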
# Add a Space with environments Source: https://octopus.com/docs/octopus-rest-api/examples/spaces/add-a-space-with-environments.md This script is a starter for bootstrapping a new [Space](/docs/administration/spaces) in your Octopus instance. It creates a new space with the provided name, description, and managers. At least one manager team or member must be provided. Then the script will create the [Environments](/docs/infrastructure/environments) provided in the newly created space. ## Usage Provide values for: - Octopus URL - Octopus API Key - Space Name - Environments - A combination of Manager Teams and Manager Team Members ## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "New Space" $description = "Space for the new, top secret project." $managersTeams = @() # an array of team Ids to add to Space Managers $managerTeamMembers = @() # an array of user Ids to add to Space Managers $environments = @('Development', 'Test', 'Production') $body = @{ Name = $spaceName Description = $description SpaceManagersTeams = $managersTeams SpaceManagersTeamMembers = $managerTeamMembers IsDefault = $false TaskQueueStopped = $false } | ConvertTo-Json $response = try { Write-Host "Creating space '$spaceName'" (Invoke-WebRequest $octopusURL/api/spaces -Headers $header -Method Post -Body $body -ErrorVariable octoError) } catch [System.Net.WebException] { $_.Exception.Response } if ($octoError) { Write-Host "An error was encountered trying to create the space: $($octoError.Message)" exit } $space = $response.Content | ConvertFrom-Json foreach ($environment in $environments) { $body = @{ Name = $environment } | ConvertTo-Json Write-Host "Creating environment '$environment'" $response = try { (Invoke-WebRequest "$octopusURL/api/$($space.Id)/environments" -Headers $header -Method Post -Body $body -ErrorVariable octoError) } catch [System.Net.WebException] { $_.Exception.Response } if ($octoError) { Write-Host "An error was encountered trying to create the environment: $($octoError.Message)" exit } } ```
PowerShell (Octopus.Client) ```powershell Add-Type -Path 'path\to\Octopus.Client.dll' $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey) $repository = New-Object Octopus.Client.OctopusRepository($endpoint) $spaceName = "New Space" $description = "Space for the new, top secret project." $managersTeams = @() # an array of team Ids to add to Space Managers $managerTeamMembers = @() # an array of user Ids to add to Space Managers $environments = @('Development', 'Test', 'Production') $space = New-Object Octopus.Client.Model.SpaceResource -Property @{ Name = $spaceName Description = $description SpaceManagersTeams = New-Object Octopus.Client.Model.ReferenceCollection($managersTeams) SpaceManagersTeamMembers = New-Object Octopus.Client.Model.ReferenceCollection($managerTeamMembers) IsDefault = $false TaskQueueStopped = $false }; try { $space = $repository.Spaces.Create($space) } catch { Write-Host $_.Exception.Message exit } $repositoryForSpace = [Octopus.Client.OctopusRepositoryExtensions]::ForSpace($repository, $space) foreach ($environmentName in $environments) { $environment = New-Object Octopus.Client.Model.EnvironmentResource -Property @{ Name = $environmentName } try { $repositoryForSpace.Environments.Create($environment) } catch { Write-Host $_.Exception.Message exit } } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var spaceName = "New Space"; var description = "Space for the new, top secret project."; var managersTeams = new string[] { }; var managersTeamMembers = new string[] { }; var environments = new string[] { "Development", "Test", "Production" }; var space = new SpaceResource { Name = spaceName, Description = description, SpaceManagersTeams = new ReferenceCollection(managersTeams), SpaceManagersTeamMembers = new ReferenceCollection(managersTeamMembers), IsDefault = false, TaskQueueStopped = false }; try { Console.WriteLine($"Creating space '{spaceName}'."); space = repository.Spaces.Create(space); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } var repositoryForSpace = repository.ForSpace(space); foreach(var environmentName in environments) { var environment = new EnvironmentResource { Name = environmentName }; try { Console.WriteLine($"Creating environment '{environmentName}'."); repositoryForSpace.Environments.Create(environment); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } } ```
Python3 ```python import json import requests # Define Octopus server variables octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} # Define working variables space_name = "My New Space" space_description = "Description of My New Space" managers_teams = [] # Either this or manager_team_members must be populated otherwise you'll receive a 400 manager_team_members = [] # Either this or managers_teams must be populated otherwise you'll receive a 400 environments = ['Development', 'Test', 'Production'] # Define space JSON space = { 'Name' : space_name, 'Description' : space_description, 'SpaceManagersTeams' : managers_teams, 'SpaceManagersTeamMembers' : manager_team_members, 'IsDefault' : False, 'TaskQueueStopped' : False } # Create the space uri = '{0}/spaces'.format(octopus_server_uri) response = requests.post(uri, headers=headers, json=space) response.raise_for_status() # Get the response object octopus_space = json.loads(response.content.decode('utf-8')) # Loop through environments for environment in environments: environmentJson = { 'Name': environment } # Format the uri uri = '{0}/{1}/environments'.format(octopus_server_uri, octopus_space['Id']) # Create the environment response = requests.post(uri, headers=headers, json=environmentJson) response.raise_for_status() ```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	spaceId := "" // Update if authentication is in a different space
	newSpaceName := "MyNewSpace"
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceManagersTeamMembers := []string{}                // This or spaceManagerTeams must contain a value
	spaceManagerTeams := []string{"teams-administrators"} // This or spaceManagersTeamMembers must contain a value, "teams-administrators" is the Octopus Administrators team
	environments := []string{"Development", "Test", "Production"}

	octopusAuth(apiURL, APIKey, spaceId) // Though blank, spaceId is required to be passed, blank = Default

	space := CreateSpace(apiURL, APIKey, spaceId, newSpaceName, spaceManagersTeamMembers[:], spaceManagerTeams[:])

	for i := 0; i < len(environments); i++ {
		CreateEnvironment(apiURL, APIKey, space, environments[i])
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func CreateSpace(octopusURL *url.URL, APIKey, spaceId string, spaceName string, spaceManagersTeamMembers []string, spaceManagersTeams []string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, spaceId)

	Space := octopusdeploy.NewSpace(spaceName)

	// Loop through team members array
	for i := 0; i < len(spaceManagersTeamMembers); i++ {
		Space.SpaceManagersTeamMembers = append(Space.SpaceManagersTeamMembers, spaceManagersTeamMembers[i])
	}

	// Loop through teams array
	for i := 0; i < len(spaceManagersTeams); i++ {
		Space.SpaceManagersTeams = append(Space.SpaceManagersTeams, spaceManagersTeams[i])
	}

	fmt.Println("Creating space: " + spaceName)
	Space, err := client.Spaces.Add(Space)
	if err != nil {
		log.Println(err)
	}

	return Space
}

func CreateEnvironment(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, environmentName string) {
	// Create client object
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Create new Environment object
	environment := octopusdeploy.NewEnvironment(environmentName)

	fmt.Println("Creating environment: " + environmentName)

	// Add to space
	environment, err := client.Environments.Add(environment)
	if err != nil {
		log.Println(err)
	}
}
```
Java

```java
import com.octopus.sdk.Repository;
import com.octopus.sdk.domain.Environment;
import com.octopus.sdk.domain.Space;
import com.octopus.sdk.domain.User;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;
import com.octopus.sdk.model.environment.EnvironmentResource;
import com.octopus.sdk.model.space.SpaceOverviewResource;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;
import java.util.Set;

import com.google.common.collect.Sets;

public class AddSpaceWithEnvironments {

  static final String octopusServerUrl = "http://localhost:8065"; // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();
    final Repository repo = new Repository(client);
    final User currentUser = repo.users().getCurrentUser();
    final Set<String> spaceManagers = Sets.newHashSet(currentUser.getProperties().getId());
    final Space createdSpace = repo.spaces().create(new SpaceOverviewResource("TheSpaceName", spaceManagers));

    final Environment testEnv = createdSpace.environments().create(new EnvironmentResource("Test"));
    final Environment prodEnv = createdSpace.environments().create(new EnvironmentResource("Production"));
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    final OctopusClient client = OctopusClientFactory.createClient(connectData);
    return client;
  }
}
```
# Delete a Space

Source: https://octopus.com/docs/octopus-rest-api/examples/spaces/delete-a-space.md

This script deletes a [Space](/docs/administration/spaces) from your Octopus instance.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Space Name

to delete the space with the given name.

:::div{.warning}
**Be very careful when deleting a Space. This operation is destructive and cannot be undone.**
:::

## Script
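All of the scripts below follow the same two-step protocol: a `PUT` that re-submits the space with `TaskQueueStopped` set to `true`, then a `DELETE` against the same resource URL. Here is a minimal sketch of that request sequence on its own, with no live server involved; the URL and the `Spaces-42` id are made-up placeholders:

```python
def build_delete_space_calls(server, space):
    """Return the HTTP calls, in order, needed to delete a space."""
    stopped = dict(space, TaskQueueStopped=True)  # the task queue must be stopped first
    url = '{0}/api/spaces/{1}'.format(server, space['Id'])
    return [
        ('PUT', url, stopped),  # stop the task queue
        ('DELETE', url, None),  # then delete the space
    ]

calls = build_delete_space_calls('https://your-octopus-url',
                                 {'Id': 'Spaces-42', 'Name': 'New Space'})
print([method for method, url, body in calls])  # ['PUT', 'DELETE']
```

Attempting the `DELETE` without first stopping the task queue is rejected by the server, which is why every variant below performs the update first.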
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "New Space"

Write-Host "Getting space '$spaceName'"
$spaces = (Invoke-WebRequest $octopusURL/api/spaces?take=21000 -Headers $header -Method Get -ErrorVariable octoError).Content | ConvertFrom-Json
$space = $spaces.Items | Where-Object Name -eq $spaceName

if ($null -eq $space) {
    Write-Host "Could not find space with name '$spaceName'"
    exit
}

$space.TaskQueueStopped = $true
$body = $space | ConvertTo-Json

Write-Host "Stopping space task queue"
(Invoke-WebRequest $octopusURL/$($space.Links.Self) -Headers $header -Method PUT -Body $body -ErrorVariable octoError) | Out-Null

Write-Host "Deleting space"
(Invoke-WebRequest $octopusURL/$($space.Links.Self) -Headers $header -Method DELETE -ErrorVariable octoError) | Out-Null

Write-Host "Action Complete"
```
PowerShell (Octopus.Client)

```powershell
Add-Type -Path 'path\to\Octopus.Client.dll'

$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)

$spaceName = "New Space"
$space = $repository.Spaces.FindByName($spaceName)

if ($null -eq $space) {
    Write-Host "The space $spaceName does not exist."
    exit
}

try {
    $space.TaskQueueStopped = $true
    $repository.Spaces.Modify($space) | Out-Null
    $repository.Spaces.Delete($space) | Out-Null
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

var OctopusURL = "https://your-octopus-url";
var OctopusAPIKey = "API-YOUR-KEY";
var endpoint = new OctopusServerEndpoint(OctopusURL, OctopusAPIKey);
var repository = new OctopusRepository(endpoint);

var spaceName = "New Space";

try
{
    Console.WriteLine($"Getting space '{spaceName}'.");
    var space = repository.Spaces.FindByName(spaceName);

    if (space == null)
    {
        Console.WriteLine($"Could not find space '{spaceName}'.");
        return;
    }

    Console.WriteLine("Stopping task queue.");
    space.TaskQueueStopped = true;
    repository.Spaces.Modify(space);

    Console.WriteLine("Deleting space");
    repository.Spaces.Delete(space);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

octopus_server_uri = 'https://your-octopus-url/api'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = "Your Space name"

def get_octopus_resource(uri):
    response = requests.get(uri, headers=headers)
    response.raise_for_status()
    return json.loads(response.content.decode('utf-8'))

def get_by_name(uri, name):
    resources = get_octopus_resource(uri)
    return next((x for x in resources if x['Name'] == name), None)

space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name)
space['TaskQueueStopped'] = True  # update task queue to stopped

uri = '{0}/spaces/{1}'.format(octopus_server_uri, space['Id'])
response = requests.put(uri, headers=headers, json=space)
response.raise_for_status()

# Delete space
response = requests.delete(uri, headers=headers)
response.raise_for_status()
```
Go

```go
package main

import (
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "MySpace"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)
	space.TaskQueueStopped = true

	// Create client object
	client := octopusAuth(apiURL, APIKey, "")
	client.Spaces.Update(space)

	deleteErr := client.Spaces.DeleteByID(space.ID)
	if deleteErr != nil {
		log.Println(deleteErr)
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}
```
Java

```java
import com.octopus.sdk.Repository;
import com.octopus.sdk.domain.Space;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;
import java.util.Optional;

public class DeleteSpace {

  static final String octopusServerUrl = "http://localhost:8065"; // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();
    final Repository repo = new Repository(client);
    final Optional<Space> space = repo.spaces().getByName("TheSpaceName");
    if (!space.isPresent()) {
      System.out.println("No space named 'TheSpaceName' exists on server");
      return;
    }

    space.get().getProperties().setTaskQueueStopped(true);
    final Space stoppedSpace = repo.spaces().update(space.get().getProperties());
    repo.spaces().delete(stoppedSpace.getProperties());
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    final OctopusClient client = OctopusClientFactory.createClient(connectData);
    return client;
  }
}
```
# Export step templates

Source: https://octopus.com/docs/octopus-rest-api/examples/step-templates/export-step-templates.md

This script demonstrates how to export all step templates in a Space to files.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the Space to use

## Script
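The `actiontemplates` endpoint is paged; the Python and Go scripts below collect every template by advancing `skip` until a page comes back smaller than `ItemsPerPage`. A minimal, self-contained sketch of that loop, using a stand-in `fetch` function and fake data in place of a real HTTP call:

```python
def collect_all_pages(fetch):
    """Accumulate 'Items' across pages, advancing 'skip' until a short page arrives."""
    items, skip = [], 0
    while True:
        page = fetch(skip)  # stand-in for GET {endpoint}?skip={skip}
        items += page['Items']
        if len(page['Items']) < page['ItemsPerPage']:
            return items
        skip += page['ItemsPerPage']

# Fake a 70-item resource served 30 items per page
data = ['template-{0}'.format(i) for i in range(70)]
fake_fetch = lambda skip: {'Items': data[skip:skip + 30], 'ItemsPerPage': 30}
print(len(collect_all_pages(fake_fetch)))  # 70
```

The PowerShell variant sidesteps the loop by requesting a large page up front (`?take=250`); either approach works as long as the take value exceeds the number of templates.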
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object { $_.Name -eq $spaceName }

# Get step templates
$templates = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/actiontemplates?take=250" -Headers $header)

mkdir "$PSScriptRoot/step-templates"

$templates.Items | ForEach-Object {
    $template = $_
    $name = $template.Name.Replace(" ", "-")
    Write-Host "Writing $PSScriptRoot/step-templates/$name.json"
    ($template | ConvertTo-Json -Depth 100) | Out-File -FilePath "$PSScriptRoot/step-templates/$name.json"
}
```
PowerShell (Octopus.Client)

```powershell
# Load assembly
Add-Type -Path 'path:\to\Octopus.Client.dll'

$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "Default"
$exportPath = "path:\to\export-to"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
$client = New-Object Octopus.Client.OctopusClient($endpoint)

# Get space
$space = $repository.Spaces.FindByName($spaceName)
$repositoryForSpace = $client.ForSpace($space)

# Check to make sure folder exists
if ((Test-Path -Path $exportPath) -eq $false) {
    # Create folder
    New-Item -Path $exportPath -ItemType Directory
}

# Get the templates
$templates = $repositoryForSpace.ActionTemplates.GetAll()

# Loop through templates
foreach ($template in $templates) {
    $fileName = $template.Name.Replace(" ", "-")
    Write-Host "Writing $exportPath/$fileName.json"
    $template | ConvertTo-Json | Out-File -FilePath "$exportPath/$fileName.json"
}
```
Python3

```python
import json
import os
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if hasattr(results, 'keys') and 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

space_name = "Default"
export_path = "path:\\to\\templates"

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get templates
uri = '{0}/api/{1}/actiontemplates'.format(octopus_server_uri, space['Id'])
action_templates = get_octopus_resource(uri, headers)

# Check to see if folder exists
if not os.path.isdir(export_path):
    os.makedirs(export_path)

# Loop through templates
for action_template in action_templates:
    fileName = str.replace(action_template['Name'], ' ', '-')
    print ('Writing {0}\\{1}.json'.format(export_path, fileName))
    with open("{0}\\{1}.json".format(export_path, fileName), "w") as outfile:
        json.dump(action_template, outfile)
```
Go

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/url"
	"os"
	"strings"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	exportPath := "path:\\to\\templates"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get action templates
	actionTemplates, err := client.ActionTemplates.GetAll()
	if err != nil {
		log.Println(err)
	}

	// Check to see if folder exists
	if !FolderExists(exportPath) {
		err := os.Mkdir(exportPath, 0755)
		if err != nil {
			log.Println(err)
		}
	}

	// Loop through action templates
	for i := 0; i < len(actionTemplates); i++ {
		fileName := strings.Replace(actionTemplates[i].Name, " ", "-", -1)
		jsonBody, err := json.Marshal(actionTemplates[i])
		if err != nil {
			log.Println(err)
		}
		byteReader := bytes.NewReader(jsonBody)

		// Create file
		out, err := os.Create(exportPath + "\\" + fileName + ".json")
		if err != nil {
			log.Println(err)
		}
		defer out.Close()

		// Write to file
		fmt.Println("Writing " + exportPath + "\\" + fileName + ".json")
		_, err = io.Copy(out, byteReader)
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}

func FolderExists(folderPath string) bool {
	info, err := os.Stat(folderPath)

	if os.IsNotExist(err) {
		return false
	}

	return info.IsDir()
}
```
# Create a tag set

Source: https://octopus.com/docs/octopus-rest-api/examples/tagsets/create-tagset.md

This script demonstrates how to programmatically create a tag set in Octopus Deploy.

:::div{.hint}
From Octopus Cloud version **2025.4.3897**, `Type` and `Scopes` parameters can be included to configure the type and scoping of a tag set when created via the API. The API will ignore the `Type` and `Scopes` parameters if the `Extended Tag Sets` feature toggle is disabled.
:::

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the tag set to create
- Type for the tag set (MultiSelect, SingleSelect, or FreeText)
- Scopes for the tag set (Tenant, Environment, Project, or any combination)
- Optional description for the tag set
- Optional tags to add to the new tag set

## Script
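Before reading the full scripts below, it helps to see the shape of the body they POST to `/api/{space}/tagsets`. This sketch assembles that payload, including the newer `Type` and `Scopes` fields; the helper function and its argument values are illustrative, not part of the Octopus API itself:

```python
def build_tagset_payload(name, description, tags, tagset_type='MultiSelect', scopes=None):
    """Assemble the JSON body POSTed to /api/{space}/tagsets."""
    return {
        'Name': name,
        'Description': description,
        'Type': tagset_type,            # MultiSelect, SingleSelect, or FreeText
        'Scopes': scopes or ['Tenant'], # Tenant, Environment, Project, or a combination
        'Tags': [
            {'Id': None, 'Name': tag_name, 'Color': color,
             'Description': '', 'CanonicalTagName': None}
            for tag_name, color in tags.items()
        ],
    }

payload = build_tagset_payload(
    'Upgrade Ring',
    'Describes which upgrade ring the tenant belongs to',
    {'Early Adopter': '#ECAD3F', 'Stable': '#36A766'})
```

On servers where the `Extended Tag Sets` feature toggle is disabled, the `Type` and `Scopes` keys are simply ignored, so the same payload works either way.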
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"
$tagsetName = "Upgrade Ring"
$tagsetDescription = "Describes which upgrade ring the tenant belongs to"
$tagsetType = "MultiSelect" # Options: MultiSelect, SingleSelect, FreeText
$tagsetScopes = @("Tenant") # Options: Tenant, Environment, Project (can specify multiple)

# Optional Tags to add in the format "Tag name", "Tag Color"
$optionalTags = @{}
$optionalTags.Add("Early Adopter", "#ECAD3F")
$optionalTags.Add("Stable", "#36A766")

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# See if tagset already exists
$tagsetResults = (Invoke-RestMethod -Method Get "$octopusURL/api/$($space.Id)/tagsets?partialName=$tagsetName" -Headers $header)

if( $tagsetResults.TotalResults -gt 0) {
    throw "Existing tagset results found matching '$($tagsetName)'!"
}

$tags = @()

if($optionalTags.Count -gt 0) {
    foreach ($tagName in $optionalTags.Keys) {
        $tag = @{
            Id = $null
            Name = $tagName
            Color = $optionalTags.Item($tagName)
            Description = ""
            CanonicalTagName = $null
        }
        $tags += $tag
    }
}

# Create tagset json payload
$jsonPayload = @{
    Name = $tagsetName
    Description = $tagsetDescription
    Type = $tagsetType
    Scopes = $tagsetScopes
    Tags = $tags
}

# Create tagset
Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/tagsets" -Body ($jsonPayload | ConvertTo-Json -Depth 10) -Headers $header
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "Default"
$tagsetName = "Upgrade Ring"
$tagsetDescription = "Describes which upgrade ring the tenant belongs to"
$tagsetType = "MultiSelect" # Options: MultiSelect, SingleSelect, FreeText
$tagsetScopes = @("Tenant") # Options: Tenant, Environment, Project (can specify multiple)

# Optional Tags to add in the format "Tag name", "Tag Color"
$optionalTags = @{}
$optionalTags.Add("Early Adopter", "#ECAD3F")
$optionalTags.Add("Stable", "#36A766")

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try {
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Create or modify tagset
    $tagsetEditor = $repositoryForSpace.TagSets.CreateOrModify($tagsetName, $tagsetDescription)

    # Add optional tags
    if($optionalTags.Count -gt 0) {
        foreach ($tagName in $optionalTags.Keys) {
            $tagsetEditor.AddOrUpdateTag($tagName, "", $optionalTags.Item($tagName))
        }
    }

    $tagsetEditor.Save()
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "Default";
var tagsetName = "Upgrade Ring";
var tagsetDescription = "Describes which upgrade ring the tenant belongs to";
var tagsetType = "MultiSelect"; // Options: MultiSelect, SingleSelect, FreeText
var tagsetScopes = new[] { "Tenant" }; // Options: Tenant, Environment, Project (can specify multiple)

// Optional Tags to add in the format "Tag name", "Tag Color"
var optionalTags = new Dictionary<string, string>();
optionalTags.Add("Early Adopter", "#ECAD3F");
optionalTags.Add("Stable", "#36A766");

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Create or modify tagset
    var tagsetEditor = repositoryForSpace.TagSets.CreateOrModify(tagsetName, tagsetDescription);

    // Add optional tags
    foreach (var tag in optionalTags)
    {
        tagsetEditor.AddOrUpdateTag(tag.Key, "", tag.Value);
    }

    tagsetEditor.Save();
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

space_name = "Default"
tagset_name = "MyTagset"
tagset_description = "My description"
tagset_type = "MultiSelect" # Options: MultiSelect, SingleSelect, FreeText
tagset_scopes = ["Tenant"] # Options: Tenant, Environment, Project (can specify multiple)

tags = []
tag = {
    'Id': None,
    'Name': 'Tag1',
    'Color': '#ECAD3F',
    'Description': 'Tag Description',
    'CanonicalTagName': None
}
tags.append(tag)

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Check to see if tagset exists
uri = '{0}/api/{1}/tagsets?partialName={2}'.format(octopus_server_uri, space['Id'], tagset_name)
tagsets = get_octopus_resource(uri, headers)

if len(tagsets) == 0:
    tagset = {
        'Name': tagset_name,
        'Description': tagset_description,
        'Type': tagset_type,
        'Scopes': tagset_scopes,
        'Tags': tags
    }

    uri = '{0}/api/{1}/tagsets'.format(octopus_server_uri, space['Id'])
    response = requests.post(uri, headers=headers, json=tagset)
    response.raise_for_status()
else:
    print ('{0} already exists!'.format(tagset_name))
```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	tagsetName := "MyTagset"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get tagset
	tagset := GetTagSet(apiURL, APIKey, space, tagsetName, 0)

	if tagset == nil {
		// Create new tagset
		tagset = octopusdeploy.NewTagSet(tagsetName)

		// Create new tag
		tag := octopusdeploy.Tag{
			Name:        "MyTag",
			Color:       "#ECAD3F",
			Description: "My tag description",
		}

		// Add to set
		tagset.Tags = append(tagset.Tags, tag)

		// Add to server
		client.TagSets.Add(tagset)
	} else {
		fmt.Println("Tagset " + tagsetName + " already exists!")
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}

func GetTagSet(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, tagsetName string, skip int) *octopusdeploy.TagSet {
	// Get client for space
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Get tagsets
	tagsetQuery := octopusdeploy.TagSetsQuery{
		PartialName: tagsetName,
	}

	tagsets, err := client.TagSets.Get(tagsetQuery)
	if err != nil {
		log.Println(err)
	}

	if len(tagsets.Items) == tagsets.ItemsPerPage {
		// call again
		tagset := GetTagSet(octopusURL, APIKey, space, tagsetName, (skip + len(tagsets.Items)))

		if tagset != nil {
			return tagset
		}
	} else {
		// Loop through returned items
		for _, tagset := range tagsets.Items {
			if tagset.Name == tagsetName {
				return tagset
			}
		}
	}

	return nil
}
```
Java

```java
import com.octopus.sdk.Repository;
import com.octopus.sdk.domain.Space;
import com.octopus.sdk.domain.TagSet;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;
import com.octopus.sdk.model.tag.TagResource;
import com.octopus.sdk.model.tagset.TagSetResource;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;
import java.util.Optional;

public class CreateTagSet {

  static final String octopusServerUrl = "http://localhost:8065"; // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();
    final Repository repo = new Repository(client);
    final Optional<Space> space = repo.spaces().getByName("TheSpaceName");
    if (!space.isPresent()) {
      System.out.println("No space named 'TheSpaceName' exists on server");
      return;
    }

    final TagSetResource newTagSet = new TagSetResource("TheTagSet");
    newTagSet.addTagsItem(new TagResource("FirstTag", "#ECAD3F"));
    final TagSet tagSet = space.get().tagSets().create(newTagSet);
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData = new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    final OctopusClient client = OctopusClientFactory.createClient(connectData);
    return client;
  }
}
```
# Reprioritize Tasks

Source: https://octopus.com/docs/octopus-rest-api/examples/tasks/reprioritize-tasks.md

:::div{.hint}
Support for [prioritizing tasks](/docs/tasks/prioritize-tasks) directly in Octopus is available from **2023.4**.
:::

This script can be used to move critical deployments from the bottom of the queue to the top of the queue.

How it works:

1. Look at all the pending tasks in the queue. All in-process items are left as is.
2. If there are any deployments or runbook runs, it will check to see if they match the specified criteria.
3. If any matching runbook runs or deployment tasks are found, it will loop through the queue and cancel all the items before them.
4. If the script cancels any runbook runs or deployments, it will resubmit them using the same values.

For example, you have this in your pending queue:

1. Deployment to Dev
2. Runbook run on Maintenance
3. Retention policy run
4. Deployment to Production

It will cancel the deployment to `Dev`, the runbook run on `Maintenance`, and the retention policy run. It will then resubmit the deployment to `Dev` and the runbook run on `Maintenance` using the same parameters. The user who appears in the audit log will be the one attached to the API key.

## Usage

Provide values for:

- Octopus URL (required)
- Octopus API Key (required)

The script looks for tasks one of two ways:

1. Task Id List - comma separated list of task ids to move to the top of the queue
2. Matching based on criteria, see below

Matching based on criteria:

- Space List - comma separated list of spaces to look for (optional)
- Environment List - comma separated list of environments to look for (optional)
    - Options:
        - `EnvironmentName` - e.g. `Production` - looks for the `Production` environment in any space from the space list.
        - `EnvironmentName::SpaceName` - e.g. `Production::Default` - looks for the `Production` environment in the `Default` space only.
- Project List - comma separated list of projects to look for (optional)
    - Options:
        - `ProjectName` - e.g. `Hello World` - looks for the `Hello World` project in any space from the space list.
        - `ProjectName::SpaceName` - e.g. `Hello World::Default` - looks for the `Hello World` project in the `Default` space only.
- Tenant List - comma separated list of tenants to look for (optional)
    - Options:
        - `TenantName` - e.g. `My Tenant` - looks for the `My Tenant` tenant in any space from the space list.
        - `TenantName::SpaceName` - e.g. `My Tenant::Default` - looks for the `My Tenant` tenant in the `Default` space only.
- Match Type - how the match will happen
    - Options:
        - `Or` - will look for a runbook run or deployment that matches any of the filters - e.g. `Production` OR `Hello World` OR `My Tenant`
        - `And` - will look for a runbook run or deployment that matches all filters - e.g. `Production` AND `Hello World` AND `My Tenant`
        - If a filter isn't supplied, it is excluded from the check
- Task Type - what task type to look for
    - Options:
        - `Deploy` - looks for deployments only
        - `RunbookRun` - looks for runbook runs only
        - `Both` - looks for both deployments and runbook runs

You must supply at least one task id, or at least one filter for environments, projects, or tenants.

## Script
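The `Name::SpaceName` filter syntax above is just a two-part split on `::`, where the space part is optional. A hypothetical helper (illustrative only; the actual script below is PowerShell and does this inline) showing how one filter entry resolves:

```python
def parse_filter_entry(entry):
    """Split 'Name' or 'Name::SpaceName' into (name, space_or_None)."""
    name, sep, space = entry.partition('::')
    # If '::' is absent, partition returns an empty separator, so space is None
    return (name.strip(), space.strip() if sep else None)

print(parse_filter_entry('Production'))           # ('Production', None)
print(parse_filter_entry('Production::Default'))  # ('Production', 'Default')
```

An entry with no space part matches in any space from the space list; an entry with a space part matches in that space only.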
PowerShell (REST API)

```powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

$octopusUrl = "https://your-octopus-url" ## Octopus URL to look at
$octopusApiKey = "API-YOUR-KEY" ## API key of user who has permissions to view all spaces, cancel tasks, and resubmit runbook runs and deployments
$spaceList = "Default" ## Comma separated list of spaces to monitor
$environmentList = "Production" ## Comma separated list of environments to look for (can be blank)
$projectList = "" ## Comma separated list of projects to look for (can be blank)
$tenantList = "" ## Comma separated list of tenants to look for (can be blank)
$matchType = "Or" ## How you want to match, OR = Task matches Environment OR Project OR Tenant; AND = task matches Environment AND Project AND Tenant (when supplied)
$taskType = "Both" ## Look for runbook runs, deployments, or both. Options are Both, Deploy, RunbookRun
$taskIdList = "" ## Comma separated list of task ids to move to the top of the queue

$cachedResults = @{}

function Write-OctopusVerbose {
    param($message)
    Write-Host $message
}

function Write-OctopusInformation {
    param($message)
    Write-Host $message
}

function Write-OctopusSuccess {
    param($message)
    Write-Host $message
}

function Write-OctopusWarning {
    param($message)
    Write-Warning "$message"
}

function Write-OctopusCritical {
    param ($message)
    Write-Error "$message"
}

function Invoke-OctopusApi {
    param (
        $octopusUrl,
        $endPoint,
        $spaceId,
        $apiKey,
        $method,
        $item,
        $ignoreCache
    )

    $octopusUrlToUse = $OctopusUrl
    if ($OctopusUrl.EndsWith("/")) {
        $octopusUrlToUse = $OctopusUrl.Substring(0, $OctopusUrl.Length - 1)
    }

    if ([string]::IsNullOrWhiteSpace($SpaceId)) {
        $url = "$octopusUrlToUse/api/$EndPoint"
    }
    else {
        $url = "$octopusUrlToUse/api/$spaceId/$EndPoint"
    }

    try {
        if ($null -ne $item) {
            $body = $item | ConvertTo-Json -Depth 10
            Write-OctopusVerbose $body

            Write-OctopusInformation "Invoking $method $url"
            return Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -Body $body -ContentType 'application/json; charset=utf-8'
        }

        if (($null -eq $ignoreCache -or $ignoreCache -eq $false) -and $method.ToUpper().Trim() -eq "GET") {
            Write-OctopusVerbose "Checking to see if $url is already in the cache"
            if ($cachedResults.ContainsKey($url) -eq $true) {
                Write-OctopusVerbose "$url is already in the cache, returning the result"
                return $cachedResults[$url]
            }
        }
        else {
            Write-OctopusVerbose "Ignoring cache."
        }

        Write-OctopusVerbose "No data to post or put, calling bog standard Invoke-RestMethod for $url"
        $result = Invoke-RestMethod -Method $method -Uri $url -Headers @{"X-Octopus-ApiKey" = "$ApiKey" } -ContentType 'application/json; charset=utf-8'

        if ($cachedResults.ContainsKey($url) -eq $true) {
            $cachedResults.Remove($url)
        }

        Write-OctopusVerbose "Adding $url to the cache"
        $cachedResults.add($url, $result)

        return $result
    }
    catch {
        if ($null -ne $_.Exception.Response) {
            if ($_.Exception.Response.StatusCode -eq 401) {
                Write-OctopusCritical "Unauthorized error returned from $url, please verify API key and try again"
            }
            elseif ($_.Exception.Response.statusCode -eq 403) {
                Write-OctopusCritical "Forbidden error returned from $url, please verify API key and try again"
            }
            else {
                Write-OctopusVerbose -Message "Error calling $url $($_.Exception.Message) StatusCode: $($_.Exception.Response.StatusCode )"
            }
        }
        else {
            Write-OctopusVerbose $_.Exception
        }
    }

    Throw "There was an error calling the Octopus API please check the log for more details"
}

function Get-FilteredOctopusItem {
    param(
        $itemList,
        $itemName
    )

    if ($itemList.Items.Count -eq 0) {
        Write-OctopusCritical "Unable to find $itemName. Exiting with an exit code of 1."
        return $null
    }

    $item = $itemList.Items | Where-Object { $_.Name -eq $itemName }

    if ($null -eq $item) {
        Write-OctopusCritical "Unable to find $itemName. Exiting with an exit code of 1."
        return $null
    }

    return $item
}

function Get-OctopusItemByName {
    param(
        $itemName,
        $itemType,
        $endpoint,
        $spaceId,
        $octopusUrl,
        $octopusApiKey
    )

    if ([string]::IsNullOrWhiteSpace($itemName)) {
        return $null
    }

    Write-OctopusInformation "Attempting to find $itemType with the name of $itemName"

    $itemList = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint "$($endPoint)?partialName=$([uri]::EscapeDataString($itemName))&skip=0&take=100" -spaceId $spaceId -apiKey $octopusApiKey -method "GET"
    $item = Get-FilteredOctopusItem -itemList $itemList -itemName $itemName

    if ($null -eq $item) {
        Write-OctopusInformation "Unable to find $itemType $itemName"
        return $null
    }

    Write-OctopusInformation "Successfully found $itemType $itemName with an id of $($item.Id)"

    return $item
}

function Get-SplitItemIntoArray {
    param (
        $itemToSplit
    )

    if ($itemToSplit -like "*`n*") {
        return @(($itemToSplit -Split "`n").Trim())
    }

    if ($itemToSplit -like "*,*") {
        return @(($itemToSplit -Split ",").Trim())
    }

    return @($itemToSplit)
}

function Get-OctopusSpaceList {
    param(
        $spaceList,
        $octopusUrl,
        $octopusApiKey
    )

    if ([string]::IsNullOrWhiteSpace($spaceList)) {
        $rawOctopusSpaceList = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint "spaces?skip=0&take=10000" -spaceId $null -apiKey $octopusApiKey -method "GET"
        return $rawOctopusSpaceList.Items
    }

    $spaceListSplit = @(Get-SplitItemIntoArray -itemToSplit $spaceList)
    $returnList = @()
    foreach ($spaceName in $spaceListSplit) {
        if ([string]::IsNullOrWhiteSpace($spaceName) -eq $false) {
            $octopusSpace = Get-OctopusItemByName -itemName $spaceName -itemType "Space" -endpoint "spaces" -spaceId $null -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey

            if ($null -ne $octopusSpace) {
                $returnList += $octopusSpace
            }
        }
    }

    return $returnList
}

function Get-OctopusItemList {
    param(
        $octopusSpaceList,
        $itemList,
        $itemType,
        $endpoint,
        $octopusApiKey,
        $octopusUrl
    )

    if ([string]::IsNullOrWhiteSpace($itemList)) {
        Write-Host "The list for $itemType was empty"
        return @()
    }

    $itemListSplit = @(Get-SplitItemIntoArray -itemToSplit $itemList)
    $returnList = @()
    foreach ($itemName in $itemListSplit) {
        $splitItem = $itemName -split "::"
        if ($splitItem.Count -gt 1 -and [string]::IsNullOrWhiteSpace($splitItem[1]) -eq $false) {
            Write-OctopusInformation "The item $itemName included a space name, only pulling back the information for that space"
            $spaceId = $octopusSpaceList | Where-Object { $_.Name.ToLower().Trim() -eq $splitItem[1].ToLower().Trim() }

            if ($null -eq $spaceId) {
                Write-OctopusInformation "The space name $($splitItem[1]) was not included in the space filter. Skipping this option."
                continue
            }

            $octopusItem = Get-OctopusItemByName -itemName $splitItem[0] -itemType $itemType -endpoint $endpoint -spaceId $spaceId -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey
            if ($null -ne $octopusItem) {
                $returnList += $octopusItem
            }

            continue
        }

        foreach ($space in $octopusSpaceList) {
            $octopusItem = Get-OctopusItemByName -itemName $itemName -itemType $itemType -endpoint $endpoint -spaceId $space.Id -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey
            if ($null -ne $octopusItem) {
                $returnList += $octopusItem
            }
        }
    }

    return $returnList
}

function Get-QueuedOctopusTasks {
    param (
        $octopusApiKey,
        $octopusUrl
    )

    $queuedTasks = Invoke-OctopusApi -octopusUrl $octopusUrl -endPoint "Tasks?states=Queued&skip=0&take=1000" -spaceId $null -apiKey $octopusApiKey -method "GET" -ignoreCache $true
    $returnList = @()
    $currentTime = $(Get-Date).ToUniversalTime()

    Write-OctopusInformation "Looping through the found items in reverse order because the Queue is FIFO but the return object is ordered by date DESC"
    for ($i = $queuedTasks.Items.Count - 1; $i -ge 0; $i--) {
        $task = $queuedTasks.Items[$i]

        if ($null -ne $task.QueueTime) {
            $compareTime = [DateTime]::Parse($task.QueueTime)
            $compareTime = $compareTime.ToUniversalTime()

            Write-OctopusVerbose "Checking to see if $compareTime is ahead of the $currentTime"
            if ($compareTime -gt $currentTime) {
                Write-OctopusInformation "The queued task $($task.Id) has a queue time $($task.QueueTime) in the future. That means this is a scheduled deployment. Skipping this task."
                continue
            }
        }

        if ($null -ne $task.StartTime) {
            Write-OctopusInformation "The queued task $($task.Id) has a start time, meaning it was picked up, work was done, then it was added back to the queue. Skipping."
            continue
        }

        if ($true -eq $task.HasPendingInterruptions) {
            Write-OctopusInformation "The task $($task.Id) has pending interruptions, this means the deployment has started and is awaiting someone to respond. Skipping this task."
            continue
        }

        $returnList += $task
    }

    return $returnList
}

function Test-OctopusListHasId {
    param (
        $octopusList,
        $octopusId
    )

    $findItem = $octopusList | Where-Object { $_.Id -eq $octopusId }
    if ($null -eq $findItem) {
        return $false
    }

    return $true
}

function Get-RunbookRunDetailsFromTask {
    param (
        $runbookTask,
        $octopusUrl,
        $octopusApiKey
    )

    return Invoke-OctopusApi -endPoint "runbookRuns/$($runbookTask.Arguments.RunbookRunId)" -octopusUrl $octopusUrl -spaceId $runbookTask.SpaceId -apiKey $octopusApiKey -method "GET"
}

function Get-DeploymentDetailsFromTask {
    param (
        $deploymentTask,
        $octopusUrl,
        $octopusApiKey
    )

    return Invoke-OctopusApi -endPoint "deployments/$($deploymentTask.Arguments.DeploymentId)" -octopusUrl $octopusUrl -spaceId $deploymentTask.SpaceId -apiKey $octopusApiKey -method "GET"
}

Write-OctopusInformation "Current Task Id: $currentTaskId"
Write-OctopusInformation "Space List: $spaceList"
Write-OctopusInformation "Environment List: $environmentList"
Write-OctopusInformation "Project List: $projectList"
Write-OctopusInformation "Tenant List: $tenantList"
Write-OctopusInformation "Octopus URL: $octopusUrl"
Write-OctopusInformation "Match Type: $matchType"
Write-OctopusInformation "Task Id List: $taskIdList"

$queuedTasks = @(Get-QueuedOctopusTasks -octopusApiKey $octopusApiKey -octopusUrl $octopusUrl)

if ($queuedTasks.Length -eq 0) {
    Write-OctopusSuccess "No queued tasks found that can block a deployment. Exiting."
    exit 1
}

$octopusInformation = @{
    TaskIdList = @(Get-SplitItemIntoArray -itemToSplit $taskIdList)
}

if ([string]::IsNullOrWhiteSpace($taskIdList)) {
    $octopusInformation.SpaceList = @(Get-OctopusSpaceList -spaceList $spaceList -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey)

    $octopusInformation.EnvironmentList = @(Get-OctopusItemList -octopusSpaceList $octopusInformation.SpaceList -itemList $environmentList -itemType "Environment" -endpoint "environments" -octopusApiKey $octopusApiKey -octopusUrl $octopusUrl)
    $octopusInformation.HasEnvironmentFilter = $octopusInformation.EnvironmentList.Count -gt 0

    $octopusInformation.ProjectList = @(Get-OctopusItemList -octopusSpaceList $octopusInformation.SpaceList -itemList $projectList -itemType "Project" -endpoint "projects" -octopusApiKey $octopusApiKey -octopusUrl $octopusUrl)
    $octopusInformation.HasProjectFilter = $octopusInformation.ProjectList.Count -gt 0

    $octopusInformation.TenantList = @(Get-OctopusItemList -octopusSpaceList $octopusInformation.SpaceList -itemList $tenantList -itemType "Tenant" -endpoint "tenants" -octopusApiKey $octopusApiKey -octopusUrl $octopusUrl)
    $octopusInformation.HasTenantFilter = $octopusInformation.TenantList.Count -gt 0

    if ($octopusInformation.EnvironmentList.Count -eq 0 -and $octopusInformation.ProjectList.Count -eq 0 -and $octopusInformation.TenantList.Count -eq 0) {
        Write-OctopusCritical "No environments OR projects OR tenants provided to filter on. You must provide at least one environment OR project OR tenant."
        exit 1
    }

    Write-OctopusSuccess "Going to look for any $taskType in the spaces ($(($octopusInformation.SpaceList | Select-Object -ExpandProperty Id) -join ", ")) matching "
    Write-OctopusSuccess "Environments ($(($octopusInformation.EnvironmentList | Select-Object -ExpandProperty Id) -join " OR ")) $matchType"
    Write-OctopusSuccess "Projects ($(($octopusInformation.ProjectList | Select-Object -ExpandProperty Id) -join " OR ")) $matchType"
    Write-OctopusSuccess "Tenants ($(($octopusInformation.TenantList | Select-Object -ExpandProperty Id) -join " OR "))"
}
else {
    Write-OctopusSuccess "Going to look for the tasks ($($octopusInformation.TaskIdList -join ", "))"
}

$matchingTasks = @()

Write-OctopusInformation "Attempting to find any matching tasks based on the filtering criteria."
foreach ($task in $queuedTasks) {
    if ($octopusInformation.TaskIdList -contains $task.Id) {
        Write-OctopusInformation "The task $($task.Id) was found in the list of task ids. Adding to list."
        $matchingTasks += $task
        continue
    }

    if ($task.Name -ne "Deploy" -and $task.Name -ne "RunbookRun") {
        Write-OctopusInformation "The task is not a deployment or a runbook run. It is $($task.Description). Moving onto next task."
        continue
    }

    if ($taskType -ne "Both" -and $taskType -ne $task.Name) {
        Write-OctopusInformation "You have selected to filter on $taskType only and this task is a $($task.Name). Moving onto the next task."
        continue
    }

    if ((Test-OctopusListHasId -octopusList $octopusInformation.SpaceList -octopusId $task.SpaceId) -eq $false) {
        Write-OctopusInformation "The task is not for any spaces specified. Moving onto the next task."
        continue
    }

    if ($task.Name -eq "RunbookRun") {
        $itemDetails = Get-RunbookRunDetailsFromTask -runbookTask $task -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey
    }
    else {
        $itemDetails = Get-DeploymentDetailsFromTask -deploymentTask $task -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey
    }

    $matchesEnvironmentFilter = $octopusInformation.HasEnvironmentFilter -eq $true -and (Test-OctopusListHasId -octopusList $octopusInformation.EnvironmentList -octopusId $itemDetails.EnvironmentId)
    Write-OctopusInformation "$($task.Name) $($itemDetails.Id) Matches Environment Filter $matchesEnvironmentFilter"

    $matchesProjectFilter = $octopusInformation.HasProjectFilter -eq $true -and (Test-OctopusListHasId -octopusList $octopusInformation.ProjectList -octopusId $itemDetails.ProjectId)
    Write-OctopusInformation "$($task.Name) $($itemDetails.Id) Matches Project Filter $matchesProjectFilter"

    $matchesTenantFilter = $octopusInformation.HasTenantFilter -eq $true -and $null -ne $itemDetails.TenantId -and (Test-OctopusListHasId -octopusList $octopusInformation.TenantList -octopusId $itemDetails.TenantId)
    Write-OctopusInformation "$($task.Name) $($itemDetails.Id) Matches Tenant Filter $matchesTenantFilter"

    if ($matchType -eq "Or" -and ($matchesTenantFilter -eq $true -or $matchesProjectFilter -eq $true -or $matchesEnvironmentFilter -eq $true)) {
        Write-OctopusInformation "The match type was OR and one of the filters matched, adding this task to the matching list"
        $matchingTasks += $task
        continue
    }

    Write-OctopusInformation "The match type is AND, checking to see if the task matches all the filters"
    if ($octopusInformation.HasEnvironmentFilter -eq $true -and $matchesEnvironmentFilter -eq $false) {
        Write-OctopusInformation "The environment filter was provided and the environment $($itemDetails.EnvironmentId) didn't match any environments. Moving onto next task."
        continue
    }

    if ($octopusInformation.HasProjectFilter -eq $true -and $matchesProjectFilter -eq $false) {
        Write-OctopusInformation "The project filter was provided and the project $($itemDetails.ProjectId) didn't match any projects. Moving onto next task."
        continue
    }

    if ($octopusInformation.HasTenantFilter -eq $true -and $matchesTenantFilter -eq $false) {
        Write-OctopusInformation "The tenant filter was provided and the tenant $($itemDetails.TenantId) didn't match any tenants. Moving onto next task."
        continue
    }

    $matchingTasks += $task
}

if ($matchingTasks.Count -eq 0) {
    Write-OctopusSuccess "No matching tasks found, exiting."
    exit 0
}

Write-OctopusSuccess "Matching tasks found, checking where they are in the queue."
$matchingTaskCounter = 0

Write-OctopusInformation "Looping through all the queued tasks again to find which tasks to cancel."
foreach ($task in $queuedTasks) {
    if ((Test-OctopusListHasId -octopusList $matchingTasks -octopusId $task.Id)) {
        $matchingTaskCounter += 1
        Write-OctopusInformation "The task $($task.Id) is one we want to move to the top of queue, leaving as is."

        if ($matchingTaskCounter -eq $matchingTasks.Count) {
            Write-OctopusInformation "All the matching tasks we want to move to the top of the queue have been found, exiting"
            break
        }

        continue
    }

    $updatedTask = Invoke-OctopusApi -endPoint "tasks/$($task.Id)" -octopusUrl $octopusUrl -spaceId $null -apiKey $octopusApiKey -method "GET" -ignoreCache $true
    if ($updatedTask.HasBeenPickedUpByProcessor -eq $true) {
        Write-OctopusInformation "The task $($task.Id) has already been picked up and started processing, moving on."
        continue
    }

    $canceledTaskResult = Invoke-OctopusApi -endPoint "tasks/$($task.Id)/cancel" -octopusUrl $octopusUrl -spaceId $null -apiKey $octopusApiKey -method "POST" -ignoreCache $true
    Write-OctopusSuccess "Task $($task.Description) has been successfully cancelled"

    if ($task.Name -eq "Deploy") {
        Write-OctopusInformation "Task $($task.Id) is a deployment, setting up a redeploy."

        $deploymentInfo = Get-DeploymentDetailsFromTask -deploymentTask $task -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey

        $bodyRaw = @{
            EnvironmentId            = $deploymentInfo.EnvironmentId
            ExcludedMachineIds       = $deploymentInfo.ExcludedMachineIds
            ForcePackageDownload     = $deploymentInfo.ForcePackageDownload
            ForcePackageRedeployment = $deploymentInfo.ForcePackageRedeployment
            FormValues               = $deploymentInfo.FormValues
            QueueTime                = $null
            QueueTimeExpiry          = $null
            ReleaseId                = $deploymentInfo.ReleaseId
            SkipActions              = $deploymentInfo.SkipActions
            SpecificMachineIds       = $deploymentInfo.SpecificMachineIds
            TenantId                 = $deploymentInfo.TenantId
            UseGuidedFailure         = $deploymentInfo.UseGuidedFailure
        }

        $newDeployment = Invoke-OctopusApi -endPoint "deployments" -spaceId $task.SpaceId -octopusUrl $octopusUrl -apiKey $octopusApiKey -method "POST" -item $bodyRaw

        Write-OctopusSuccess "$($task.Description) has been successfully resubmitted with the new id $($newDeployment.TaskId)"
    }

    if ($task.Name -eq "RunbookRun") {
        Write-OctopusInformation "Task $($task.Id) is a runbook run, configuring a re-run."

        $runbookInfo = Get-RunbookRunDetailsFromTask -runbookTask $task -octopusUrl $octopusUrl -octopusApiKey $octopusApiKey

        $bodyRaw = @{
            EnvironmentId            = $runbookInfo.EnvironmentId
            ExcludedMachineIds       = $runbookInfo.ExcludedMachineIds
            ForcePackageDownload     = $runbookInfo.ForcePackageDownload
            ForcePackageRedeployment = $runbookInfo.ForcePackageRedeployment
            FormValues               = $runbookInfo.FormValues
            QueueTime                = $null
            QueueTimeExpiry          = $null
            RunbookId                = $runbookInfo.RunbookId
            SkipActions              = $runbookInfo.SkipActions
            SpecificMachineIds       = $runbookInfo.SpecificMachineIds
            TenantId                 = $runbookInfo.TenantId
            UseGuidedFailure         = $runbookInfo.UseGuidedFailure
            FrozenRunbookProcessId   = $runbookInfo.FrozenRunbookProcessId
            RunbookSnapshotId        = $runbookInfo.RunbookSnapshotId
        }

        $newDeployment = Invoke-OctopusApi -endPoint "runbookRuns" -spaceId $task.SpaceId -octopusUrl $octopusUrl -apiKey $octopusApiKey -method "POST" -item $bodyRaw

        Write-OctopusSuccess "$($task.Description) has been successfully resubmitted with the new id $($newDeployment.TaskId)"
    }
}
```
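The `Name::SpaceName` filter syntax and the `Or`/`And` match semantics described above can be sketched in a few lines of Python (illustrative only; the script itself implements this in `Get-OctopusItemList` and the task-matching loop):

```python
def parse_filter_item(item):
    """Split a filter entry like 'Hello World::Default' into (name, space).

    Entries without '::' apply to every space in the space list, matching
    the behavior of Get-OctopusItemList above.
    """
    parts = item.split("::", 1)
    name = parts[0].strip()
    space = parts[1].strip() if len(parts) > 1 and parts[1].strip() else None
    return name, space


def task_matches(match_type, filters):
    """Apply the Or/And semantics to (has_filter, matched) pairs.

    A filter that was not supplied (has_filter=False) is excluded from
    the And check, mirroring the script's behavior.
    """
    if match_type == "Or":
        return any(matched for has_filter, matched in filters if has_filter)
    return all(matched for has_filter, matched in filters if has_filter)
```

With `matchType = "And"`, a task only has to satisfy the filters you actually supplied; leaving a filter blank never disqualifies a task.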
# Run a health check

Source: https://octopus.com/docs/octopus-rest-api/examples/tasks/run-healthcheck.md

This script demonstrates how to programmatically create and run a [health check](/docs/infrastructure/deployment-targets/machine-policies) task in Octopus Deploy.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Description for the health check task
- Timeout value (in minutes) for the task
- Machine timeout value (in minutes) for the health check to use when run against machines
- One of:
  - An environment name to run the health check task against or
  - A list of machine names to run the health check task against or
  - A combination of both environment and machines

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"
$Description = "Health check started from PowerShell script"
$TimeOutAfterMinutes = 5
$MachineTimeoutAfterMinutes = 5

# Choose an Environment, a set of machine names, or both.
$EnvironmentName = "Development"
$MachineNames = @()

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get EnvironmentId
$EnvironmentID = $null
if ([string]::IsNullOrWhiteSpace($EnvironmentName) -eq $False) {
    $EnvironmentID = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/environments/all" -Headers $header) | Where-Object {$_.Name -eq $EnvironmentName} | Select-Object -ExpandProperty Id -First 1
}

# Get MachineIds
$MachineIds = $null
if ($MachineNames.Count -gt 0) {
    $MachineIds = ((Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/machines/all" -Headers $header) | Where-Object {$MachineNames -contains $_.Name} | Select-Object -ExpandProperty Id) -Join ", "
}

# Create json payload
$jsonPayload = @{
    SpaceId = "$($space.Id)"
    Name = "Health"
    Description = $Description
    Arguments = @{
        Timeout = "$([TimeSpan]::FromMinutes($TimeOutAfterMinutes))"
        MachineTimeout = "$([TimeSpan]::FromMinutes($MachineTimeoutAfterMinutes))"
        EnvironmentId = $EnvironmentID
        MachineIds = $MachineIds
    }
}

# Create health check task
Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/tasks" -Body ($jsonPayload | ConvertTo-Json -Depth 10) -Headers $header
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "Default"
$Description = "Health check started from PowerShell script"
$TimeOutAfterMinutes = 5
$MachineTimeoutAfterMinutes = 5

# Choose an Environment, a set of machine names, or both.
$EnvironmentName = ""
$MachineNames = @()

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try {
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get EnvironmentId
    $EnvironmentID = $null
    if ([string]::IsNullOrWhiteSpace($EnvironmentName) -eq $False) {
        $EnvironmentID = $repositoryForSpace.Environments.FindByName($EnvironmentName).Id
    }

    # Get MachineIds
    $MachineIds = $null
    if ($MachineNames.Count -gt 0) {
        $MachineIds = ($repositoryForSpace.Machines.GetAll() | Where-Object {$MachineNames -contains $_.Name} | Select-Object -ExpandProperty Id) -Join ", "
    }

    # Execute health check
    $repositoryForSpace.Tasks.ExecuteHealthCheck($Description, $TimeOutAfterMinutes, $MachineTimeoutAfterMinutes, $EnvironmentID, $MachineIds)
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using System;
using System.Collections.Generic;
using System.Linq;
using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "Default";
var description = "Health check started from C# script";
var timeoutAfterMinutes = 5;
var machineTimeoutAfterMinutes = 5;
var environmentName = "Development";
var machineNames = new List<string>() { "octopus01-listening" };

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get EnvironmentId
    string environmentId = null;
    if (string.IsNullOrWhiteSpace(environmentName) == false)
    {
        environmentId = repositoryForSpace.Environments.FindByName(environmentName).Id;
    }

    // Get MachineIds
    string[] machineIds = null;
    if (machineNames.Any())
    {
        machineIds = repositoryForSpace.Machines.FindAll().Where(m => machineNames.Contains(m.Name)).Select(m => m.Id).ToArray();
    }

    repositoryForSpace.Tasks.ExecuteHealthCheck(description, timeoutAfterMinutes, machineTimeoutAfterMinutes, environmentId, machineIds);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests
from requests.api import get, head

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)

    else:
        return results

    # return results
    return items

def convert(seconds):
    seconds = seconds % (24 * 3600)
    hour = seconds // 3600
    seconds %= 3600
    minutes = seconds // 60
    seconds %= 60

    return "%d:%02d:%02d" % (hour, minutes, seconds)

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = "Default"
description = 'Health check started from Python script'
timeout_after_minutes = 5
machine_timeout_after_minutes = 5
environment_name = 'Development'
machine_names = [] # blank will check all machines in environment

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get environment
uri = '{0}/api/{1}/environments'.format(octopus_server_uri, space['Id'])
environments = get_octopus_resource(uri, headers)
environment = next((e for e in environments if e['Name'] == environment_name), None)

# Get machines to check
machines_to_check = []
uri = '{0}/api/{1}/machines?environmentids={2}'.format(octopus_server_uri, space['Id'], environment['Id'])
machines = get_octopus_resource(uri, headers)

for machine in machines:
    if len(machine_names) == 0:
        machines_to_check.append(machine['Id'])
    else:
        if machine['Name'] in machine_names:
            machines_to_check.append(machine['Id'])

# Construct payload
json_payload = {
    'SpaceId': space['Id'],
    'Name': 'Health',
    'Description': description,
    'Arguments': {
        'Timeout': convert((timeout_after_minutes * 60)),
        'MachineTimeout': convert((machine_timeout_after_minutes * 60)),
        'EnvironmentId': environment['Id'],
        'MachineIds': machines_to_check
    }
}

print(json_payload)

uri = '{0}/api/{1}/tasks'.format(octopus_server_uri, space['Id'])
response = requests.post(uri, headers=headers, json=json_payload)
response.raise_for_status()
```
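Note that the `Timeout` and `MachineTimeout` arguments are TimeSpan-formatted strings rather than integers: the PowerShell versions build them with `[TimeSpan]::FromMinutes()` and the Python version with its custom `convert` helper. A minimal sketch of the same conversion using Python's standard `timedelta` (shown for clarity; the rendering differs slightly from PowerShell's `00:05:00` but both are `hours:minutes:seconds`):

```python
from datetime import timedelta

def minutes_to_timespan(minutes):
    """Format a duration in minutes as an hours:minutes:seconds string,
    the shape of value the task Arguments above expect."""
    return str(timedelta(minutes=minutes))
```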
# Create a tenant

Source: https://octopus.com/docs/octopus-rest-api/examples/tenants/create-tenant.md

This script demonstrates how to programmatically create a new [tenant](/docs/tenants) in Octopus.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the tenant to create
- A list of Project names to connect the new tenant with
- A list of Environment names to connect the new tenant with
- A list of Tenant tags to use with the new tenant

:::div{.hint}
**Note:** In order for this script to execute correctly, please note the following:

- The projects provided must have the Multi-tenanted deployment setting enabled.
- The environments provided must exist.
- The optional tenant tags provided must exist.
:::

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

# Provide the space name
$spaceName = "Default"

# Provide a tenant name
$tenantName = "MyTenant"

# Provide project names which have multi-tenancy enabled in their settings.
$projectNames = @("MyProject")

# Provide the environments to connect to the projects.
$environmentNames = @("Development", "Test")

# Optionally, provide existing tenant tag sets you wish to apply.
$tenantTags = @("MyTagSet/Beta", "MyTagSet/Stable") # Format: TagSet/Tag

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get projects
$projectIds = @()
foreach ($projectName in $projectNames) {
    $projectIds += ((Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName}).Id
}

# Get environments
$environmentIds = @()
$environments = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/environments/all" -Headers $header) | Where-Object {$environmentNames -contains $_.Name}
foreach ($environment in $environments) {
    $environmentIds += $environment.Id
}

# Build project/environments
$projectEnvironments = @{}
foreach ($projectId in $projectIds) {
    $projectEnvironments.Add($projectId, $environmentIds)
}

# Build json payload
$jsonPayload = @{
    Name = $tenantName
    TenantTags = $tenantTags
    SpaceId = $space.Id
    ProjectEnvironments = $projectEnvironments
}

# Create tenant
Invoke-RestMethod -Method Post -Uri "$octopusURL/api/$($space.Id)/tenants" -Body ($jsonPayload | ConvertTo-Json -Depth 10) -Headers $header -ContentType "application/json"
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$tenantName = "MyTenant"
$projectNames = @("MyProject")
$environmentNames = @("Development", "Test")
$tenantTags = @("MyTagSet/Beta", "MyTagSet/Stable") # Format: TagSet/Tag

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try {
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get environment ids
    $environments = $repositoryForSpace.Environments.GetAll() | Where-Object {$environmentNames -contains $_.Name}

    # Get projects
    $projects = $repositoryForSpace.Projects.GetAll() | Where-Object {$projectNames -contains $_.Name}

    # Create project environments
    $projectEnvironments = New-Object Octopus.Client.Model.ReferenceCollection
    foreach ($environment in $environments) {
        $projectEnvironments.Add($environment.Id) | Out-Null
    }

    # Create new tenant resource
    $tenant = New-Object Octopus.Client.Model.TenantResource
    $tenant.Name = $tenantName

    # Add tenant tags
    foreach ($tenantTag in $tenantTags) {
        $tenant.TenantTags.Add($tenantTag) | Out-Null
    }

    # Add project environments
    foreach ($project in $projects) {
        $tenant.ProjectEnvironments.Add($project.Id, $projectEnvironments) | Out-Null
    }

    # Create the tenant
    $repositoryForSpace.Tenants.Create($tenant)
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using System;
using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string spaceName = "default";
string tenantName = "MyTenant";
string[] projectNames = { "MyProject" };
string[] environmentNames = { "Development", "Production" };
string[] tenantTags = { "MyTagSet/Beta", "MyTagSet/Stable" };

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get projects
    var projects = repositoryForSpace.Projects.FindByNames(projectNames);

    // Get environments
    var environments = repositoryForSpace.Environments.FindByNames(environmentNames);

    // Create project environments
    Octopus.Client.Model.ReferenceCollection projectEnvironments = new ReferenceCollection();
    foreach (var environment in environments)
    {
        projectEnvironments.Add(environment.Id);
    }

    // Create tenant object
    Octopus.Client.Model.TenantResource tenant = new TenantResource();
    tenant.Name = tenantName;

    foreach (string tenantTag in tenantTags)
    {
        tenant.TenantTags.Add(tenantTag);
    }

    foreach (var project in projects)
    {
        tenant.ProjectEnvironments.Add(project.Id, projectEnvironments);
    }

    // Create tenant
    repositoryForSpace.Tenants.Create(tenant);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests
from requests.api import get, head

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)

    else:
        return results

    # return results
    return items

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = 'Default'
tenant_name = 'MyTenant'
project_names = ['MyProject']
environment_names = ['Development', 'Test']
tenant_tags = ['TagSet/Tag'] # Format: TagSet/Tag

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get projects
uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)
tenantProjects = []
for project_name in project_names:
    project = next((x for x in projects if x['Name'] == project_name), None)
    if None != project:
        tenantProjects.append(project['Id'])
    else:
        print('{0} not found!'.format(project_name))

# Get environments
uri = '{0}/api/{1}/environments'.format(octopus_server_uri, space['Id'])
environments = get_octopus_resource(uri, headers)
tenantEnvironments = []
for environment_name in environment_names:
    environment = next((x for x in environments if x['Name'] == environment_name), None)
    if None != environment:
        tenantEnvironments.append(environment['Id'])

# Create project/environment dictionary
projectEnvironments = {}
for project in tenantProjects:
    projectEnvironments[project] = tenantEnvironments

# Create new Tenant
tenant = {
    'Name': tenant_name,
    'TenantTags': tenant_tags,
    'SpaceId': space['Id'],
    'ProjectEnvironments': projectEnvironments
}

uri = '{0}/api/{1}/tenants'.format(octopus_server_uri, space['Id'])
response = requests.post(uri, headers=headers, json=tenant)
response.raise_for_status()
```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" tenantName := "MyTenant" environmentNames := []string{"Development", "Test"} projectNames := []string{"MyProject"} tenantTags := []string{"TagSet/Tag"} projectEnvironments := make(map[string][]string) // Get reference to space space := GetSpace(apiURL, APIKey, spaceName) // Loop through environments for i := 0; i < len(projectNames); i++ { project := GetProject(apiURL, APIKey, space, projectNames[i]) environmentIds := []string{} for j := 0; j < len(environmentNames); j++ { environment := GetEnvironment(apiURL, APIKey, space, environmentNames[j]) environmentIds = append(environmentIds, environment.ID) } projectEnvironments[project.ID] = environmentIds } // Create new tenant tenant := octopusdeploy.NewTenant(tenantName) tenant.SpaceID = space.ID tenant.ProjectEnvironments = projectEnvironments tenant.TenantTags = tenantTags client := octopusAuth(apiURL, APIKey, space.ID) client.Tenants.Add(tenant) } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := 
octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } func GetEnvironment(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, environmentName string) *octopusdeploy.Environment { // Get client for space client := octopusAuth(octopusURL, APIKey, space.ID) // Get environment environmentsQuery := octopusdeploy.EnvironmentsQuery { Name: environmentName, } environments, err := client.Environments.Get(environmentsQuery) if err != nil { log.Println(err) } // Loop through results for _, environment := range environments.Items { if environment.Name == environmentName { return environment } } return nil } ```
Java

```java
import com.octopus.sdk.Repository;
import com.octopus.sdk.domain.Project;
import com.octopus.sdk.domain.Space;
import com.octopus.sdk.domain.Tenant;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;
import com.octopus.sdk.model.tenant.TenantResource;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.TreeSet;

import com.google.common.collect.Lists;

public class CreateTenant {

  static final String octopusServerUrl = "http://localhost:8065";
  // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();

    final List<String> environmentNames = Lists.newArrayList("Development", "Test");
    final List<String> projectNames = Lists.newArrayList("MyProject");
    final Set<String> tenantTags = Collections.singleton("TagSet/Tag");

    final Repository repo = new Repository(client);
    final Optional<Space> space = repo.spaces().getByName("TheSpaceName");
    if (!space.isPresent()) {
      System.out.println("No space named 'TheSpaceName' exists on server");
      return;
    }

    final Map<String, Set<String>> projIdToEnvIdsMap = new HashMap<>();
    for (final String projName : projectNames) {
      final Project project =
          space
              .get()
              .projects()
              .getByName(projName)
              .orElseThrow(() -> new IllegalArgumentException("No project called " + projName));
      final Set<String> envIds = new TreeSet<>();
      for (final String envName : environmentNames) {
        space
            .get()
            .environments()
            .getByName(envName)
            .ifPresent(env -> envIds.add(env.getProperties().getId()));
      }
      projIdToEnvIdsMap.put(project.getProperties().getId(), envIds);
    }

    final TenantResource newTenant = new TenantResource("newTenant");
    newTenant.setProjectEnvironments(projIdToEnvIdsMap);
    newTenant.tenantTags(tenantTags);
    final Tenant createdTenant = space.get().tenants().create(newTenant);
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData =
        new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    final OctopusClient client = OctopusClientFactory.createClient(connectData);
    return client;
  }
}
```
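All of the examples above build the same core data structure: a `ProjectEnvironments` map from project ID to the list of environment IDs the tenant is connected to, sent alongside the tenant name and tags. As a minimal sketch of just that shape (the helper name and IDs are illustrative placeholders, not part of the Octopus API):

```python
def build_project_environments(project_ids, environment_ids):
    """Connect every project to the same list of environments for a new tenant."""
    return {project_id: list(environment_ids) for project_id in project_ids}

# Assemble the tenant payload the scripts above POST to /api/{space}/tenants
tenant = {
    'Name': 'MyTenant',
    'TenantTags': ['TagSet/Tag'],
    'SpaceId': 'Spaces-1',
    'ProjectEnvironments': build_project_environments(
        ['Projects-1'], ['Environments-1', 'Environments-2']),
}
print(tenant['ProjectEnvironments'])
```

If different projects should be connected to different environments, build the map per project instead of reusing one environment list.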
# Deactivate tenants

Source: https://octopus.com/docs/octopus-rest-api/examples/tenants/deactivate-tenant.md

In 2025.1, Octopus added support for deactivating tenants. Inactive tenants do not allow deployments or runbook runs, but they can still be edited. They are also excluded from license calculations, letting you effectively archive unused tenants and re-enable them later. Inactive tenants are shown with grayed-out text and are not available for selection on the deployment or runbook run pages. If a deployment is created for an inactive tenant via the API or CLI, an exception is thrown.

This script demonstrates how to programmatically deactivate a tenant.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the tenant
- Boolean value for enabled

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"
$tenantName = "MyTenant"
$tenantEnabled = $true

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get tenant
$tenant = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/tenants/all" -Headers $header) | Where-Object {$_.Name -eq $tenantName}

# Enable/disable tenant
$tenant.IsDisabled = !$tenantEnabled

# Update tenant
Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/tenants/$($tenant.Id)" -Headers $header -Body ($tenant | ConvertTo-Json -Depth 10)
```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "c:\octopus.client\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $tenantName = "MyTenant" $tenantEnabled = $true $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get tenant $tenant = $repositoryForSpace.Tenants.FindByName($tenantName) # Enable/disable tenant $tenant.IsDisabled = !$tenantEnabled # Update tenant $repositoryForSpace.Tenants.Modify($tenant) } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "default"; var tenantName = "MyTenant"; bool enabled = false; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get tenant var tenant = repositoryForSpace.Tenants.FindByName(tenantName); // Enable/disable tenant tenant.IsDisabled = !enabled; //update tenant repositoryForSpace.Tenants.Modify(tenant); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3 ```python import json import requests octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} def get_octopus_resource(uri): response = requests.get(uri, headers=headers) response.raise_for_status() return json.loads(response.content.decode('utf-8')) def get_by_name(uri, name): resources = get_octopus_resource(uri) return next((x for x in resources if x['Name'] == name), None) space_name = 'Default' tenant_name = 'Your Tenant Name' disable_tenant = False space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name) tenant = get_by_name('{0}/{1}/tenants/all'.format(octopus_server_uri, space['Id']), tenant_name) tenant['IsDisabled'] = disable_tenant uri = '{0}/{1}/tenants/{2}'.format(octopus_server_uri, space['Id'], tenant['Id']) response = requests.put(uri, headers=headers, json=tenant) response.raise_for_status() ```
Go ```go package main import ( "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/v2/pkg/client" "github.com/OctopusDeploy/go-octopusdeploy/v2/pkg/spaces" "github.com/OctopusDeploy/go-octopusdeploy/v2/pkg/tenants" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" tenantName := "MyTenant" enabled := true space := GetSpace(apiURL, APIKey, spaceName) if space == nil { log.Println(err) } client := octopusAuth(apiURL, APIKey, space.ID) tenant := GetTenantByName(client, tenantName) if tenant == nil { log.Println(err) } tenant.IsDisabled = !enabled updatedTenant, err := client.Tenants.Update(tenant) if err != nil { log.Println(err) } log.Printf("Tenant '%s' updated successfully. IsDisabled: %v", updatedTenant.Name, updatedTenant.IsDisabled) } func octopusAuth(octopusURL *url.URL, APIKey string, spaceID string) *client.Client { client, err := client.NewClient(nil, octopusURL, APIKey, spaceID) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *spaces.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := spaces.SpacesQuery{ PartialName: spaceName, } spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetTenantByName(client *client.Client, tenantName string) *tenants.Tenant { tenantQuery := tenants.TenantsQuery{ Name: tenantName, } tenants, err := client.Tenants.Get(tenantQuery) if err != nil { log.Println(err) } if len(tenants.Items) == 1 { return tenants.Items[0] } return nil } ```
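Each of the scripts above performs the same one-field change: fetch the tenant resource, flip `IsDisabled`, and `PUT` it back. That transformation can be isolated and checked without a server. A hedged sketch (the helper name is made up for illustration; the `IsDisabled` field matches the scripts above):

```python
def set_tenant_enabled(tenant, enabled):
    """Return a copy of a tenant resource with IsDisabled set from the desired enabled state."""
    updated = dict(tenant)
    updated['IsDisabled'] = not enabled
    return updated

# Example tenant resource as returned by /api/{space}/tenants/{id} (trimmed)
tenant = {'Id': 'Tenants-1', 'Name': 'MyTenant', 'IsDisabled': False}

deactivated = set_tenant_enabled(tenant, enabled=False)
print(deactivated['IsDisabled'])  # True: ready to PUT back to the tenants endpoint
```

Note the inversion: the scripts expose an "enabled" toggle to the caller, while the API stores the negated `IsDisabled` flag.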
# Update tenant variables

Source: https://octopus.com/docs/octopus-rest-api/examples/tenants/update-tenant-variable.md

These scripts demonstrate how to programmatically update tenant variables.

## Update project tenant variables

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the tenant
- Name of the Project template
- The new variable value
- Choose whether the new variable value is bound to an Octopus variable value e.g. `#{MyVariable}`
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "Default" # Name of the space $tenantName = "TenantName" # The tenant name $projectVariableTemplateName = "ProjectTemplateName" # Choose the template name $newValue = "NewValue" # Choose a new variable value, assumes same per environment $NewValueIsBoundToOctopusVariable=$False # Choose $True if the $newValue is an Octopus variable e.g. #{SomeValue} # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName} # Get Tenant $tenantsSearch = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/tenants?name=$tenantName" -Headers $header) $tenant = $tenantsSearch.Items | Select-Object -First 1 # Get Project Tenant Variables (including missing variables) $projectVariablesUri = "$octopusURL/api/$($space.Id)/tenants/$($tenant.Id)/projectvariables?includeMissingVariables=true" $projectVariables = (Invoke-RestMethod -Method Get -Uri $projectVariablesUri -Headers $header) # Build update payload $updatePayload = @{ Variables = @() } # Loop through project variables foreach ($variable in $projectVariables.Variables) { if ($variable.Template.Name -eq $projectVariableTemplateName) { Write-Host "Found project variable template: $projectVariableTemplateName (Template ID: $($variable.Template.Id), Project ID: $($variable.ProjectId))" # Create new variable entry $variableEntry = @{ ProjectId = $variable.ProjectId TemplateId = $variable.Template.Id Scope = @{ EnvironmentIds = $variable.Scope.EnvironmentIds } } # Handle sensitive values if($variable.Template.DisplaySettings["Octopus.ControlType"] -eq "Sensitive") { if($NewValueIsBoundToOctopusVariable -eq $True) { $variableEntry.Value = $newValue } else { $variableEntry.Value = @{ HasValue = $true NewValue = 
$newValue } } Write-Host "Updated sensitive variable for environments: $($variable.Scope.EnvironmentIds -join ', ')" } else { $variableEntry.Value = $newValue Write-Host "Updated variable value to '$newValue' for environments: $($variable.Scope.EnvironmentIds -join ', ')" } $updatePayload.Variables += $variableEntry } else { # Keep existing variables unchanged $updatePayload.Variables += @{ Id = $variable.Id ProjectId = $variable.ProjectId TemplateId = $variable.TemplateId Value = $variable.Value Scope = $variable.Scope } } } # Handle variables that need to be created if ($projectVariables.MissingVariables) { foreach ($missingVariable in $projectVariables.MissingVariables) { if ($missingVariable.Template.Name -eq $projectVariableTemplateName) { Write-Host "Found missing project variable template: $projectVariableTemplateName (Template ID: $($missingVariable.Template.Id), Project ID: $($missingVariable.ProjectId))" # Create new variable entry for missing variable $variableEntry = @{ ProjectId = $missingVariable.ProjectId TemplateId = $missingVariable.Template.Id Scope = @{ EnvironmentIds = $missingVariable.Scope.EnvironmentIds } } # Handle sensitive values if($missingVariable.Template.DisplaySettings["Octopus.ControlType"] -eq "Sensitive") { if($NewValueIsBoundToOctopusVariable -eq $True) { $variableEntry.Value = $newValue } else { $variableEntry.Value = @{ HasValue = $true NewValue = $newValue } } Write-Host "Created sensitive variable for missing template" } else { $variableEntry.Value = $newValue Write-Host "Created variable value '$newValue' for missing template" } $updatePayload.Variables += $variableEntry } } } # Update project variables Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/tenants/$($tenant.Id)/projectvariables" -Headers $header -Body ($updatePayload | ConvertTo-Json -Depth 10) Write-Host "Successfully updated project tenant variables" ```
PowerShell (Octopus.Client) ```powershell # You can get this dll from your Octopus Server/Tentacle installation directory or from # https://www.nuget.org/packages/Octopus.Client/ Add-Type -Path 'Octopus.Client.dll' # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "Default" # Name of the Space $tenantName = "TenantName" # The tenant name $projectVariableTemplateName = "ProjectTemplateName" # Choose the template Name $newValue = "NewValue" # Choose a new variable value $NewValueIsBoundToOctopusVariable=$False # Choose $True if the $newValue is an Octopus variable e.g. #{SomeValue} $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $spaceRepository = $client.ForSpace($space) # Get Tenant $tenant = $spaceRepository.Tenants.FindByName($tenantName) # Get Project Tenant Variables (including missing variables) $projectVariablesRequest = New-Object Octopus.Client.Model.TenantVariables.GetProjectVariablesByTenantIdRequest($tenant.Id, $space.Id) $projectVariablesRequest.IncludeMissingVariables = $true $projectVariables = $spaceRepository.TenantVariables.Get($projectVariablesRequest) # Build update payload $variablesToModify = @() # Loop through project variables foreach ($variable in $projectVariables.Variables) { if ($variable.Template.Name -eq $projectVariableTemplateName) { Write-Host "Found project variable template: $projectVariableTemplateName (Template ID: $($variable.Template.Id), Project ID: $($variable.ProjectId))" # Handle sensitive values if($variable.Template.DisplaySettings["Octopus.ControlType"] -eq "Sensitive") { if($NewValueIsBoundToOctopusVariable -eq $True) { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $false) } else { 
$newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource $newPropertyValue.SensitiveValue = @{ HasValue = $true NewValue = $newValue } $newPropertyValue.IsSensitive = $true } Write-Host "Updated sensitive variable for environments: $($variable.Scope.EnvironmentIds -join ', ')" } else { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $false) Write-Host "Updated variable value to '$newValue' for environments: $($variable.Scope.EnvironmentIds -join ', ')" } # Create new payload entry $variablePayload = New-Object Octopus.Client.Model.TenantVariables.TenantProjectVariablePayload( $variable.ProjectId, $variable.TemplateId, $newPropertyValue, $variable.Scope ) $variablesToModify += $variablePayload } else { # Keep existing variables unchanged $variablePayload = New-Object Octopus.Client.Model.TenantVariables.TenantProjectVariablePayload( $variable.ProjectId, $variable.TemplateId, $variable.Value, $variable.Scope ) $variablePayload.Id = $variable.Id $variablesToModify += $variablePayload } } # Handle variables that need to be created if ($projectVariables.MissingVariables) { foreach ($missingVariable in $projectVariables.MissingVariables) { if ($missingVariable.Template.Name -eq $projectVariableTemplateName) { Write-Host "Found missing project variable template: $projectVariableTemplateName (Template ID: $($missingVariable.Template.Id), Project ID: $($missingVariable.ProjectId))" # Handle sensitive values if($missingVariable.Template.DisplaySettings["Octopus.ControlType"] -eq "Sensitive") { if($NewValueIsBoundToOctopusVariable -eq $True) { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $false) } else { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource $newPropertyValue.SensitiveValue = @{ HasValue = $true NewValue = $newValue } $newPropertyValue.IsSensitive = $true } Write-Host "Created sensitive variable for missing template" } else { $newPropertyValue = 
New-Object Octopus.Client.Model.PropertyValueResource($newValue, $false) Write-Host "Created variable value '$newValue' for missing template" } # Create new payload entry for missing variable $variablePayload = New-Object Octopus.Client.Model.TenantVariables.TenantProjectVariablePayload( $missingVariable.ProjectId, $missingVariable.TemplateId, $newPropertyValue, $missingVariable.Scope ) $variablesToModify += $variablePayload } } } # Update project variables $modifyProjectCommand = New-Object Octopus.Client.Model.TenantVariables.ModifyProjectVariablesByTenantIdCommand($tenant.Id, $space.Id, $variablesToModify) $spaceRepository.TenantVariables.Modify($modifyProjectCommand) | Out-Null Write-Host "Successfully updated project tenant variables" } catch { Write-Host $_.Exception.Message } ```
C# ```csharp // If using .net Core, be sure to add the NuGet package of System.Security.Permissions #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; using Octopus.Client.Model.TenantVariables; var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; var spaceName = "Default"; var tenantName = "TenantName"; var projectVariableTemplateName = "ProjectTemplateName"; var newValue = "NewValue"; var NewValueIsBoundToOctopusVariable = false; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get Tenant var tenant = repositoryForSpace.Tenants.FindByName(tenantName); // Get Project Tenant Variables (including missing variables) var projectVariablesRequest = new GetProjectVariablesByTenantIdRequest(tenant.Id, space.Id) { IncludeMissingVariables = true }; var projectVariables = repositoryForSpace.TenantVariables.Get(projectVariablesRequest); // Build update payload var variablesToModify = new List<TenantProjectVariablePayload>(); // Loop through project variables foreach (var variable in projectVariables.Variables) { if (variable.Template.Name == projectVariableTemplateName) { Console.WriteLine($"Found project variable template: {projectVariableTemplateName} (Template ID: {variable.Template.Id}, Project ID: {variable.ProjectId})"); PropertyValueResource newPropertyValue; // Handle sensitive values if (variable.Template.DisplaySettings.ContainsKey("Octopus.ControlType") && variable.Template.DisplaySettings["Octopus.ControlType"] == "Sensitive") { if (NewValueIsBoundToOctopusVariable) { newPropertyValue = new PropertyValueResource(newValue, false); } else { newPropertyValue = new PropertyValueResource(newValue, true); } Console.WriteLine($"Updated sensitive variable for environments: {string.Join(", 
", variable.Scope.EnvironmentIds)}"); } else { newPropertyValue = new PropertyValueResource(newValue, false); Console.WriteLine($"Updated variable value to '{newValue}' for environments: {string.Join(", ", variable.Scope.EnvironmentIds)}"); } // Create new payload entry var variablePayload = new TenantProjectVariablePayload( variable.ProjectId, variable.TemplateId, newPropertyValue, variable.Scope ); variablesToModify.Add(variablePayload); } else { // Keep existing variables unchanged var variablePayload = new TenantProjectVariablePayload( variable.ProjectId, variable.TemplateId, variable.Value, variable.Scope ) { Id = variable.Id }; variablesToModify.Add(variablePayload); } } // Handle variables that need to be created if (projectVariables.MissingVariables != null) { foreach (var missingVariable in projectVariables.MissingVariables) { if (missingVariable.Template.Name == projectVariableTemplateName) { Console.WriteLine($"Found missing project variable template: {projectVariableTemplateName} (Template ID: {missingVariable.Template.Id}, Project ID: {missingVariable.ProjectId})"); PropertyValueResource newPropertyValue; // Handle sensitive values if (missingVariable.Template.DisplaySettings.ContainsKey("Octopus.ControlType") && missingVariable.Template.DisplaySettings["Octopus.ControlType"] == "Sensitive") { if (NewValueIsBoundToOctopusVariable) { newPropertyValue = new PropertyValueResource(newValue, false); } else { // For sensitive variables, use the isSensitive parameter newPropertyValue = new PropertyValueResource(newValue, true); } Console.WriteLine("Created sensitive variable for missing template"); } else { newPropertyValue = new PropertyValueResource(newValue, false); Console.WriteLine($"Created variable value '{newValue}' for missing template"); } // Create new payload entry for missing variable var variablePayload = new TenantProjectVariablePayload( missingVariable.ProjectId, missingVariable.TemplateId, newPropertyValue, missingVariable.Scope ); 
variablesToModify.Add(variablePayload); } } } // Update project variables var modifyProjectCommand = new ModifyProjectVariablesByTenantIdCommand(tenant.Id, space.Id, variablesToModify.ToArray()); repositoryForSpace.TenantVariables.Modify(modifyProjectCommand); Console.WriteLine("Successfully updated project tenant variables"); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3 ```python import json import requests def get_octopus_resource(uri, headers, skip_count = 0): items = [] skip_querystring = "" if '?' in uri: skip_querystring = '&skip=' else: skip_querystring = '?skip=' response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers) response.raise_for_status() # Get results of API call results = json.loads(response.content.decode('utf-8')) # Store results if 'Items' in results.keys(): items += results['Items'] # Check to see if there are more results if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']): skip_count += results['ItemsPerPage'] items += get_octopus_resource(uri, headers, skip_count) else: return results return items octopus_server_uri = 'https://your-octopus-url' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} space_name = "Default" tenant_name = "MyTenant" project_variable_template_name = "ProjectTemplateName" new_value = "MyValue" new_value_is_bound_to_octopus_variable = False # Get space uri = f'{octopus_server_uri}/api/spaces' spaces = get_octopus_resource(uri, headers) space = next((x for x in spaces if x['Name'] == space_name), None) # Get Tenant uri = '{0}/api/{1}/tenants'.format(octopus_server_uri, space['Id']) tenants = get_octopus_resource(uri, headers) tenant = next((t for t in tenants if t['Name'] == tenant_name), None) # Get Project Tenant Variables (including missing variables) uri = '{0}/api/{1}/tenants/{2}/projectvariables?includeMissingVariables=true'.format(octopus_server_uri, space['Id'], tenant['Id']) project_variables = requests.get(uri, headers=headers).json() update_payload = { 'Variables': [] } # Loop through project variables for variable in project_variables['Variables']: if variable['Template']['Name'] == project_variable_template_name: print(f"Found project variable template: {project_variable_template_name} (Template ID: {variable['Template']['Id']}, Project ID: {variable['ProjectId']})") # Create
new variable entry variable_entry = { 'ProjectId': variable['ProjectId'], 'TemplateId': variable['Template']['Id'], 'Scope': { 'EnvironmentIds': variable['Scope']['EnvironmentIds'] } } # Handle sensitive values if variable['Template']['DisplaySettings'].get('Octopus.ControlType') == 'Sensitive': if new_value_is_bound_to_octopus_variable: variable_entry['Value'] = new_value else: variable_entry['Value'] = { 'HasValue': True, 'NewValue': new_value } print(f"Updated sensitive variable for environments: {', '.join(variable['Scope']['EnvironmentIds'])}") else: variable_entry['Value'] = new_value print(f"Updated variable value to '{new_value}' for environments: {', '.join(variable['Scope']['EnvironmentIds'])}") update_payload['Variables'].append(variable_entry) else: # Keep existing variables unchanged update_payload['Variables'].append({ 'Id': variable['Id'], 'ProjectId': variable['ProjectId'], 'TemplateId': variable['TemplateId'], 'Value': variable['Value'], 'Scope': variable['Scope'] }) # Handle variables that need to be created if 'MissingVariables' in project_variables and project_variables['MissingVariables']: for missing_variable in project_variables['MissingVariables']: if missing_variable['Template']['Name'] == project_variable_template_name: print(f"Found missing project variable template: {project_variable_template_name} (Template ID: {missing_variable['Template']['Id']}, Project ID: {missing_variable['ProjectId']})") # Create new variable entry for missing variable variable_entry = { 'ProjectId': missing_variable['ProjectId'], 'TemplateId': missing_variable['Template']['Id'], 'Scope': { 'EnvironmentIds': missing_variable['Scope']['EnvironmentIds'] } } # Handle sensitive values if missing_variable['Template']['DisplaySettings'].get('Octopus.ControlType') == 'Sensitive': if new_value_is_bound_to_octopus_variable: variable_entry['Value'] = new_value else: variable_entry['Value'] = { 'HasValue': True, 'NewValue': new_value } print("Created sensitive variable for 
missing template") else: variable_entry['Value'] = new_value print(f"Created variable value '{new_value}' for missing template") update_payload['Variables'].append(variable_entry) # Update project variables response = requests.put(f'{octopus_server_uri}/api/{space["Id"]}/tenants/{tenant["Id"]}/projectvariables', headers=headers, json=update_payload) response.raise_for_status() print("Successfully updated project tenant variables") ```
## Update common tenant variables

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the tenant
- Name of the Library template
- The new variable value
- Choose whether the new variable value is bound to an Octopus variable value e.g. `#{MyVariable}`
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "Default" # Name of the space $tenantName = "TenantName" # The tenant name $commonVariableTemplateName = "CommonTemplateName" # Choose the template name $newValue = "NewValue" # Choose a new variable value, assumes same per environment $NewValueIsBoundToOctopusVariable=$False # Choose $True if the $newValue is an Octopus variable e.g. #{SomeValue} # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName} # Get Tenant $tenantsSearch = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/tenants?name=$tenantName" -Headers $header) $tenant = $tenantsSearch.Items | Select-Object -First 1 # Get Common Tenant Variables (including missing variables) $commonVariablesUri = "$octopusURL/api/$($space.Id)/tenants/$($tenant.Id)/commonvariables?includeMissingVariables=true" $commonVariables = (Invoke-RestMethod -Method Get -Uri $commonVariablesUri -Headers $header) # Build update payload $updatePayload = @{ Variables = @() } # Loop through common variables foreach ($variable in $commonVariables.Variables) { if ($variable.Template.Name -eq $commonVariableTemplateName) { Write-Host "Found common variable template: $commonVariableTemplateName (Template ID: $($variable.Template.Id), Library Variable Set ID: $($variable.LibraryVariableSetId))" # Create new variable entry $variableEntry = @{ LibraryVariableSetId = $variable.LibraryVariableSetId TemplateId = $variable.Template.Id Scope = @{ EnvironmentIds = $variable.Scope.EnvironmentIds } } # Handle sensitive values if($variable.Template.DisplaySettings["Octopus.ControlType"] -eq "Sensitive") { if($NewValueIsBoundToOctopusVariable -eq $True) { $variableEntry.Value = $newValue } else { $variableEntry.Value = 
@{ HasValue = $true NewValue = $newValue } } Write-Host "Updated sensitive variable for environments: $($variable.Scope.EnvironmentIds -join ', ')" } else { $variableEntry.Value = $newValue Write-Host "Updated variable value to '$newValue' for environments: $($variable.Scope.EnvironmentIds -join ', ')" } $updatePayload.Variables += $variableEntry } else { # Keep existing variables unchanged $updatePayload.Variables += @{ Id = $variable.Id LibraryVariableSetId = $variable.LibraryVariableSetId TemplateId = $variable.TemplateId Value = $variable.Value Scope = $variable.Scope } } } # Handle variables that need to be created if ($commonVariables.MissingVariables) { foreach ($missingVariable in $commonVariables.MissingVariables) { if ($missingVariable.Template.Name -eq $commonVariableTemplateName) { Write-Host "Found missing common variable template: $commonVariableTemplateName (Template ID: $($missingVariable.Template.Id), Library Variable Set ID: $($missingVariable.LibraryVariableSetId))" # Create new variable entry for missing variable $variableEntry = @{ LibraryVariableSetId = $missingVariable.LibraryVariableSetId TemplateId = $missingVariable.Template.Id Scope = @{ EnvironmentIds = $missingVariable.Scope.EnvironmentIds } } # Handle sensitive values if($missingVariable.Template.DisplaySettings["Octopus.ControlType"] -eq "Sensitive") { if($NewValueIsBoundToOctopusVariable -eq $True) { $variableEntry.Value = $newValue } else { $variableEntry.Value = @{ HasValue = $true NewValue = $newValue } } Write-Host "Created sensitive variable for missing template" } else { $variableEntry.Value = $newValue Write-Host "Created variable value '$newValue' for missing template" } $updatePayload.Variables += $variableEntry } } } # Update common variables Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/tenants/$($tenant.Id)/commonvariables" -Headers $header -Body ($updatePayload | ConvertTo-Json -Depth 10) Write-Host "Successfully updated common tenant variables" ```
PowerShell (Octopus.Client) ```powershell # You can get this dll from your Octopus Server/Tentacle installation directory or from # https://www.nuget.org/packages/Octopus.Client/ Add-Type -Path 'Octopus.Client.dll' # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "Default" # Name of the Space $tenantName = "TenantName" # The tenant name $commonVariableTemplateName = "CommonTemplateName" # Choose the template Name $newValue = "NewValue" # Choose a new variable value $NewValueIsBoundToOctopusVariable=$False # Choose $True if the $newValue is an Octopus variable e.g. #{SomeValue} $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $spaceRepository = $client.ForSpace($space) # Get Tenant $tenant = $spaceRepository.Tenants.FindByName($tenantName) # Get Common Tenant Variables (including missing variables) $commonVariablesRequest = New-Object Octopus.Client.Model.TenantVariables.GetCommonVariablesByTenantIdRequest($tenant.Id, $space.Id) $commonVariablesRequest.IncludeMissingVariables = $true $commonVariables = $spaceRepository.TenantVariables.Get($commonVariablesRequest) # Build update payload $variablesToModify = @() # Loop through common variables foreach ($variable in $commonVariables.Variables) { if ($variable.Template.Name -eq $commonVariableTemplateName) { Write-Host "Found common variable template: $commonVariableTemplateName (Template ID: $($variable.Template.Id), Library Variable Set ID: $($variable.LibraryVariableSetId))" # Handle sensitive values if($variable.Template.DisplaySettings["Octopus.ControlType"] -eq "Sensitive") { if($NewValueIsBoundToOctopusVariable -eq $True) { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $false) } else { 
$newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $true) } Write-Host "Updated sensitive variable for environments: $($variable.Scope.EnvironmentIds -join ', ')" } else { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $false) Write-Host "Updated variable value to '$newValue' for environments: $($variable.Scope.EnvironmentIds -join ', ')" } # Create new payload entry $variablePayload = New-Object Octopus.Client.Model.TenantVariables.TenantCommonVariablePayload( $variable.LibraryVariableSetId, $variable.TemplateId, $newPropertyValue, $variable.Scope ) $variablesToModify += $variablePayload } else { # Keep existing variables unchanged $variablePayload = New-Object Octopus.Client.Model.TenantVariables.TenantCommonVariablePayload( $variable.LibraryVariableSetId, $variable.TemplateId, $variable.Value, $variable.Scope ) $variablePayload.Id = $variable.Id $variablesToModify += $variablePayload } } # Handle variables that need to be created if ($commonVariables.MissingVariables) { foreach ($missingVariable in $commonVariables.MissingVariables) { if ($missingVariable.Template.Name -eq $commonVariableTemplateName) { Write-Host "Found missing common variable template: $commonVariableTemplateName (Template ID: $($missingVariable.Template.Id), Library Variable Set ID: $($missingVariable.LibraryVariableSetId))" # Handle sensitive values if($missingVariable.Template.DisplaySettings["Octopus.ControlType"] -eq "Sensitive") { if($NewValueIsBoundToOctopusVariable -eq $True) { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $false) } else { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $true) } Write-Host "Created sensitive variable for missing template" } else { $newPropertyValue = New-Object Octopus.Client.Model.PropertyValueResource($newValue, $false) Write-Host "Created variable value '$newValue' for missing template" } # Create new 
payload entry for missing variable $variablePayload = New-Object Octopus.Client.Model.TenantVariables.TenantCommonVariablePayload( $missingVariable.LibraryVariableSetId, $missingVariable.TemplateId, $newPropertyValue, $missingVariable.Scope ) $variablesToModify += $variablePayload } } } # Update common variables $modifyCommonCommand = New-Object Octopus.Client.Model.TenantVariables.ModifyCommonVariablesByTenantIdCommand($tenant.Id, $space.Id, $variablesToModify) $spaceRepository.TenantVariables.Modify($modifyCommonCommand) | Out-Null Write-Host "Successfully updated common tenant variables" } catch { Write-Host $_.Exception.Message } ```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using System;
using System.Collections.Generic;
using Octopus.Client;
using Octopus.Client.Model;
using Octopus.Client.Model.TenantVariables;

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "Default";
var tenantName = "TenantName";
var commonVariableTemplateName = "CommonTemplateName";
var newValue = "NewValue";
var newValueIsBoundToOctopusVariable = false;

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get Tenant
    var tenant = repositoryForSpace.Tenants.FindByName(tenantName);

    // Get Common Tenant Variables (including missing variables)
    var commonVariablesRequest = new GetCommonVariablesByTenantIdRequest(tenant.Id, space.Id)
    {
        IncludeMissingVariables = true
    };
    var commonVariables = repositoryForSpace.TenantVariables.Get(commonVariablesRequest);

    // Build update payload
    var variablesToModify = new List<TenantCommonVariablePayload>();

    // Loop through common variables
    foreach (var variable in commonVariables.Variables)
    {
        if (variable.Template.Name == commonVariableTemplateName)
        {
            Console.WriteLine($"Found common variable template: {commonVariableTemplateName} (Template ID: {variable.Template.Id}, Library Variable Set ID: {variable.LibraryVariableSetId})");

            PropertyValueResource newPropertyValue;

            // Handle sensitive values
            if (variable.Template.DisplaySettings.ContainsKey("Octopus.ControlType") && variable.Template.DisplaySettings["Octopus.ControlType"] == "Sensitive")
            {
                if (newValueIsBoundToOctopusVariable)
                {
                    newPropertyValue = new PropertyValueResource(newValue, false);
                }
                else
                {
                    newPropertyValue = new PropertyValueResource(newValue, true);
                }
                Console.WriteLine($"Updated sensitive variable for environments: {string.Join(", ", variable.Scope.EnvironmentIds)}");
            }
            else
            {
                newPropertyValue = new PropertyValueResource(newValue, false);
                Console.WriteLine($"Updated variable value to '{newValue}' for environments: {string.Join(", ", variable.Scope.EnvironmentIds)}");
            }

            // Create new payload entry
            var variablePayload = new TenantCommonVariablePayload(
                variable.LibraryVariableSetId,
                variable.TemplateId,
                newPropertyValue,
                variable.Scope
            );
            variablesToModify.Add(variablePayload);
        }
        else
        {
            // Keep existing variables unchanged
            var variablePayload = new TenantCommonVariablePayload(
                variable.LibraryVariableSetId,
                variable.TemplateId,
                variable.Value,
                variable.Scope
            )
            {
                Id = variable.Id
            };
            variablesToModify.Add(variablePayload);
        }
    }

    // Handle variables that need to be created
    if (commonVariables.MissingVariables != null)
    {
        foreach (var missingVariable in commonVariables.MissingVariables)
        {
            if (missingVariable.Template.Name == commonVariableTemplateName)
            {
                Console.WriteLine($"Found missing common variable template: {commonVariableTemplateName} (Template ID: {missingVariable.Template.Id}, Library Variable Set ID: {missingVariable.LibraryVariableSetId})");

                PropertyValueResource newPropertyValue;

                // Handle sensitive values
                if (missingVariable.Template.DisplaySettings.ContainsKey("Octopus.ControlType") && missingVariable.Template.DisplaySettings["Octopus.ControlType"] == "Sensitive")
                {
                    if (newValueIsBoundToOctopusVariable)
                    {
                        newPropertyValue = new PropertyValueResource(newValue, false);
                    }
                    else
                    {
                        newPropertyValue = new PropertyValueResource(newValue, true);
                    }
                    Console.WriteLine("Created sensitive variable for missing template");
                }
                else
                {
                    newPropertyValue = new PropertyValueResource(newValue, false);
                    Console.WriteLine($"Created variable value '{newValue}' for missing template");
                }

                // Create new payload entry for missing variable
                var variablePayload = new TenantCommonVariablePayload(
                    missingVariable.LibraryVariableSetId,
                    missingVariable.TemplateId,
                    newPropertyValue,
                    missingVariable.Scope
                );
                variablesToModify.Add(variablePayload);
            }
        }
    }

    // Update common variables
    var modifyCommonCommand = new ModifyCommonVariablesByTenantIdCommand(tenant.Id, space.Id, variablesToModify.ToArray());
    repositoryForSpace.TenantVariables.Modify(modifyCommonCommand);
    Console.WriteLine("Successfully updated common tenant variables");
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

space_name = "Default"
tenant_name = "MyTenant"
common_variable_template_name = "CommonTemplateName"
new_value = "MyValue"
new_value_bound_to_octopus_variable = False

# Get space
uri = f'{octopus_server_uri}/api/spaces'
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get Tenant
uri = '{0}/api/{1}/tenants'.format(octopus_server_uri, space['Id'])
tenants = get_octopus_resource(uri, headers)
tenant = next((t for t in tenants if t['Name'] == tenant_name), None)

# Get Common Tenant Variables (including missing variables)
uri = '{0}/api/{1}/tenants/{2}/commonvariables?includeMissingVariables=true'.format(octopus_server_uri, space['Id'], tenant['Id'])
common_variables = requests.get(uri, headers=headers).json()

update_payload = {
    'Variables': []
}

# Loop through common variables
for variable in common_variables['Variables']:
    if variable['Template']['Name'] == common_variable_template_name:
        print(f"Found common variable template: {common_variable_template_name} (Template ID: {variable['Template']['Id']}, Library Variable Set ID: {variable['LibraryVariableSetId']})")

        # Create new variable entry
        variable_entry = {
            'LibraryVariableSetId': variable['LibraryVariableSetId'],
            'TemplateId': variable['Template']['Id'],
            'Scope': {
                'EnvironmentIds': variable['Scope']['EnvironmentIds']
            }
        }

        # Handle sensitive values
        if variable['Template']['DisplaySettings'].get('Octopus.ControlType') == 'Sensitive':
            if new_value_bound_to_octopus_variable:
                variable_entry['Value'] = new_value
            else:
                variable_entry['Value'] = {
                    'HasValue': True,
                    'NewValue': new_value
                }
            print(f"Updated sensitive variable for environments: {', '.join(variable['Scope']['EnvironmentIds'])}")
        else:
            variable_entry['Value'] = new_value
            print(f"Updated variable value to '{new_value}' for environments: {', '.join(variable['Scope']['EnvironmentIds'])}")

        update_payload['Variables'].append(variable_entry)
    else:
        # Keep existing variables unchanged
        update_payload['Variables'].append({
            'Id': variable['Id'],
            'LibraryVariableSetId': variable['LibraryVariableSetId'],
            'TemplateId': variable['TemplateId'],
            'Value': variable['Value'],
            'Scope': variable['Scope']
        })

# Handle variables that need to be created
if 'MissingVariables' in common_variables and common_variables['MissingVariables']:
    for missing_variable in common_variables['MissingVariables']:
        if missing_variable['Template']['Name'] == common_variable_template_name:
            print(f"Found missing common variable template: {common_variable_template_name} (Template ID: {missing_variable['Template']['Id']}, Library Variable Set ID: {missing_variable['LibraryVariableSetId']})")

            # Create new variable entry for missing variable
            variable_entry = {
                'LibraryVariableSetId': missing_variable['LibraryVariableSetId'],
                'TemplateId': missing_variable['Template']['Id'],
                'Scope': {
                    'EnvironmentIds': missing_variable['Scope']['EnvironmentIds']
                }
            }

            # Handle sensitive values
            if missing_variable['Template']['DisplaySettings'].get('Octopus.ControlType') == 'Sensitive':
                if new_value_bound_to_octopus_variable:
                    variable_entry['Value'] = new_value
                else:
                    variable_entry['Value'] = {
                        'HasValue': True,
                        'NewValue': new_value
                    }
                print("Created sensitive variable for missing template")
            else:
                variable_entry['Value'] = new_value
                print(f"Created variable value '{new_value}' for missing template")

            update_payload['Variables'].append(variable_entry)

# Update common variables
response = requests.put(f'{octopus_server_uri}/api/{space["Id"]}/tenants/{tenant["Id"]}/commonvariables', headers=headers, json=update_payload)
response.raise_for_status()
print("Successfully updated common tenant variables")
```
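The `get_octopus_resource` helper in the Python script pages through collection endpoints by re-requesting with an increasing `skip` count until a page comes back smaller than `ItemsPerPage`. The same logic can be sketched iteratively against a stubbed fetch function (the stub stands in for `requests.get` and is not part of the Octopus API):

```python
def get_all_items(fetch_page):
    """Accumulate 'Items' from a paged endpoint, advancing 'skip' until a short page."""
    items, skip = [], 0
    while True:
        page = fetch_page(skip)
        items += page['Items']
        if len(page['Items']) < page['ItemsPerPage']:
            return items
        skip += page['ItemsPerPage']

def fake_fetch(skip, total=70, per_page=30):
    # Stub simulating a 70-item collection served 30 items at a time
    return {'Items': list(range(skip, min(skip + per_page, total))),
            'ItemsPerPage': per_page}

print(len(get_all_items(fake_fetch)))  # 70
```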
# Add Microsoft Entra ID login to users

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/add-azure-ad-to-users.md

Octopus supports a number of external [authentication providers](/docs/security/authentication/), including [Microsoft Entra ID Authentication](/docs/security/authentication/azure-ad-authentication). If you want to use Microsoft Entra ID to authenticate but re-use existing Octopus user accounts, the easiest way is to add an Azure AD login:

:::figure
![Add a Microsoft Entra ID login to an Octopus user](/docs/img/octopus-rest-api/examples/users-and-teams/images/add-azure-ad-login.png)
:::

This script will add Microsoft Entra ID login details to Octopus user accounts.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- A list of users, supplied from either:
  - The path to a CSV file containing user records
  - The Octopus Username, Azure email address and (optionally) Azure display name
- (Optional) whether or not to update the Octopus user's email address
- (Optional) whether or not to update the Octopus user's display name
- (Optional) whether or not to continue to the next user if an error occurs
- (Optional) whether or not to force an update of the Azure AD identity if one already exists
- (Optional) whether or not to perform a dry run (What If?) and not perform any updates
- (Optional) whether or not to toggle debug (Verbose) logging

### Add Microsoft Entra ID identities to a single user

PowerShell (REST API)

```powershell
AddAzureADLogins -OctopusURL "https://your-octopus-url/" -OctopusAPIKey "API-YOUR-KEY" -OctopusUsername "OctoUser" -AzureEmailAddress "octouser@exampledomain.com" -AzureDisplayName "Octo User" -ContinueOnError $False -Force $False -WhatIf $False -DebugLogging $False
```

### Add Microsoft Entra ID identities for multiple users from a CSV file

PowerShell (REST API)

```powershell
AddAzureADLogins -OctopusURL "https://your-octopus-url/" -OctopusAPIKey "API-YOUR-KEY" -Path "/path/to/user_azure_ad_logins.csv" -ContinueOnError $False -Force $False -WhatIf $False -DebugLogging $False
```

### Example CSV file

An example of the expected CSV file format is shown below:

```
OctopusUsername, AzureEmailAddress, AzureDisplayName
OctoUser, octouser@exampledomain.com, Octo User
```

The first row should be the header row containing the following columns:

- `OctopusUsername`
- `AzureEmailAddress`
- `AzureDisplayName`

### Script
PowerShell (REST API) ```powershell function AddAzureADLogins( [Parameter(Mandatory=$True)] [String]$OctopusURL, [Parameter(Mandatory=$True)] [String]$OctopusAPIKey, [String]$Path, [String]$OctopusUsername, [String]$AzureEmailAddress, [String]$AzureDisplayName = $null, [Boolean]$UpdateOctopusEmailAddress = $False, [Boolean]$UpdateOctopusDisplayName = $False, [Boolean]$ContinueOnError = $False, [Boolean]$Force = $False, [Boolean]$WhatIf = $True, [Boolean]$DebugLogging = $False ) { Write-Host "OctopusURL: $OctopusURL" Write-Host "OctopusAPIKey: ********" Write-Host "Path: $Path" Write-Host "OctopusUsername: $OctopusUsername" Write-Host "AzureEmailAddress: $AzureEmailAddress" Write-Host "AzureDisplayName: $AzureDisplayName" Write-Host "UpdateOctopusEmailAddress: $UpdateOctopusEmailAddress" Write-Host "UpdateOctopusDisplayName: $UpdateOctopusDisplayName" Write-Host "ContinueOnError: $ContinueOnError" Write-Host "Force: $Force" Write-Host "WhatIf: $WhatIf" Write-Host "DebugLogging: $DebugLogging" Write-Host $("=" * 60) Write-Host if (-not [string]::IsNullOrWhiteSpace($OctopusURL)) { $OctopusURL = $OctopusURL.TrimEnd('/') } if($DebugLogging -eq $True) { $DebugPreference = "Continue" } $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $usersToUpdate = @() $recordsUpdated = 0 # Validate we have minimum required details. if ([string]::IsNullOrWhiteSpace($Path) -eq $true) { if([string]::IsNullOrWhiteSpace($OctopusUsername) -eq $true -or [string]::IsNullOrWhiteSpace($AzureEmailAddress) -eq $true) { Write-Warning "Path not supplied. OctopusUsername or AzureEmailAddress are either null, or an empty string." return } $usersToUpdate += [PSCustomObject]@{ OctopusUsername = $OctopusUsername AzureEmailAddress = $AzureEmailAddress AzureDisplayName = $AzureDisplayName } } else { # Validate path if(-not (Test-Path $Path)) { Write-Warning "Path '$Path' not found. Does a file exist at that location?" 
return } $usersToUpdate = Import-Csv -Path $Path -Delimiter "," } # Check if we have any users. If we do, get existing octopus users if($usersToUpdate.Count -gt 0) { Write-Host "Users to update: $($usersToUpdate.Count)" $ExistingOctopusUsers = @() $response = $null do { $uri = if ($response) { $octopusURL + $response.Links.'Page.Next' } else { "$OctopusURL/api/users" } $response = Invoke-RestMethod -Method Get -Uri $uri -Headers $header $ExistingOctopusUsers += $response.Items } while ($response.Links.'Page.Next') Write-Debug "Found $($ExistingOctopusUsers.Count) existing Octopus users" } else { Write-Host "No users to update, exiting." return } if($ExistingOctopusUsers.Count -le 0) { Write-Warning "No users found in Octopus, exiting." return } foreach($user in $usersToUpdate) { Write-Host "Working on user $($User.OctopusUsername)" try { $existingOctopusUser = $ExistingOctopusUsers | Where-Object {$_.Username -eq $user.OctopusUsername} | Select-Object -First 1 if($null -ne $ExistingOctopusUser) { Write-Debug "Found matching octopus user for $($user.OctopusUsername)" # Check if it's a service account if($existingOctopusUser.IsService -eq $True) { Write-Debug "User $($user.OctopusUsername) is a Service account. This user won't be updated..." continue } # Check if it's an active account if($existingOctopusUser.IsActive -eq $False) { Write-Debug "User $($user.OctopusUsername) is an inactive account. This user won't be updated..." continue } # Check for existing Microsoft Entra ID Identity first. $azureAdIdentity = $existingOctopusUser.Identities | Where-Object {$_.IdentityProviderName -eq "Azure AD"} | Select-Object -First 1 if($null -ne $azureAdIdentity) { Write-Debug "Found existing Microsoft Entra ID login for user $($user.OctopusUsername)" if($Force -eq $True) { Write-Debug "Force set to true. 
Replacing existing Microsoft Entra ID Claims for Display Name and Email for user $($user.OctopusUsername)" $azureAdIdentity.Claims.email.Value = $User.AzureEmailAddress $azureAdIdentity.Claims.dn.Value = $User.AzureDisplayName } else { Write-Debug "Force set to false. Skipping replacing existing Microsoft Entra ID Claims for Display Name and Email for user $($user.OctopusUsername)" } } else { Write-Debug "No existing Microsoft Entra ID login found for user $($user.OctopusUsername), creating new" $newAzureADIdentity = @{ IdentityProviderName = "Azure AD" Claims = @{ email = @{ Value = $User.AzureEmailAddress IsIdentifyingClaim = $True } dn = @{ Value = $User.AzureDisplayName IsIdentifyingClaim = $False } } } $existingOctopusUser.Identities += $newAzureADIdentity } # Update user's email address if set AND the value isn't empty. if($UpdateOctopusEmailAddress -eq $True -and -not([string]::IsNullOrWhiteSpace($User.AzureEmailAddress) -eq $true)) { Write-Debug "Setting Octopus email address to: $($User.AzureEmailAddress)" $existingOctopusUser.EmailAddress = $User.AzureEmailAddress } # Update user's display name if set AND the value isn't empty. if($UpdateOctopusDisplayName -eq $True -and -not([string]::IsNullOrWhiteSpace($User.AzureDisplayName) -eq $true)) { Write-Debug "Setting Octopus display name to: $($User.AzureDisplayName)" $existingOctopusUser.DisplayName = $User.AzureDisplayName } $userJsonPayload = $($existingOctopusUser | ConvertTo-Json -Depth 10) if($WhatIf -eq $True) { Write-Host "What If set to true, skipping update for user $($User.OctopusUsername). 
For details of the payload, set DebugLogging to True" Write-Debug "Would have done a PUT to $OctopusUrl/api/users/$($existingOctopusUser.Id) with body:" Write-Debug $userJsonPayload } else { Write-Host "Updating the user $($User.OctopusUsername) in Octopus Deploy" Invoke-RestMethod -Method PUT -Uri "$OctopusUrl/api/users/$($existingOctopusUser.Id)" -Headers $header -Body $userJsonPayload | Out-Null $recordsUpdated += 1 } } else { Write-Warning "No match found for an existing octopus user with Username: $($User.OctopusUsername)" } } catch { If($ContinueOnError -eq $true) { Write-Warning "Error encountered updating $($User.OctopusUsername): $($_.Exception.Message), continuing..." continue } else { throw } } } Write-Host "Updated $($recordsUpdated) user records." } ```
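The scripts read the CSV with the header row documented above (`OctopusUsername`, `AzureEmailAddress`, `AzureDisplayName`). A quick, hedged sketch of how to check that a file matches that shape before running them (the `validate_user_csv` helper is illustrative and not part of any script on this page):

```python
import csv
import io

REQUIRED_HEADER = ["OctopusUsername", "AzureEmailAddress", "AzureDisplayName"]

def validate_user_csv(text):
    """Parse CSV text and return the rows if the header matches the documented format."""
    # skipinitialspace tolerates the spaces after commas shown in the example file
    reader = csv.DictReader(io.StringIO(text), skipinitialspace=True)
    if reader.fieldnames != REQUIRED_HEADER:
        raise ValueError(f"Unexpected header: {reader.fieldnames}")
    return list(reader)

sample = ("OctopusUsername, AzureEmailAddress, AzureDisplayName\n"
          "OctoUser, octouser@exampledomain.com, Octo User\n")
rows = validate_user_csv(sample)
print(rows[0]["AzureEmailAddress"])  # octouser@exampledomain.com
```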
PowerShell (Octopus.Client) ```powershell # Load assembly Add-Type -Path 'path:\to\Octopus.Client.dll' function AddAzureLogins ( [Parameter(Mandatory=$True)] [String]$OctopusURL, [Parameter(Mandatory=$True)] [String]$OctopusAPIKey, [String]$Path, [String]$OctopusUsername, [String]$AzureEmailAddress, [String]$AzureDisplayName = $null, [Boolean]$UpdateOctopusEmailAddress = $False, [Boolean]$UpdateOctopusDisplayName = $False, [Boolean]$ContinueOnError = $False, [Boolean]$Force = $False, [Boolean]$WhatIf = $True, [Boolean]$DebugLogging = $False ) { Write-Host "OctopusURL: $OctopusURL" Write-Host "OctopusAPIKey: ********" Write-Host "Path: $Path" Write-Host "OctopusUsername: $OctopusUsername" Write-Host "AzureEmailAddress: $AzureEmailAddress" Write-Host "AzureDisplayName: $AzureDisplayName" Write-Host "UpdateOctopusEmailAddress: $UpdateOctopusEmailAddress" Write-Host "UpdateOctopusDisplayName: $UpdateOctopusDisplayName" Write-Host "ContinueOnError: $ContinueOnError" Write-Host "Force: $Force" Write-Host "WhatIf: $WhatIf" Write-Host "DebugLogging: $DebugLogging" Write-Host $("=" * 60) Write-Host if (-not [string]::IsNullOrWhiteSpace($OctopusURL)) { $OctopusURL = $OctopusURL.TrimEnd('/') } if($DebugLogging -eq $True) { $DebugPreference = "Continue" } $endpoint = New-Object Octopus.Client.OctopusServerEndpoint($OctopusURL, $OctopusAPIKey) $repository = New-Object Octopus.Client.OctopusRepository($endpoint) $client = New-Object Octopus.Client.OctopusClient($endpoint) $usersToUpdate = @() $recordsUpdated = 0 # Validate we have minimum required details. if ([string]::IsNullOrWhiteSpace($Path) -eq $true) { if([string]::IsNullOrWhiteSpace($OctopusUsername) -eq $true -or [string]::IsNullOrWhiteSpace($AzureEmailAddress) -eq $true) { Write-Warning "Path not supplied. OctopusUsername or AzureEmailAddress are either null, or an empty string." 
return } $usersToUpdate += [PSCustomObject]@{ OctopusUsername = $OctopusUsername AzureEmailAddress = $AzureEmailAddress AzureDisplayName = $AzureDisplayName } } else { # Validate path if(-not (Test-Path $Path)) { Write-Warning "Path '$Path' not found. Does a file exist at that location?" return } $usersToUpdate = Import-Csv -Path $Path -Delimiter "," } # Check if we have any users. If we do, get existing octopus users if($usersToUpdate.Count -gt 0) { Write-Host "Users to update: $($usersToUpdate.Count)" $ExistingOctopusUsers = @() # Loop through users foreach ($user in $usersToUpdate) { # Retrieve user account from Octopus Write-Host "Searching Octopus users for $($user.OctopusUsername) ..." $existingOctopusUser = $client.Repository.Users.FindByUsername($user.OctopusUsername) # Check for null if ($null -ne $existingOctopusUser) { # Check user types if ($existingOctopusUser.IsService) { # This is a service account and will not be updated Write-Warning "$($user.OctopusUsername) is a service account, skipping ..." continue } if ($existingOctopusUser.IsActive -eq $False) { # Inactive user skipping Write-Warning "$($user.OctopusUsername) is an inactive account, skipping ..." continue } # Check to see if there's already an Microsoft Entra ID identity $azureAdIdentity = $existingOctopusUser.Identities | Where-Object {$_.IdentityProviderName -eq "Azure AD"} if($null -ne $azureAdIdentity) { Write-Debug "Found existing Microsoft Entra ID login for user $($user.OctopusUsername)" if($Force -eq $True) { Write-Debug "Force set to true. Replacing existing Microsoft Entra ID Claims for Display Name and Email for user $($user.OctopusUsername)" $azureAdIdentity.Claims.email.Value = $User.AzureEmailAddress $azureAdIdentity.Claims.dn.Value = $User.AzureDisplayName } else { Write-Warning "Force set to false. 
Skipping replacing existing Microsoft Entra ID Claims for Display Name and Email for user $($user.OctopusUsername)" } } else { Write-Debug "No existing Microsoft Entra ID login found for user $($user.OctopusUsername), creating new" $newAzureADIdentity = New-Object Octopus.Client.Model.IdentityResource $newAzureADIdentity.IdentityProviderName = "Azure AD" $newEmailClaim = New-Object Octopus.Client.Model.IdentityClaimResource $newEmailClaim.IsIdentifyingClaim = $True $newEmailClaim.Value = $user.AzureEmailAddress $newAzureADIdentity.Claims.Add("email", $newEmailClaim) # Claims is a Dictionary object $newDisplayClaim = New-Object Octopus.Client.Model.IdentityClaimResource $newDisplayClaim.IsIdentifyingClaim = $False $newDisplayClaim.Value = $user.AzureDisplayName $newAzureADIdentity.Claims.Add("dn", $newDisplayClaim) $existingOctopusUser.Identities += $newAzureADIdentity # Identities is an array } # Update user's email address if set AND the value isn't empty. if($UpdateOctopusEmailAddress -eq $True -and -not([string]::IsNullOrWhiteSpace($User.AzureEmailAddress) -eq $true)) { Write-Debug "Setting Octopus email address to: $($User.AzureEmailAddress)" $existingOctopusUser.EmailAddress = $User.AzureEmailAddress } # Update user's display name if set AND the value isn't empty. if($UpdateOctopusDisplayName -eq $True -and -not([string]::IsNullOrWhiteSpace($User.AzureDisplayName) -eq $true)) { Write-Debug "Setting Octopus display name to: $($User.AzureDisplayName)" $existingOctopusUser.DisplayName = $User.AzureDisplayName } if($WhatIf -eq $True) { Write-Host "What If set to true, skipping update for user $($User.OctopusUsername). 
For details of the payload, set DebugLogging to True" Write-Debug "Would have called Modify for user $($User.OctopusUsername) with payload:" Write-Debug ($existingOctopusUser | ConvertTo-Json -Depth 10) } else { Write-Host "Updating the user $($User.OctopusUsername) in Octopus Deploy" $client.Repository.Users.Modify($existingOctopusUser) $recordsUpdated += 1 } } else { Write-Warning "$($user.OctopusUsername) not found!" } } Write-Host "Updated $($recordsUpdated) user records." } else { Write-Host "No users to update, exiting." return } } ```
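In every variant of this script, the Azure AD identity attached to an Octopus user has the same shape: an `email` claim flagged as the identifying claim (the value Octopus matches the Azure AD login against) and a `dn` display-name claim that is not identifying. A minimal sketch of that structure as plain data (field names are taken from the scripts; the helper itself is illustrative):

```python
def build_azure_ad_identity(email, display_name):
    """Shape of the identity record the scripts attach to an Octopus user."""
    return {
        "IdentityProviderName": "Azure AD",
        "Claims": {
            # The email claim is what Octopus uses to match the Azure AD login
            "email": {"Value": email, "IsIdentifyingClaim": True},
            # The display name claim is informational only
            "dn": {"Value": display_name, "IsIdentifyingClaim": False},
        },
    }

identity = build_azure_ad_identity("octouser@exampledomain.com", "Octo User")
print(identity["Claims"]["email"]["IsIdentifyingClaim"])  # True
```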
C#

```csharp
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

public class UserToUpdate
{
    public string OctopusUserName { get; set; }
    public string AzureEmailAddress { get; set; }
    public string AzureDisplayName { get; set; }
}

public static void AddAzureLogins(string OctopusUrl, string ApiKey, string Path = "", string OctopusUserName = "", string AzureEmailAddress = "", string AzureDisplayName = "", bool UpdateOctopusEmail = false, bool UpdateOctopusDisplayName = false, bool Force = false, bool WhatIf = false)
{
    // Display passed in information
    Console.WriteLine(string.Format("OctopusURL: {0}", OctopusUrl));
    Console.WriteLine("OctopusAPIKey: ****");
    Console.WriteLine(string.Format("OctopusUsername: {0}", OctopusUserName));
    Console.WriteLine(string.Format("AzureEmailAddress: {0}", AzureEmailAddress));
    Console.WriteLine(string.Format("AzureDisplayName: {0}", AzureDisplayName));
    Console.WriteLine(string.Format("UpdateOctopusEmailAddress: {0}", UpdateOctopusEmail.ToString()));
    Console.WriteLine(string.Format("UpdateOctopusDisplayName: {0}", UpdateOctopusDisplayName.ToString()));
    Console.WriteLine(string.Format("Force: {0}", Force.ToString()));
    Console.WriteLine(string.Format("WhatIf: {0}", WhatIf.ToString()));

    // Check to see if url is empty
    if (!string.IsNullOrWhiteSpace(OctopusUrl))
    {
        // Remove trailing /
        OctopusUrl = OctopusUrl.TrimEnd('/');
    }

    // Create Octopus.Client objects
    var endpoint = new Octopus.Client.OctopusServerEndpoint(OctopusUrl, ApiKey);
    var repository = new Octopus.Client.OctopusRepository(endpoint);
    var client = new Octopus.Client.OctopusClient(endpoint);

    // Declare collection of users to update
    var usersToUpdate = new System.Collections.Generic.List<UserToUpdate>();

    // Test to see if path was provided
    if (string.IsNullOrWhiteSpace(Path))
    {
        // Both the username and email address are needed to add a login
        if (!string.IsNullOrWhiteSpace(OctopusUserName) && !string.IsNullOrWhiteSpace(AzureEmailAddress))
        {
            // Create new user to update object
            var userToUpdate = new UserToUpdate();
            userToUpdate.AzureDisplayName = AzureDisplayName;
            userToUpdate.AzureEmailAddress = AzureEmailAddress;
            userToUpdate.OctopusUserName = OctopusUserName;

            // Add to collection
            usersToUpdate.Add(userToUpdate);
        }
    }
    else
    {
        // Read from csv
        using (var reader = new System.IO.StreamReader(Path))
        {
            // Skip the header row
            reader.ReadLine();
            while (!reader.EndOfStream)
            {
                var line = reader.ReadLine();
                var columns = line.Split(',');

                // Create new user to update object; columns match the documented
                // CSV header order: OctopusUsername, AzureEmailAddress, AzureDisplayName
                var userToUpdate = new UserToUpdate();
                userToUpdate.OctopusUserName = columns[0].Trim();
                userToUpdate.AzureEmailAddress = columns[1].Trim();
                userToUpdate.AzureDisplayName = columns[2].Trim();

                // Add to collection
                usersToUpdate.Add(userToUpdate);
            }
        }
    }

    // Check to see if we have anything to update
    if (usersToUpdate.Count > 0)
    {
        Console.WriteLine(string.Format("Users to update: {0}", usersToUpdate.Count));

        // Loop through collection
        foreach (var userToUpdate in usersToUpdate)
        {
            Console.WriteLine(string.Format("Searching for user {0}", userToUpdate.OctopusUserName));
            var existingOctopusUser = client.Repository.Users.FindByUsername(userToUpdate.OctopusUserName);

            // Check to see if something was returned
            if (null != existingOctopusUser)
            {
                // Check to see if it is a service account
                if (existingOctopusUser.IsService)
                {
                    Console.WriteLine(string.Format("{0} is a service account, skipping ...", userToUpdate.OctopusUserName));
                    continue;
                }

                // Check to see if user is active
                if (!existingOctopusUser.IsActive)
                {
                    Console.WriteLine(string.Format("{0} is not an active account, skipping ...", userToUpdate.OctopusUserName));
                    continue;
                }

                // Get existing Microsoft Entra ID identity, if it exists
                var azureAdIdentity = existingOctopusUser.Identities.FirstOrDefault(i => i.IdentityProviderName == "Azure AD");

                // Check to see if something was returned
                if (null != azureAdIdentity)
                {
                    // Check to see if force update was set
                    if (Force)
                    {
                        Console.WriteLine(string.Format("Force set to true, replacing existing entries for {0}", userToUpdate.OctopusUserName));
                        azureAdIdentity.Claims["email"].Value = userToUpdate.AzureEmailAddress;
                        azureAdIdentity.Claims["dn"].Value = userToUpdate.AzureDisplayName;
                    }
                }
                else
                {
                    Console.WriteLine(string.Format("No existing AzureAD login found for user {0}", userToUpdate.OctopusUserName));

                    // Create new octopus objects
                    var newAzureIdentity = new Octopus.Client.Model.IdentityResource();
                    newAzureIdentity.IdentityProviderName = "Azure AD";

                    var newEmailClaim = new Octopus.Client.Model.IdentityClaimResource();
                    newEmailClaim.IsIdentifyingClaim = true;
                    newEmailClaim.Value = userToUpdate.AzureEmailAddress;
                    newAzureIdentity.Claims.Add("email", newEmailClaim);

                    var newDisplayNameClaim = new Octopus.Client.Model.IdentityClaimResource();
                    newDisplayNameClaim.IsIdentifyingClaim = false;
                    newDisplayNameClaim.Value = userToUpdate.AzureDisplayName;
                    newAzureIdentity.Claims.Add("dn", newDisplayNameClaim);

                    // Add identity object to user
                    var identityCollection = new System.Collections.Generic.List<Octopus.Client.Model.IdentityResource>(existingOctopusUser.Identities);
                    identityCollection.Add(newAzureIdentity);
                    existingOctopusUser.Identities = identityCollection.ToArray();
                }

                if (UpdateOctopusDisplayName && !string.IsNullOrWhiteSpace(userToUpdate.AzureDisplayName))
                {
                    Console.WriteLine(string.Format("Setting Octopus Display Name to: {0}", userToUpdate.AzureDisplayName));
                    existingOctopusUser.DisplayName = userToUpdate.AzureDisplayName;
                }

                if (UpdateOctopusEmail && !string.IsNullOrWhiteSpace(userToUpdate.AzureEmailAddress))
                {
                    Console.WriteLine(string.Format("Setting Octopus Email Address to: {0}", userToUpdate.AzureEmailAddress));
                    existingOctopusUser.EmailAddress = userToUpdate.AzureEmailAddress;
                }

                if (WhatIf)
                {
                    Console.WriteLine(string.Format("WhatIf is set to true, skipping update of user: {0}", userToUpdate.OctopusUserName));
                    Console.WriteLine(existingOctopusUser);
                }
                else
                {
                    // Update account
                    Console.WriteLine(string.Format("Updating: {0}", userToUpdate.OctopusUserName));
                    client.Repository.Users.Modify(existingOctopusUser);
                }
            }
        }
    }
}
```
Python3

```python
import json
import requests
import csv

# Define class
class userToUpdate:
    OctopusUsername = ''
    AzureEmailAddress = ''
    AzureDisplayName = ''

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'

# Create function
def AddAzureLogins(OctopusUrl, OctopusAPIKey, Path='', OctopusUsername='', AzureEmailAddress='', AzureDisplayName='', UpdateOctopusEmailAddress=False, UpdateOctopusDisplayName=False, Force=False, WhatIf=False):
    # Display values passed into function
    print('OctopusURL: ', OctopusUrl)
    print('OctopusAPIKey: ', '*******')
    print('Path: ', Path)
    print('OctopusUsername: ', OctopusUsername)
    print('AzureEmailAddress: ', AzureEmailAddress)
    print('AzureDisplayName: ', AzureDisplayName)
    print('UpdateOctopusEmailAddress: ', UpdateOctopusEmailAddress)
    print('UpdateOctopusDisplayName: ', UpdateOctopusDisplayName)
    print('Force: ', Force)
    print('WhatIf: ', WhatIf)

    headers = {'X-Octopus-ApiKey': OctopusAPIKey}
    usersToUpdate = []

    if Path:
        # Read users to update from csv: DisplayName, EmailAddress, OctopusUsername
        with open(Path) as csv_file:
            csv_reader = csv.reader(csv_file, delimiter=',')
            for row in csv_reader:
                updateUser = userToUpdate()
                updateUser.AzureDisplayName = row[0]
                updateUser.AzureEmailAddress = row[1]
                updateUser.OctopusUsername = row[2]
                usersToUpdate.append(updateUser)
    else:
        updateUser = userToUpdate()
        updateUser.AzureDisplayName = AzureDisplayName
        updateUser.AzureEmailAddress = AzureEmailAddress
        updateUser.OctopusUsername = OctopusUsername
        usersToUpdate.append(updateUser)

    # Gather users from instance
    existingUsers = []
    uri = '{0}/api/users'.format(OctopusUrl)
    response = requests.get(uri, headers=headers)
    response.raise_for_status()

    # Decode content
    results = json.loads(response.content.decode('utf-8'))
    existingUsers += results['Items']

    # Loop through remaining pages, skipping past the users already collected
    while 'Page.Next' in results['Links']:
        uri = '{0}/api/users?skip={1}'.format(OctopusUrl, len(existingUsers))
        response = requests.get(uri, headers=headers)
        response.raise_for_status()

        # Decode content
        results = json.loads(response.content.decode('utf-8'))
        existingUsers += results['Items']

    for user in usersToUpdate:
        # Search for user
        existingUser = next((u for u in existingUsers if u['Username'] == user.OctopusUsername), None)
        if existingUser is not None:
            # Check to see if user is a service account
            if existingUser['IsService']:
                print(f'User {user.OctopusUsername} is a service account, skipping ...')
                continue
            if not existingUser['IsActive']:
                print(f'User {user.OctopusUsername} is inactive, skipping ...')
                continue
            if existingUser['Identities'] is not None:
                azureAdIdentity = next((i for i in existingUser['Identities'] if i['IdentityProviderName'] == 'Azure AD'), None)
                if azureAdIdentity is not None:
                    print(f'Found existing Microsoft Entra ID identity for {user.OctopusUsername} ...')
                    if Force:
                        print('Force is set to true, overwriting values')
                        azureAdIdentity['Claims']['email']['Value'] = user.AzureEmailAddress
                        azureAdIdentity['Claims']['dn']['Value'] = user.AzureDisplayName
                    else:
                        print('Force is set to false, skipping ...')
                        continue
                else:
                    # Create new identity
                    newIdentity = {
                        'IdentityProviderName': 'Azure AD',
                        'Claims': {
                            'email': {
                                'Value': user.AzureEmailAddress,
                                'IsIdentifyingClaim': True
                            },
                            'dn': {
                                'Value': user.AzureDisplayName,
                                'IsIdentifyingClaim': False
                            }
                        }
                    }
                    existingUser['Identities'].append(newIdentity)

            if UpdateOctopusEmailAddress:
                existingUser['EmailAddress'] = user.AzureEmailAddress
            if UpdateOctopusDisplayName:
                existingUser['DisplayName'] = user.AzureDisplayName

            if WhatIf:
                print(f'WhatIf is set to true, skipping update of user {user.OctopusUsername}')
                continue

            # Update the user account
            uri = '{0}/api/users/{1}'.format(OctopusUrl, existingUser['Id'])
            response = requests.put(uri, headers=headers, json=existingUser)
            response.raise_for_status()
    return

AddAzureLogins(octopus_server_uri, octopus_api_key, OctopusUsername='some.email@microsoft.com', AzureDisplayName='DisplayName', AzureEmailAddress='some.email@microsoft.com', Force=True)
```
Go

```go
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"net/url"
	"os"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

type User struct {
	OctopusUsername   string
	AzureEmailAddress string
	AzureDisplayName  string
}

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	Path := ""
	Users := []User{}
	OctopusUsername := ""
	AzureEmailAddress := ""
	AzureDisplayName := ""
	OverwriteEmailAddress := false
	OverwriteDisplayName := false

	if Path != "" {
		Users = GetCSVData(Path)
	} else {
		u := User{OctopusUsername: OctopusUsername, AzureEmailAddress: AzureEmailAddress, AzureDisplayName: AzureDisplayName}
		Users = append(Users, u)
	}

	for i := 0; i < len(Users); i++ {
		// Get existing user account
		existingUser := GetUser(apiURL, APIKey, Users[i].OctopusUsername)

		// Check to see if something was returned
		if existingUser != nil {
			fmt.Println("Found " + existingUser.Username)

			// Check to see if it has an identity
			if existingUser.Identities != nil {
				identityIndex := -1

				// Loop through Identities collection
				for j := 0; j < len(existingUser.Identities); j++ {
					if existingUser.Identities[j].IdentityProviderName == "Azure AD" {
						fmt.Println("User has existing Microsoft Entra ID identity")
						identityIndex = j
						break
					}
				}

				if identityIndex > -1 {
					if OverwriteDisplayName {
						existingUser.DisplayName = Users[i].AzureDisplayName
					}
					if OverwriteEmailAddress {
						existingUser.EmailAddress = Users[i].AzureEmailAddress
					}
				} else {
					// Create new identity object
					claimsCollection := make(map[string]octopusdeploy.IdentityClaim)
					emailClaim := octopusdeploy.IdentityClaim{Value: Users[i].AzureEmailAddress, IsIdentifyingClaim: true}
					displayNameClaim := octopusdeploy.IdentityClaim{Value: Users[i].AzureDisplayName, IsIdentifyingClaim: false}
					claimsCollection["email"] = emailClaim
					claimsCollection["dn"] = displayNameClaim
					octopusIdentity := octopusdeploy.Identity{IdentityProviderName: "Azure AD", Claims: claimsCollection}

					// Add new identity
					existingUser.Identities = append(existingUser.Identities, octopusIdentity)
				}

				// Update user account
				client := octopusAuth(apiURL, APIKey, "")
				existingUser, err = client.Users.Update(existingUser)
				if err != nil {
					log.Println(err)
				}
			}
		}
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetUser(octopusURL *url.URL, APIKey string, OctopusUserName string) *octopusdeploy.User {
	// Get client
	client := octopusAuth(octopusURL, APIKey, "")

	// Get user account
	userQuery := octopusdeploy.UsersQuery{
		Filter: OctopusUserName,
	}
	userAccounts, err := client.Users.Get(userQuery)
	if err != nil {
		log.Println(err)
	}
	for i := 0; i < len(userAccounts.Items); i++ {
		// Check to see if it's a match
		if userAccounts.Items[i].Username == OctopusUserName {
			return userAccounts.Items[i]
		}
	}
	return nil
}

func GetCSVData(Path string) []User {
	recordFile, err := os.Open(Path)
	if err != nil {
		log.Println(err)
	}
	Users := []User{}
	reader := csv.NewReader(recordFile)
	reader.Comma = ','
	for {
		record, err := reader.Read()
		if err == io.EOF {
			break
		}
		userAccount := User{OctopusUsername: record[0], AzureEmailAddress: record[1], AzureDisplayName: record[2]}
		Users = append(Users, userAccount)
	}
	return Users
}
```
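The CSV-driven variants above each read a plain comma-separated file with one user per row and no header line. Note that the expected column order differs by implementation: the C# and Python scripts read `DisplayName,EmailAddress,OctopusUsername`, while the Go script reads `OctopusUsername,EmailAddress,DisplayName`. A file for the C# or Python scripts might look like this (hypothetical example values):

```text
Jane Doe,jane.doe@example.com,jane.doe
John Smith,john.smith@example.com,john.smith
```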
# Add domain teams

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/add-domain-teams.md

This script demonstrates how to programmatically add Active Directory groups from a new domain to existing Octopus teams. This can be useful when you are migrating from one domain to another.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Maximum number of records to update
- Name of the new domain to use

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop"

$octopusURL = "https://your-octopus-url.com" # Replace with your instance URL
$octopusAPIKey = "API-YOUR-KEY" # Replace with a service account API Key
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

$maxRecordsToUpdate = 2 # The max number of records you want to update in this batch
$newDomainToLookup = "Work" # Change this to the new domain

$skipIndex = 0
$recordsToBringBack = 30
$recordsUpdated = 0

while (1 -eq 1) # Continue until we reach the end of the team list or until we go over the max records to update
{
    Write-Host "Pulling teams starting at index $skipIndex and getting a max of $recordsToBringBack records back"
    $teamList = Invoke-RestMethod -Method GET -Uri "$octopusURL/api/teams?skip=$skipIndex&take=$recordsToBringBack" -Headers $header

    # Update to pull back the next batch of teams
    $skipIndex = $skipIndex + $recordsToBringBack

    if ($teamList.Items.Count -eq 0)
    {
        break
    }

    foreach ($team in $teamList.Items)
    {
        if ($team.ExternalSecurityGroups.Count -eq 0)
        {
            # Skip teams which don't have an external AD group
            continue
        }

        Write-Host "Checking to see if $($team.Name) is tied to an external active directory team."
        $activeDirectoryRecordsToAdd = @()

        foreach ($externalSecurityGroup in $team.ExternalSecurityGroups)
        {
            $externalName = $externalSecurityGroup.DisplayName
            if ($null -eq $externalName)
            {
                continue
            }

            $teamNameToFind = "$newDomainToLookup\$externalName"
            $directoryServicesResults = Invoke-RestMethod -Method GET -Uri "$octopusURL/api/externalgroups/directoryServices?partialName=$([System.Web.HTTPUtility]::UrlEncode($teamNameToFind))" -Headers $header

            foreach ($result in $directoryServicesResults)
            {
                if ($result.DisplayName -eq $externalName)
                {
                    Write-Host "Found a matching team name, checking if the SID is already assigned to the team"
                    $foundMatch = $false
                    foreach ($group in $team.ExternalSecurityGroups)
                    {
                        if ($group.Id -eq $result.Id)
                        {
                            $foundMatch = $true
                            break
                        }
                    }

                    if ($foundMatch -eq $false)
                    {
                        $activeDirectoryRecordsToAdd += $result
                    }
                    else
                    {
                        Write-Host "The active directory group already existed on the team"
                    }
                    break
                }
            }
        }

        if ($activeDirectoryRecordsToAdd.Length -gt 0)
        {
            foreach ($teamToAdd in $activeDirectoryRecordsToAdd)
            {
                $team.ExternalSecurityGroups += $teamToAdd
            }

            Write-Host "Updating the team $($team.Name) in Octopus Deploy"
            Invoke-RestMethod -Method PUT -Uri "$octopusURL/api/teams/$($team.Id)" -Headers $header -Body $($team | ConvertTo-Json -Depth 10)
            $recordsUpdated += 1
        }
    }

    if ($recordsUpdated -ge $maxRecordsToUpdate)
    {
        Write-Host "Reached the maximum number of records to update, stopping"
        break
    }
}
```
# Add an environment to a team

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/add-environment-to-team.md

This script demonstrates how to programmatically add an environment to a user role for a team.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the space to work with
- Name of the team
- Name of the user role
- Array of environment names

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "default"
$teamName = "MyTeam"
$userRoleName = "Deployment creator"
$environmentNames = @("Development", "Staging")
$environmentIds = @()

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get team
$team = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/teams/all" -Headers $header) | Where-Object {$_.Name -eq $teamName}

# Get user role
$userRole = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/userroles/all" -Headers $header) | Where-Object {$_.Name -eq $userRoleName}

# Get scoped user role reference
$scopedUserRole = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/teams/$($team.Id)/scopeduserroles" -Headers $header).Items | Where-Object {$_.UserRoleId -eq $userRole.Id}

# Get environments
$environments = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/environments/all" -Headers $header) | Where-Object {$environmentNames -contains $_.Name}
foreach ($environment in $environments)
{
    $environmentIds += $environment.Id
}

# Update the scoped user role
$scopedUserRole.EnvironmentIds += $environmentIds
Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/scopeduserroles/$($scopedUserRole.Id)" -Headers $header -Body ($scopedUserRole | ConvertTo-Json -Depth 10)
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$environmentNames = @("Test", "Production")
$teamName = "MyTeam"
$userRoleName = "Deployment creator"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

$environmentIds = @()

try
{
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get team
    $team = $repositoryForSpace.Teams.FindByName($teamName)

    # Get user role
    $userRole = $repositoryForSpace.UserRoles.FindByName($userRoleName)

    # Get scoped user role
    $scopedUserRole = $repositoryForSpace.Teams.GetScopedUserRoles($team) | Where-Object {$_.UserRoleId -eq $userRole.Id}

    # Get environments
    $environments = $repositoryForSpace.Environments.GetAll() | Where-Object {$environmentNames -contains $_.Name}
    foreach ($environment in $environments)
    {
        # Add Id
        $scopedUserRole.EnvironmentIds.Add($environment.Id)
    }

    # Update the scoped user role object
    $repositoryForSpace.ScopedUserRoles.Modify($scopedUserRole)
}
catch
{
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string spaceName = "default";
string[] environmentNames = { "Development", "Production" };
string teamName = "MyTeam";
string userRoleName = "Deployment creator";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get team
    var team = repositoryForSpace.Teams.FindByName(teamName);

    // Get user role
    var userRole = repositoryForSpace.UserRoles.FindByName(userRoleName);

    // Get scoped user role
    var scopedUserRole = repositoryForSpace.Teams.GetScopedUserRoles(team).FirstOrDefault(s => s.UserRoleId == userRole.Id);

    // Get environment ids
    foreach (var environmentName in environmentNames)
    {
        scopedUserRole.EnvironmentIds.Add(repositoryForSpace.Environments.FindByName(environmentName).Id);
    }

    // Update scoped user role
    repositoryForSpace.ScopedUserRoles.Modify(scopedUserRole);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url/api'  # Include the /api suffix
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = 'Default'
team_name = 'MyTeam'
user_role_name = 'MyRole'
environment_names = ['List', 'of', 'environment names']

# Get space
uri = '{0}/spaces/all'.format(octopus_server_uri)
response = requests.get(uri, headers=headers)
response.raise_for_status()
spaces = json.loads(response.content.decode('utf-8'))
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get team
uri = '{0}/{1}/teams'.format(octopus_server_uri, space['Id'])
response = requests.get(uri, headers=headers)
response.raise_for_status()
teams = json.loads(response.content.decode('utf-8'))
team = next((x for x in teams['Items'] if x['Name'] == team_name), None)

# Get user role
uri = '{0}/userroles'.format(octopus_server_uri)
response = requests.get(uri, headers=headers)
response.raise_for_status()
user_roles = json.loads(response.content.decode('utf-8'))
user_role = next((x for x in user_roles['Items'] if x['Name'] == user_role_name), None)

# Get scoped user role
uri = '{0}/{1}/teams/{2}/scopeduserroles'.format(octopus_server_uri, space['Id'], team['Id'])
response = requests.get(uri, headers=headers)
response.raise_for_status()
scoped_user_roles = json.loads(response.content.decode('utf-8'))
scoped_user_role = next((x for x in scoped_user_roles['Items'] if x['UserRoleId'] == user_role['Id']), None)

# Get environments
uri = '{0}/{1}/environments'.format(octopus_server_uri, space['Id'])
response = requests.get(uri, headers=headers)
response.raise_for_status()
environments = json.loads(response.content.decode('utf-8'))

# Loop through environment names
for environment_name in environment_names:
    environment = next((x for x in environments['Items'] if x['Name'] == environment_name), None)
    scoped_user_role['EnvironmentIds'].append(environment['Id'])

# Update the scoped user role
uri = '{0}/{1}/scopeduserroles/{2}'.format(octopus_server_uri, space['Id'], scoped_user_role['Id'])
response = requests.put(uri, headers=headers, json=scoped_user_role)
response.raise_for_status()
```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	environmentNames := []string{"Development", "Production"}
	teamName := "MyTeam"
	userRoleName := "MyRole"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Get reference to team
	team := GetTeam(apiURL, APIKey, space, teamName, 0)

	// Get reference to user role
	userRole := GetRole(apiURL, APIKey, space, userRoleName)

	// Get scoped user role
	scopedUserRole := GetScopedUserRole(apiURL, APIKey, space, userRole, team)

	// Get references to environments
	for i := 0; i < len(environmentNames); i++ {
		environment := GetEnvironment(apiURL, APIKey, space, environmentNames[i])
		scopedUserRole.EnvironmentIDs = append(scopedUserRole.EnvironmentIDs, environment.ID)
	}

	// Update scoped user role
	client := octopusAuth(apiURL, APIKey, space.ID)
	client.ScopedUserRoles.Update(scopedUserRole)
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}
	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}

func GetEnvironment(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, environmentName string) *octopusdeploy.Environment {
	// Get client for space
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Get environment
	environmentsQuery := octopusdeploy.EnvironmentsQuery{
		Name: environmentName,
	}
	environments, err := client.Environments.Get(environmentsQuery)
	if err != nil {
		log.Println(err)
	}

	// Loop through results
	for _, environment := range environments.Items {
		if environment.Name == environmentName {
			return environment
		}
	}
	return nil
}

func GetTeam(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, TeamName string, skip int) *octopusdeploy.Team {
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Create query
	teamsQuery := octopusdeploy.TeamsQuery{
		PartialName: TeamName,
		Spaces:      []string{space.ID},
	}

	// Query for team
	teams, err := client.Teams.Get(teamsQuery)
	if err != nil {
		log.Println(err)
	}

	if len(teams.Items) == teams.ItemsPerPage {
		// Call again to get the next page of results
		team := GetTeam(octopusURL, APIKey, space, TeamName, skip+len(teams.Items))
		if team != nil {
			return team
		}
	} else {
		// Loop through returned items
		for _, team := range teams.Items {
			if team.Name == TeamName {
				return team
			}
		}
	}
	return nil
}

func GetRole(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, RoleName string) *octopusdeploy.UserRole {
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Get user role
	userRoleQuery := octopusdeploy.UserRolesQuery{
		PartialName: RoleName,
	}
	userRoles, err := client.UserRoles.Get(userRoleQuery)
	if err != nil {
		log.Println(err)
	}
	for i := 0; i < len(userRoles.Items); i++ {
		if userRoles.Items[i].Name == RoleName {
			fmt.Println("Retrieved UserRole " + userRoles.Items[i].Name)
			return userRoles.Items[i]
		}
	}
	return nil
}

func GetScopedUserRole(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, userRole *octopusdeploy.UserRole, team *octopusdeploy.Team) *octopusdeploy.ScopedUserRole {
	client := octopusAuth(octopusURL, APIKey, space.ID)

	/* There is a bug currently where the Get() method doesn't take the query as a parameter;
	   once that has been fixed, this block will work */
	//scopedUserRoleQuery := octopusdeploy.ScopedUserRolesQuery{
	//	PartialName: userRole.Name,
	//}

	// Get scoped user roles
	scopedUserRoles, err := client.ScopedUserRoles.Get()
	if err != nil {
		log.Println(err)
	}

	// Loop through results to find the correct one
	for i := 0; i < len(scopedUserRoles.Items); i++ {
		if scopedUserRoles.Items[i].UserRoleID == userRole.ID {
			return scopedUserRoles.Items[i]
		}
	}
	return nil
}

func contains(s []string, str string) bool {
	for _, v := range s {
		if v == str {
			return true
		}
	}
	return false
}
```
# Change users domain

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/change-users-domain.md

This script demonstrates how to programmatically change an Octopus user's Active Directory domain assignment. This can be useful when you are migrating from one domain to another.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Maximum number of records to update
- Name of the old domain to search for
- Name of the new domain to use in place of the old domain

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop"

$octopusURL = "https://your-octopus-url.com" # Replace with your instance URL
$octopusAPIKey = "API-YOUR-KEY" # Replace with a service account API Key
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

$maxRecordsToUpdate = 2 # The max number of records you want to update in this batch
$oldDomainToLookFor = "Home" # Change this to the old domain
$newDomainToLookup = "Work" # Change this to the new domain

$skipIndex = 0
$recordsToBringBack = 30
$recordsUpdated = 0

while (1 -eq 1) # Continue until we reach the end of the user list or until we go over the max records to update
{
    Write-Host "Pulling users starting at index $skipIndex and getting a max of $recordsToBringBack records back"
    $userList = Invoke-RestMethod -Method GET -Uri "$octopusURL/api/users?skip=$skipIndex&take=$recordsToBringBack" -Headers $header

    # Update to pull back the next batch of users
    $skipIndex = $skipIndex + $recordsToBringBack

    if ($userList.Items.Count -eq 0)
    {
        break
    }

    foreach ($user in $userList.Items)
    {
        if ($user.IsService -eq $true -or $user.Identities.Count -eq 0)
        {
            # Skip Octopus Deploy Service Accounts or users not tied to an active directory account
            continue
        }

        Write-Host "Checking to see if $($user.UserName) has an active directory account."
        $replaceActiveDirectoryRecord = $false

        for ($i = 0; $i -lt $user.Identities.Count; $i++)
        {
            if ($user.Identities[$i].IdentityProviderName -ne "Active Directory")
            {
                # We only care about active directory identities
                continue
            }

            Write-Host "$($user.UserName) has an active directory account, pulling out the domain name."
            $claimList = $user.Identities[$i].Claims | Get-Member | Where-Object {$_.MemberType -eq "NoteProperty"} | Select-Object -Property "Name"

            foreach ($claimName in $claimList)
            {
                $nameValue = $claimName.Name
                $claim = $user.Identities[$i].Claims.$nameValue

                if ($claim.Value.ToLower().Contains($oldDomainToLookFor.ToLower()))
                {
                    Write-Host "The claim $nameValue for $($user.UserName) has the value $($claim.Value) which matches $oldDomainToLookFor. Updating this account."
                    ## This would be a good place to add additional AD lookup logic
                    $replaceActiveDirectoryRecord = $true
                    break
                }
            }

            if ($replaceActiveDirectoryRecord -eq $true)
            {
                break
            }
        }

        if ($replaceActiveDirectoryRecord -eq $true)
        {
            # This user record needs to be updated, clone the user object so we can manipulate it (and so we have the original)
            $userRecordToUpdate = $user | ConvertTo-Json -Depth 10 | ConvertFrom-Json

            # Grab any identities that are not active directory
            $filteredOldRecords = $user.Identities | Where-Object {$_.IdentityProviderName -ne "Active Directory"}
            if ($null -ne $filteredOldRecords)
            {
                $userRecordToUpdate.Identities = @($filteredOldRecords)
            }
            else
            {
                $userRecordToUpdate.Identities = @()
            }

            # Now let's find the new domain account
            $userNameToLookUp = "$newDomainToLookup\$($userRecordToUpdate.Username)"
            $expectedMatch = "$($userRecordToUpdate.Username)@$newDomainToLookup.local"
            $foundUser = $false

            Write-Host "Looking up the new domain account $userNameToLookUp in Octopus Deploy"
            $directoryServicesResults = Invoke-RestMethod -Method GET -Uri "$octopusURL/api/externalusers/directoryServices?partialName=$([System.Web.HTTPUtility]::UrlEncode($userNameToLookUp))" -Headers $header

            foreach ($identity in $directoryServicesResults.Identities)
            {
                if ($identity.IdentityProviderName -eq "Active Directory")
                {
                    $claimList = $identity.Claims | Get-Member | Where-Object {$_.MemberType -eq "NoteProperty"} | Select-Object -Property "Name"

                    foreach ($claimName in $claimList)
                    {
                        $nameValue = $claimName.Name
                        $claim = $identity.Claims.$nameValue

                        if ($claim.Value.ToLower() -eq $expectedMatch.ToLower() -and $claim.IsIdentifyingClaim -eq $true)
                        {
                            Write-Host "Found the user's new domain record, adding that to Octopus Deploy"
                            $userRecordToUpdate.Identities += $identity
                            $foundUser = $true
                            break
                        }
                    }
                }

                if ($foundUser)
                {
                    break
                }
            }

            if ($foundUser -eq $true)
            {
                Write-Host "Updating the user $($userRecordToUpdate.UserName) in Octopus Deploy"
                Invoke-RestMethod -Method PUT -Uri "$octopusURL/api/users/$($userRecordToUpdate.Id)" -Headers $header -Body $($userRecordToUpdate | ConvertTo-Json -Depth 10)
                $recordsUpdated += 1
            }
        }
    }

    if ($recordsUpdated -ge $maxRecordsToUpdate)
    {
        Write-Host "Reached the maximum number of records to update, stopping"
        break
    }
}
```
# Create an API Key

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/create-api-key.md

This script demonstrates how to programmatically create a new API Key.

:::div{.warning}
**Note:** You can only create a new API Key for your own user account. You will also need an existing API Key to authenticate with the Octopus REST API, created from the [Octopus Web Portal](/docs/octopus-rest-api/how-to-create-an-api-key).
:::

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the user to create the API Key for
- Description of the API Key's purpose

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

# UserName of the user for which the API key will be created. You can check this value from the web portal under Configuration/Users
$UserName = ""

# Purpose of the API Key. This field is mandatory.
$APIKeyPurpose = ""

# Create payload
$body = @{
    Purpose = $APIKeyPurpose
} | ConvertTo-Json

# Getting all users to filter target user by name
$allUsers = (Invoke-WebRequest "$octopusURL/api/users/all" -Headers $header -Method Get).content | ConvertFrom-Json

# Getting user that owns API Key.
$User = $allUsers | Where-Object { $_.username -eq $UserName }

# Creating API Key
$CreateAPIKeyResponse = (Invoke-WebRequest "$octopusURL/api/users/$($User.id)/apikeys" -Method Post -Headers $header -Body $body -Verbose).content | ConvertFrom-Json

# Printing new API Key
Write-Output "API Key created: $($CreateAPIKeyResponse.apikey)"
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "C:\octo\Octopus.Client.dll"

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"

# Purpose of the API Key. This field is mandatory.
$APIKeyPurpose = ""

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint

try
{
    # Get current user
    $User = $repository.Users.GetCurrent()

    # Create API Key for user
    $ApiKeyResponse = $repository.Users.CreateApiKey($User, $APIKeyPurpose)

    # Return the API Key
    Write-Output "API Key created: $($ApiKeyResponse.ApiKey)"
}
catch
{
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package of System.Security.Permissions
// Reference Octopus.Client
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string apiKeyPurpose = "Key used with C# application";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);

try
{
    // Get current user
    var user = repository.Users.GetCurrent();

    // Create API Key for user
    var apiKeyResponse = repository.Users.CreateApiKey(user, apiKeyPurpose);

    // Return the API Key
    Console.WriteLine("API Key created: {0}", apiKeyResponse.ApiKey);
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # Return collected results
    return items

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = 'Default'
user_name = 'MyUser'
purpose = 'Descriptive purpose'

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get user
uri = '{0}/api/users'.format(octopus_server_uri)
users = get_octopus_resource(uri, headers)
user = next((x for x in users if x['Username'] == user_name), None)

# Create API key
apiKey = {
    'Purpose': purpose
}
uri = '{0}/api/users/{1}/apikeys'.format(octopus_server_uri, user['Id'])
response = requests.post(uri, headers=headers, json=apiKey)
response.raise_for_status()
```
Go

```go
package main

import (
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"

	userName := "MyUser"
	purpose := "Descriptive purpose"

	// Create client object
	client := octopusAuth(apiURL, APIKey, "")

	// Get user
	user := GetUser(client, userName)

	// Create the API key for the user
	userApiKey := octopusdeploy.NewAPIKey(purpose, user.ID)
	userApiKey, err = client.APIKeys.Create(userApiKey)
	if err != nil {
		log.Println(err)
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetUser(client *octopusdeploy.Client, OctopusUserName string) *octopusdeploy.User {
	// Get user account
	userQuery := octopusdeploy.UsersQuery{
		Filter: OctopusUserName,
	}

	userAccounts, err := client.Users.Get(userQuery)
	if err != nil {
		log.Println(err)
	}

	for i := 0; i < len(userAccounts.Items); i++ {
		// Check to see if it's a match
		if userAccounts.Items[i].Username == OctopusUserName {
			return userAccounts.Items[i]
		}
	}

	return nil
}
```
Java

```java
import com.octopus.sdk.Repository;
import com.octopus.sdk.api.ApiKeyApi;
import com.octopus.sdk.domain.User;
import com.octopus.sdk.http.ConnectData;
import com.octopus.sdk.http.OctopusClient;
import com.octopus.sdk.http.OctopusClientFactory;
import com.octopus.sdk.model.apikey.ApiKeyCreatedResource;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.time.Clock;
import java.time.Duration;
import java.time.OffsetDateTime;
import java.time.ZoneId;

public class CreateApiKey {

  static final String octopusServerUrl = "http://localhost:8065";
  // as read from your profile in your Octopus Deploy server
  static final String apiKey = System.getenv("OCTOPUS_SERVER_API_KEY");

  public static void main(final String... args) throws IOException {
    final OctopusClient client = createClient();
    final Repository repo = new Repository(client);
    final User theUser = repo.users().getCurrentUser();
    final ApiKeyApi apiKeyApi = ApiKeyApi.create(client, theUser.getProperties());

    final ApiKeyCreatedResource createdKey =
        apiKeyApi.addApiKey(
            "For Use In testing",
            OffsetDateTime.now(Clock.system(ZoneId.systemDefault())).plus(Duration.ofDays(365)));

    // API keys should not be logged to output in production systems
    System.out.println("The Key is " + createdKey.getApiKey());
  }

  // Create an authenticated connection to your Octopus Deploy Server
  private static OctopusClient createClient() throws MalformedURLException {
    final Duration connectTimeout = Duration.ofSeconds(10L);
    final ConnectData connectData =
        new ConnectData(new URL(octopusServerUrl), apiKey, connectTimeout);
    return OctopusClientFactory.createClient(connectData);
  }
}
```
# Find teams with role

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/find-teams-with-role.md

This script demonstrates how to programmatically find all teams using a specific role.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to work with
- Name of the user role

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "default"
$userRoleName = "Deployment creator"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get user role
$userRole = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/userroles/all" -Headers $header) | Where-Object {$_.Name -eq $userRoleName}

# Get teams collection
$teams = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/teams/all" -Headers $header

# Loop through teams
$teamNames = @()
foreach ($team in $teams)
{
    # Get scoped roles for team
    $scopedUserRole = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/teams/$($team.Id)/scopeduserroles" -Headers $header).Items | Where-Object {$_.UserRoleId -eq $userRole.Id}

    # Check for null
    if ($null -ne $scopedUserRole)
    {
        # Add to teams
        $teamNames += $team.Name
    }
}

# Loop through results
Write-Host "The following teams are using role $($userRoleName):"
foreach ($teamName in $teamNames)
{
    Write-Host "$teamName"
}
```
PowerShell (Octopus.Client)

```powershell
# Load Octopus.Client assembly
Add-Type -Path "c:\octopus.client\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$userRoleName = "Deployment creator"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try
{
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get teams
    $teams = $repositoryForSpace.Teams.FindAll()

    # Get user role
    $userRole = $repositoryForSpace.UserRoles.FindByName($userRoleName)

    # Loop through teams
    $teamNames = @()
    foreach ($team in $teams)
    {
        # Get scoped user role
        $scopedUserRole = $repositoryForSpace.Teams.GetScopedUserRoles($team) | Where-Object {$_.UserRoleId -eq $userRole.Id}

        # Check for null
        if ($null -ne $scopedUserRole)
        {
            # Add to list
            $teamNames += $team.Name
        }
    }

    # Loop through results
    Write-Host "The following teams are using role $($userRoleName):"
    foreach ($teamName in $teamNames)
    {
        Write-Host "$teamName"
    }
}
catch
{
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Collections.Generic;
using System.Linq;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string spaceName = "default";
string userRoleName = "Deployment creator";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get teams
    var teams = repositoryForSpace.Teams.FindAll();

    // Get user role
    var userRole = repository.UserRoles.FindByName(userRoleName);

    // Loop through teams
    List<string> teamNames = new List<string>();
    foreach (var team in teams)
    {
        // Get scoped user roles
        var scopedUserRoles = repositoryForSpace.Teams.GetScopedUserRoles(team).Where(s => s.UserRoleId == userRole.Id);

        // Check for null
        if (scopedUserRoles != null && scopedUserRoles.Count() > 0)
        {
            // Add to teams
            teamNames.Add(team.Name);
        }
    }

    // Display which teams use the role
    Console.WriteLine(string.Format("The following teams are using role {0}", userRoleName));
    foreach (string teamName in teamNames)
    {
        Console.WriteLine(teamName);
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if hasattr(results, 'keys') and 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)

    else:
        return results

    # return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = "Default"
role_name = "Project deployer"

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get user role
uri = '{0}/api/userroles'.format(octopus_server_uri)
user_roles = get_octopus_resource(uri, headers)
user_role = next((x for x in user_roles if x['Name'] == role_name), None)

# Get teams
uri = '{0}/api/{1}/teams'.format(octopus_server_uri, space['Id'])
teams = get_octopus_resource(uri, headers)

teams_with_role = []

# Loop through teams
for team in teams:
    # Get the scoped user roles
    uri = '{0}/api/{1}/teams/{2}/scopeduserroles'.format(octopus_server_uri, space['Id'], team['Id'])
    scoped_user_roles = get_octopus_resource(uri, headers)

    for role in scoped_user_roles:
        if role['UserRoleId'] == user_role['Id']:
            teams_with_role.append(team)

print("The following teams are using role {0}".format(user_role['Name']))
for team in teams_with_role:
    print(team['Name'])
```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	userRoleName := "Project deployer"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get teams
	teams, err := client.Teams.GetAll()
	if err != nil {
		log.Println(err)
	}

	// Get user role
	userRole := GetUserRole(client, userRoleName)

	teamsUsingRole := []*octopusdeploy.Team{}

	// Loop through teams
	for _, team := range teams {
		// Get scoped user roles for team
		scopedUserRoles, err := client.Teams.GetScopedUserRolesByID(team.ID)
		if err != nil {
			log.Println(err)
		}

		for _, scopedUserRole := range scopedUserRoles.Items {
			if scopedUserRole.UserRoleID == userRole.ID {
				teamsUsingRole = append(teamsUsingRole, team)
				break
			}
		}
	}

	fmt.Println("The following teams are using the role " + userRole.Name)
	for _, team := range teamsUsingRole {
		fmt.Println(team.Name)
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}

func GetUserRole(client *octopusdeploy.Client, userRoleName string) *octopusdeploy.UserRole {
	// Get all roles
	userRoles, err := client.UserRoles.GetAll()
	if err != nil {
		log.Println(err)
	}

	for _, userRole := range userRoles {
		if userRole.Name == userRoleName {
			return userRole
		}
	}

	return nil
}
```
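Stripped of client-library plumbing, every variant above performs the same membership test: collect each team's scoped user roles, then keep the teams whose roles reference the target role ID. A minimal sketch of just that core lookup (the function name and example data are illustrative, not part of the Octopus API):

```python
def teams_using_role(scoped_roles_by_team, role_id):
    """Given {team name: list of scoped user role dicts}, return the names
    of teams that have at least one scoped role referencing role_id."""
    return [team for team, roles in scoped_roles_by_team.items()
            if any(role['UserRoleId'] == role_id for role in roles)]

# Shape of the 'Items' returned by GET /api/{space}/teams/{id}/scopeduserroles
example = {
    'Octopus Administrators': [{'UserRoleId': 'userroles-systemadministrator'}],
    'Deployers': [{'UserRoleId': 'userroles-deploymentcreator'}],
}
print(teams_using_role(example, 'userroles-deploymentcreator'))  # → ['Deployers']
```

The scripts above fetch `scoped_roles_by_team` one HTTP call per team; the filtering itself is this one predicate.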
# List users

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/list-users.md

This script will list all active users in an Octopus instance. In addition, there are a number of optional items you can include:

- scoped user roles
- any associated [Active Directory](/docs/security/authentication/active-directory) details
- any associated [Azure Active Directory](/docs/security/authentication/azure-ad-authentication) details
- inactive users

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- (Optional) whether or not to include user role details
- (Optional) whether or not to include Active Directory details
- (Optional) whether or not to include Azure Active Directory details
- (Optional) whether or not to include disabled users
- (Optional) path to export the results to a csv file

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

# Optional: include user role details?
$includeUserRoles = $False
# Optional: include non-active users in output
$includeNonActiveUsers = $False
# Optional: include AD details
$includeActiveDirectoryDetails = $False
# Optional: include AAD details
$includeAzureActiveDirectoryDetails = $False
# Optional: set a path to export to csv
$csvExportPath = ""

$users = @()
$usersList = @()

$response = $null
do {
    $uri = if ($response) { $octopusURL + $response.Links.'Page.Next' } else { "$octopusURL/api/users" }
    $response = Invoke-RestMethod -Method Get -Uri $uri -Headers $header
    $usersList += $response.Items
} while ($response.Links.'Page.Next')

# Filter non-active users
if ($includeNonActiveUsers -eq $False)
{
    Write-Host "Filtering users who aren't active from results"
    $usersList = $usersList | Where-Object {$_.IsActive -eq $True}
}

# If we are including user roles, we need to get team details
if ($includeUserRoles -eq $True)
{
    $teams = @()
    $response = $null
    do {
        $uri = if ($response) { $octopusURL + $response.Links.'Page.Next' } else { "$octopusURL/api/teams" }
        $response = Invoke-RestMethod -Method Get -Uri $uri -Headers $header
        $teams += $response.Items
    } while ($response.Links.'Page.Next')

    foreach ($team in $teams)
    {
        $scopedUserRoles = Invoke-RestMethod -Method Get -Uri ("$octopusURL/api/teams/$($team.Id)/scopeduserroles") -Headers $header
        $team | Add-Member -MemberType NoteProperty -Name "ScopedUserRoles" -Value $scopedUserRoles.Items
    }

    $allUserRoles = @()
    $response = $null
    do {
        $uri = if ($response) { $octopusURL + $response.Links.'Page.Next' } else { "$octopusURL/api/userroles" }
        $response = Invoke-RestMethod -Method Get -Uri $uri -Headers $header
        $allUserRoles += $response.Items
    } while ($response.Links.'Page.Next')

    $spaces = @()
    $response = $null
    do {
        $uri = if ($response) { $octopusURL + $response.Links.'Page.Next' } else { "$octopusURL/api/spaces" }
        $response = Invoke-RestMethod -Method Get -Uri $uri -Headers $header
        $spaces += $response.Items
    } while ($response.Links.'Page.Next')
}

foreach ($userRecord in $usersList)
{
    $usersRoles = @()
    $user = [PSCustomObject]@{
        Id = $userRecord.Id
        Username = $userRecord.Username
        DisplayName = $userRecord.DisplayName
        IsActive = $userRecord.IsActive
        IsService = $userRecord.IsService
        EmailAddress = $userRecord.EmailAddress
    }

    if ($includeActiveDirectoryDetails -eq $True)
    {
        $user | Add-Member -MemberType NoteProperty -Name "AD_Upn" -Value $null
        $user | Add-Member -MemberType NoteProperty -Name "AD_Sam" -Value $null
        $user | Add-Member -MemberType NoteProperty -Name "AD_Email" -Value $null
    }
    if ($includeAzureActiveDirectoryDetails -eq $True)
    {
        $user | Add-Member -MemberType NoteProperty -Name "AAD_DN" -Value $null
        $user | Add-Member -MemberType NoteProperty -Name "AAD_Email" -Value $null
    }

    if ($includeUserRoles -eq $True)
    {
        $usersTeams = $teams | Where-Object {$_.MemberUserIds -icontains $user.Id}
        foreach ($userTeam in $usersTeams)
        {
            $roles = $userTeam.ScopedUserRoles
            foreach ($role in $roles)
            {
                $userRole = $allUserRoles | Where-Object {$_.Id -eq $role.UserRoleId} | Select-Object -First 1
                $roleName = "$($userRole.Name)"
                $roleSpace = $spaces | Where-Object {$_.Id -eq $role.SpaceId} | Select-Object -First 1
                if (![string]::IsNullOrWhiteSpace($roleSpace))
                {
                    $roleName += " ($($roleSpace.Name))"
                }
                $usersRoles += $roleName
            }
        }
        $user | Add-Member -MemberType NoteProperty -Name "ScopedUserRoles" -Value ($usersRoles -Join "|")
    }

    if ($userRecord.Identities.Count -gt 0)
    {
        if ($includeActiveDirectoryDetails -eq $True)
        {
            $activeDirectoryIdentity = $userRecord.Identities | Where-Object {$_.IdentityProviderName -eq "Active Directory"} | Select-Object -ExpandProperty Claims
            if ($null -ne $activeDirectoryIdentity)
            {
                $user.AD_Upn = (($activeDirectoryIdentity | ForEach-Object {"$($_.upn.Value)"}) -Join "|")
                $user.AD_Sam = (($activeDirectoryIdentity | ForEach-Object {"$($_.sam.Value)"}) -Join "|")
                $user.AD_Email = (($activeDirectoryIdentity | ForEach-Object {"$($_.email.Value)"}) -Join "|")
            }
        }
        if ($includeAzureActiveDirectoryDetails -eq $True)
        {
            $azureAdIdentity = $userRecord.Identities | Where-Object {$_.IdentityProviderName -eq "Azure AD"} | Select-Object -ExpandProperty Claims
            if ($null -ne $azureAdIdentity)
            {
                $user.AAD_Dn = (($azureAdIdentity | ForEach-Object {"$($_.dn.Value)"}) -Join "|")
                $user.AAD_Email = (($azureAdIdentity | ForEach-Object {"$($_.email.Value)"}) -Join "|")
            }
        }
    }

    $users += $user
}

if (![string]::IsNullOrWhiteSpace($csvExportPath))
{
    Write-Host "Exporting results to CSV file: $csvExportPath"
    $users | Export-Csv -Path $csvExportPath -NoTypeInformation
}

$users | Format-Table
```
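The REST script above pages through `/api/users` by following each response's `Links.'Page.Next'` hypermedia link. That loop can be expressed generically; a sketch, where `fetch` stands in for any callable that GETs a URI and returns the parsed JSON page (the helper name is illustrative):

```python
def get_all_pages(fetch, first_uri):
    """Accumulate 'Items' across pages by following Links['Page.Next'],
    mirroring the do/while loop in the PowerShell script."""
    items, uri = [], first_uri
    while uri:
        page = fetch(uri)
        items += page['Items']
        uri = page.get('Links', {}).get('Page.Next')
    return items

# Two fake pages standing in for successive GET /api/users responses
pages = {
    '/api/users': {'Items': [1, 2], 'Links': {'Page.Next': '/api/users?skip=2'}},
    '/api/users?skip=2': {'Items': [3], 'Links': {}},
}
print(get_all_pages(pages.get, '/api/users'))  # → [1, 2, 3]
```

With a real server, `fetch` would be `lambda uri: requests.get(octopus_server_uri + uri, headers=headers).json()`.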
PowerShell (Octopus.Client)

```powershell
$ErrorActionPreference = "Stop";

# Load assembly
Add-Type -Path 'path:\to\Octopus.Client.dll'

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"

# Optional: include user role details?
$includeUserRoles = $true
# Optional: include non-active users in output
$includeNonActiveUsers = $False
# Optional: include AD details
$includeActiveDirectoryDetails = $False
# Optional: include AAD details
$includeAzureActiveDirectoryDetails = $True
# Optional: set a path to export to csv
$csvExportPath = "path:\to\users.csv"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
$client = New-Object Octopus.Client.OctopusClient($endpoint)

# Get users
$users = $repository.Users.GetAll()
$usersList = @()

# Check to see if we're filtering out inactive users
if ($includeNonActiveUsers -eq $false)
{
    # Filter out inactive users
    Write-Host "Filtering users who aren't active from results"
    $users = $users | Where-Object {$_.IsActive -eq $True}
}

# Loop through users
foreach ($user in $users)
{
    # Populate user details
    $userDetails = [ordered]@{
        Id = $user.Id
        Username = $user.Username
        DisplayName = $user.DisplayName
        IsActive = $user.IsActive
        IsService = $user.IsService
        EmailAddress = $user.EmailAddress
    }

    # Check to see if we're including user roles
    if ($includeUserRoles -eq $true)
    {
        $userDetails.Add("ScopedUserRoles", "")
        # Get the user's teams
        $userTeamNames = $repository.UserTeams.Get($user)

        # Loop through the user's teams
        foreach ($teamName in $userTeamNames)
        {
            # Get the team
            $team = $repository.Teams.Get($teamName.Id)
            foreach ($role in $repository.Teams.GetScopedUserRoles($team))
            {
                $userDetails["ScopedUserRoles"] += "$(($repository.UserRoles.Get($role.UserRoleId).Name)) ($(($repository.Spaces.Get($role.SpaceId)).Name))|"
            }
        }
    }

    if ($includeActiveDirectoryDetails -eq $true)
    {
        # Get the identity provider object
        $activeDirectoryIdentity = $user.Identities | Where-Object {$_.IdentityProviderName -eq "Active Directory"}
        if ($null -ne $activeDirectoryIdentity)
        {
            $userDetails.Add("AD_Upn", (($activeDirectoryIdentity.Claims | ForEach-Object {"$($_.upn.Value)"}) -Join "|"))
            $userDetails.Add("AD_Sam", (($activeDirectoryIdentity.Claims | ForEach-Object {"$($_.sam.Value)"}) -Join "|"))
            $userDetails.Add("AD_Email", (($activeDirectoryIdentity.Claims | ForEach-Object {"$($_.email.Value)"}) -Join "|"))
        }
    }

    if ($includeAzureActiveDirectoryDetails -eq $true)
    {
        $azureAdIdentity = $user.Identities | Where-Object {$_.IdentityProviderName -eq "Azure AD"}
        if ($null -ne $azureAdIdentity)
        {
            $userDetails.Add("AAD_Dn", (($azureAdIdentity.Claims | ForEach-Object {"$($_.dn.Value)"}) -Join "|"))
            $userDetails.Add("AAD_Email", (($azureAdIdentity.Claims | ForEach-Object {"$($_.email.Value)"}) -Join "|"))
        }
    }

    $usersList += $userDetails
}

# Write header
$header = $usersList.Keys | Select-Object -Unique
Set-Content -Path $csvExportPath -Value ($header -join ",")

foreach ($user in $usersList)
{
    Add-Content -Path $csvExportPath -Value ($user.Values -join ",")
}

$usersList | Format-Table
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

class UserDetails
{
    public string Id { get; set; }
    public string Username { get; set; }
    public string DisplayName { get; set; }
    public bool IsActive { get; set; }
    public bool IsService { get; set; }
    public string EmailAddress { get; set; }
    public string ScopedUserRoles { get; set; }
    public string AD_Upn { get; set; }
    public string AD_Sam { get; set; }
    public string AD_Email { get; set; }
    public string AAD_Dn { get; set; }
    public string AAD_Email { get; set; }
}

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string csvExportPath = "path:\\to\\users.csv";
bool includeUserRoles = true;
bool includeActiveDirectoryDetails = false;
bool includeAzureActiveDirectoryDetails = true;
bool includeInactiveUsers = false;

System.Collections.Generic.List<UserDetails> usersList = new System.Collections.Generic.List<UserDetails>();

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

// Get all users
var users = repository.Users.FindAll();

// Filter out inactive users if required
if (!includeInactiveUsers)
    users = users.Where(u => u.IsActive == true).ToList();

// Loop through users
foreach (var user in users)
{
    // Get basic details
    UserDetails userDetails = new UserDetails();
    userDetails.Id = user.Id;
    userDetails.Username = user.Username;
    userDetails.DisplayName = user.DisplayName;
    userDetails.IsActive = user.IsActive;
    userDetails.IsService = user.IsService;
    userDetails.EmailAddress = user.EmailAddress;

    // Check to see if user roles are included
    if (includeUserRoles)
    {
        var userTeamNames = repository.UserTeams.Get(user);
        foreach (var teamName in userTeamNames)
        {
            var team = repository.Teams.Get(teamName.Id);
            foreach (var role in repository.Teams.GetScopedUserRoles(team))
            {
                userDetails.ScopedUserRoles += string.Format("{0} ({1})|", (repository.UserRoles.Get(role.UserRoleId)).Name, (repository.Spaces.Get(role.SpaceId)).Name);
            }
        }
    }

    if (includeActiveDirectoryDetails)
    {
        var activeDirectoryDetails = user.Identities.FirstOrDefault(i => i.IdentityProviderName == "Active Directory");
        if (null != activeDirectoryDetails)
        {
            userDetails.AD_Upn = activeDirectoryDetails.Claims["upn"].Value;
            userDetails.AD_Sam = activeDirectoryDetails.Claims["sam"].Value;
            userDetails.AD_Email = activeDirectoryDetails.Claims["email"].Value;
        }
    }

    if (includeAzureActiveDirectoryDetails)
    {
        var azureActiveDirectoryDetails = user.Identities.FirstOrDefault(i => i.IdentityProviderName == "Azure AD");
        if (null != azureActiveDirectoryDetails)
        {
            userDetails.AAD_Dn = azureActiveDirectoryDetails.Claims["dn"].Value;
            userDetails.AAD_Email = azureActiveDirectoryDetails.Claims["email"].Value;
        }
    }

    usersList.Add(userDetails);
}

Console.WriteLine(string.Format("Found {0} results", usersList.Count.ToString()));

if (usersList.Count > 0)
{
    foreach (var result in usersList)
    {
        System.Collections.Generic.List<string> row = new System.Collections.Generic.List<string>();
        System.Collections.Generic.List<string> header = new System.Collections.Generic.List<string>();
        var isFirstRow = usersList.IndexOf(result) == 0;
        var properties = result.GetType().GetProperties();

        foreach (var property in properties)
        {
            Console.WriteLine(string.Format("{0}: {1}", property.Name, property.GetValue(result)));
            if (isFirstRow)
            {
                header.Add(property.Name);
            }
            row.Add((property.GetValue(result) == null ? string.Empty : property.GetValue(result).ToString()));
        }

        if (!string.IsNullOrWhiteSpace(csvExportPath))
        {
            using (System.IO.StreamWriter csvFile = new System.IO.StreamWriter(csvExportPath, true))
            {
                if (isFirstRow)
                {
                    // Write header
                    csvFile.WriteLine(string.Join(",", header.ToArray()));
                }
                csvFile.WriteLine(string.Join(",", row.ToArray()));
            }
        }
    }
}
```
Python3

```python
import csv
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if hasattr(results, 'keys') and 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)

    else:
        return results

    # return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

include_user_roles = True
include_non_active_users = False
include_active_directory_details = False
include_azure_active_directory = True
csv_export_path = "path:\\to\\users.csv"

# Get users
uri = '{0}/api/users'.format(octopus_server_uri)
users = get_octopus_resource(uri, headers)
users_list = []

# Loop through users
for user in users:
    if include_non_active_users != True and user['IsActive'] == False:
        continue

    user_details = {
        'Id': user['Id'],
        'Username': user['Username'],
        'DisplayName': user['DisplayName'],
        'IsActive': user['IsActive'],
        'IsService': user['IsService'],
        'EmailAddress': user['EmailAddress']
    }

    if include_user_roles:
        # Get the user's teams
        uri = '{0}/api/users/{1}/teams'.format(octopus_server_uri, user['Id'])
        user_team_names = get_octopus_resource(uri, headers)

        # Loop through teams
        for team_name in user_team_names:
            uri = '{0}/api/teams/{1}'.format(octopus_server_uri, team_name['Id'])
            team = get_octopus_resource(uri, headers)

            # Get scoped user roles
            uri = '{0}/api/teams/{1}/ScopedUserRoles'.format(octopus_server_uri, team['Id'])
            scoped_user_roles = get_octopus_resource(uri, headers)
            user_details['ScopedUserRoles'] = ''

            # Loop through roles
            for role in scoped_user_roles:
                if role['SpaceId'] == None:
                    role['SpaceId'] = 'Spaces-1'

                uri = '{0}/api/spaces/{1}'.format(octopus_server_uri, role['SpaceId'])
                space = get_octopus_resource(uri, headers)

                uri = '{0}/api/userroles/{1}'.format(octopus_server_uri, role['UserRoleId'])
                user_role = get_octopus_resource(uri, headers)

                user_details['ScopedUserRoles'] += '{0} ({1})|'.format(user_role['Name'], space['Name'])

    if include_active_directory_details:
        active_directory_identity = next((x for x in user['Identities'] if x['IdentityProviderName'] == 'Active Directory'), None)
        if active_directory_identity != None:
            user_details['AD_Upn'] = active_directory_identity['Claims']['upn']['Value']
            user_details['AD_Sam'] = active_directory_identity['Claims']['sam']['Value']
            user_details['AD_Email'] = active_directory_identity['Claims']['email']['Value']

    if include_azure_active_directory:
        azure_ad_identity = next((x for x in user['Identities'] if x['IdentityProviderName'] == 'Azure AD'), None)
        if azure_ad_identity != None:
            user_details['AAD_Dn'] = azure_ad_identity['Claims']['dn']['Value']
            user_details['AAD_Email'] = azure_ad_identity['Claims']['email']['Value']

    print(user_details)
    users_list.append(user_details)

if csv_export_path:
    with open(csv_export_path, mode='w') as csv_file:
        fieldnames = ['Id', 'Username', 'DisplayName', 'IsActive', 'IsService', 'EmailAddress', 'ScopedUserRoles', 'AD_Upn', 'AD_Sam', 'AD_Email', 'AAD_Dn', 'AAD_Email']
        writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
        writer.writeheader()
        for user in users_list:
            writer.writerow(user)
```
Go

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/url"
	"os"
	"reflect"
	"strconv"
	"strings"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

type UserDetails struct {
	Id              string
	Username        string
	DisplayName     string
	IsActive        string
	IsService       string
	EmailAddress    string
	ScopedUserRoles string
	AD_Upn          string
	AD_Sam          string
	AD_Email        string
	AAD_Dn          string
	AAD_Email       string
}

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	csvExportPath := "path:\\to\\users.csv"

	includeUserRoles := true
	includeActiveDirectoryDetails := false
	includeAzureActiveDirectoryDetails := true
	includeInactiveUsers := false

	usersList := []UserDetails{}

	// Create client object
	client := octopusAuth(apiURL, APIKey, "")

	// Get all users
	users, err := client.Users.GetAll()
	if err != nil {
		log.Println(err)
	}

	// Loop through users
	for _, user := range users {
		if !includeInactiveUsers && !user.IsActive {
			continue
		}

		// Record user information
		userDetails := UserDetails{}
		userDetails.Id = user.ID
		userDetails.Username = user.Username
		userDetails.DisplayName = user.DisplayName
		userDetails.IsActive = strconv.FormatBool(user.IsActive)
		userDetails.IsService = strconv.FormatBool(user.IsService)
		userDetails.EmailAddress = user.EmailAddress

		if includeUserRoles {
			userTeamNames, err := client.Users.GetTeams(user)
			if err != nil {
				log.Println(err)
			}

			for _, userTeamName := range *userTeamNames {
				team, err := client.Teams.GetByID(userTeamName.ID)
				if err != nil {
					log.Println(err)
				}

				roles, err := client.Teams.GetScopedUserRoles(*team, octopusdeploy.SkipTakeQuery{Skip: 0, Take: 1000})
				if err != nil {
					log.Println(err)
				}

				for _, role := range roles.Items {
					if role.SpaceID == "" {
						role.SpaceID = "Spaces-1"
					}
					space := GetSpace(apiURL, APIKey, role.SpaceID)
					userRole, err := client.UserRoles.GetByID(role.UserRoleID)
					if err != nil {
						log.Println(err)
					}
					userDetails.ScopedUserRoles += userRole.Name + " (" + space.Name + ")|"
				}
			}
		}

		for _, provider := range user.Identities {
			if provider.IdentityProviderName == "Active Directory" && includeActiveDirectoryDetails {
				userDetails.AD_Upn += provider.Claims["upn"].Value
				userDetails.AD_Sam += provider.Claims["sam"].Value
				userDetails.AD_Email += provider.Claims["email"].Value
			}
			if provider.IdentityProviderName == "Azure AD" && includeAzureActiveDirectoryDetails {
				userDetails.AAD_Dn += provider.Claims["dn"].Value
				userDetails.AAD_Email += provider.Claims["email"].Value
			}
		}

		usersList = append(usersList, userDetails)
	}

	if len(usersList) > 0 {
		fmt.Printf("Found %[1]s results \n", strconv.Itoa(len(usersList)))
		for i := 0; i < len(usersList); i++ {
			row := []string{}
			header := []string{}
			isFirstRow := false
			if i == 0 {
				isFirstRow = true
			}

			e := reflect.ValueOf(&usersList[i]).Elem()
			for j := 0; j < e.NumField(); j++ {
				if isFirstRow {
					header = append(header, e.Type().Field(j).Name)
				}
				row = append(row, e.Field(j).Interface().(string))
			}

			if csvExportPath != "" {
				file, err := os.OpenFile(csvExportPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600)
				if err != nil {
					log.Println(err)
				}
				dataWriter := bufio.NewWriter(file)
				if isFirstRow {
					dataWriter.WriteString(strings.Join(header, ",") + "\n")
				}
				dataWriter.WriteString(strings.Join(row, ",") + "\n")
				dataWriter.Flush()
				file.Close()
			}
		}
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceId string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	// Get specific space object
	space, err := client.Spaces.GetByID(spaceId)
	if err != nil {
		log.Println(err)
	} else {
		fmt.Println("Retrieved space " + space.Name)
	}

	return space
}
```
# List users with editing roles

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/list-users-with-editing-roles.md

This script will list all users in an Octopus instance that have user roles (permissions) containing the words Edit, Create or Delete.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- (Optional) path to export the results to a csv file

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = 'Stop';

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$csvExportPath = ""

function Invoke-PagedOctoGet($uriFragment)
{
    $items = @()
    $response = $null
    do {
        $uri = if ($response) { $octopusURL + $response.Links.'Page.Next' } else { "$octopusURL/$uriFragment" }
        $response = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ "X-Octopus-ApiKey" = $octopusAPIKey }
        $items += $response.Items
    } while ($response.Links.'Page.Next')

    $items
}

$users = Invoke-PagedOctoGet "api/users"
$usersWithEditPermissions = @()

foreach ($user in $users)
{
    $permissions = (Invoke-RestMethod `
        -Uri "$octopusURL/api/users/$($user.Id)/permissions" `
        -Headers @{ "X-Octopus-ApiKey" = $octopusAPIKey }).SpacePermissions.PSObject.Members `
        | Where-Object MemberType -eq "NoteProperty"

    $editPermissionsForUser = @()
    foreach ($name in $permissions.Name)
    {
        if (($name -match "Edit") -or ($name -match "Create") -or ($name -match "Delete"))
        {
            $editPermissionsForUser += $name
        }
    }

    if ($editPermissionsForUser)
    {
        $usersWithEditPermissions += [PSCustomObject] @{
            Id = $user.Id
            EmailAddress = $user.EmailAddress
            Username = $user.Username
            DisplayName = $user.DisplayName
            IsActive = $user.IsActive
            IsService = $user.IsService
            Permissions = ($editPermissionsForUser -join ",")
        }
    }
}

if (![string]::IsNullOrWhiteSpace($csvExportPath))
{
    Write-Host "Exporting results to CSV file: $csvExportPath"
    $usersWithEditPermissions | Export-Csv -Path $csvExportPath -NoTypeInformation
}

$usersWithEditPermissions | Format-Table
```
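The filter in the script above keeps any permission whose name matches Edit, Create or Delete (case-insensitively). The same predicate, isolated as a sketch (the function and constant names are illustrative, not part of the Octopus API):

```python
EDIT_WORDS = ('edit', 'create', 'delete')

def edit_permissions(permission_names):
    """Keep only permission names that imply write access."""
    return [name for name in permission_names
            if any(word in name.lower() for word in EDIT_WORDS)]

print(edit_permissions(['EnvironmentView', 'EnvironmentEdit', 'ProjectCreate', 'TaskView']))
# → ['EnvironmentEdit', 'ProjectCreate']
```

The scripts apply this predicate to the keys of each user's `SpacePermissions` object from `GET /api/users/{id}/permissions`.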
PowerShell (Octopus.Client)

```powershell
$ErrorActionPreference = "Stop";

# Load assembly
Add-Type -Path 'path:\to\Octopus.Client.dll'

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$csvExportPath = "path:\to\edit_permissions.csv"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
$client = New-Object Octopus.Client.OctopusClient($endpoint)

# Get users
$users = $repository.Users.GetAll()
$usersList = @()

# Loop through users
foreach ($user in $users) {
    $userPermissions = $repository.UserPermissions.Get($user)
    $editPermissions = @()

    foreach ($spacePermission in $userPermissions.SpacePermissions) {
        foreach ($permissionName in $spacePermission.Keys) {
            if ($permissionName.ToString().ToLower().Contains("create") -or $permissionName.ToString().ToLower().Contains("delete") -or $permissionName.ToString().ToLower().Contains("edit")) {
                $editPermissions += $permissionName.ToString()
            }
        }
    }

    if ($null -ne $editPermissions -and $editPermissions.Count -gt 0) {
        $usersList += @{
            Id           = $user.Id
            EmailAddress = $user.EmailAddress
            Username     = $user.Username
            DisplayName  = $user.DisplayName
            IsActive     = $user.IsActive
            IsService    = $user.IsService
            Permissions  = ($editPermissions -join "| ")
        }
    }
}

if (![string]::IsNullOrWhiteSpace($csvExportPath)) {
    # Write header
    $header = $usersList.Keys | Select-Object -Unique
    Set-Content -Path $csvExportPath -Value ($header -join ",")

    foreach ($user in $usersList) {
        Add-Content -Path $csvExportPath -Value ($user.Values -join ",")
    }
}
```
C#

```csharp
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

class UserDetails
{
    public string Id { get; set; }
    public string Username { get; set; }
    public string DisplayName { get; set; }
    public bool IsActive { get; set; }
    public bool IsService { get; set; }
    public string EmailAddress { get; set; }
    public string Permissions { get; set; }
}

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string csvExportPath = "path:\\to\\edit_permissions.csv";

var usersList = new System.Collections.Generic.List<UserDetails>();

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

// Get all users
var users = repository.Users.FindAll();

// Loop through users
foreach (var user in users)
{
    var editPermissions = new System.Collections.Generic.List<string>();
    var userPermissions = repository.UserPermissions.Get(user);

    // Loop through space permissions
    foreach (var spacePermission in userPermissions.SpacePermissions)
    {
        if (spacePermission.Key.ToString().ToLower().Contains("create") || spacePermission.Key.ToString().ToLower().Contains("delete") || spacePermission.Key.ToString().ToLower().Contains("edit"))
        {
            editPermissions.Add(spacePermission.Key.ToString());
        }
    }

    if (editPermissions.Count > 0)
    {
        // Get basic details
        UserDetails userDetails = new UserDetails();
        userDetails.Id = user.Id;
        userDetails.Username = user.Username;
        userDetails.DisplayName = user.DisplayName;
        userDetails.IsActive = user.IsActive;
        userDetails.IsService = user.IsService;
        userDetails.EmailAddress = user.EmailAddress;
        userDetails.Permissions = string.Join("|", editPermissions);
        usersList.Add(userDetails);
    }
}

Console.WriteLine(string.Format("Found {0} results", usersList.Count));

if (usersList.Count > 0)
{
    foreach (var result in usersList)
    {
        var row = new System.Collections.Generic.List<string>();
        var header = new System.Collections.Generic.List<string>();
        var isFirstRow = usersList.IndexOf(result) == 0;
        var properties = result.GetType().GetProperties();

        foreach (var property in properties)
        {
            Console.WriteLine(string.Format("{0}: {1}", property.Name, property.GetValue(result)));
            if (isFirstRow)
            {
                header.Add(property.Name);
            }
            row.Add(property.GetValue(result) == null ? string.Empty : property.GetValue(result).ToString());
        }

        if (!string.IsNullOrWhiteSpace(csvExportPath))
        {
            using (System.IO.StreamWriter csvFile = new System.IO.StreamWriter(csvExportPath, true))
            {
                if (isFirstRow)
                {
                    // Write header
                    csvFile.WriteLine(string.Join(",", header.ToArray()));
                }
                csvFile.WriteLine(string.Join(",", row.ToArray()));
            }
        }
    }
}
```
Python3

```python
import csv
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Some endpoints return a single object rather than a paged collection
    if not (hasattr(results, 'keys') and 'Items' in results.keys()):
        return results

    # Store results
    items += results['Items']

    # Check to see if there are more results
    if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
        skip_count += results['ItemsPerPage']
        items += get_octopus_resource(uri, headers, skip_count)

    # Return accumulated results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
csv_export_path = "path:\\to\\edit_permissions.csv"

# Get users
uri = '{0}/api/users'.format(octopus_server_uri)
users = get_octopus_resource(uri, headers)
users_list = []

# Loop through users
for user in users:
    uri = '{0}/api/users/{1}/permissions'.format(octopus_server_uri, user['Id'])
    user_permissions = get_octopus_resource(uri, headers)
    edit_permissions = []

    # Loop through space permissions
    for space_permission in user_permissions['SpacePermissions']:
        if "Create" in space_permission or "Delete" in space_permission or "Edit" in space_permission:
            edit_permissions.append(space_permission)

    if len(edit_permissions) > 0:
        users_list.append({
            'Id': user['Id'],
            'EmailAddress': user['EmailAddress'],
            'Username': user['Username'],
            'DisplayName': user['DisplayName'],
            'IsActive': user['IsActive'],
            'IsService': user['IsService'],
            'Permissions': '|'.join(edit_permissions)
        })

if csv_export_path:
    with open(csv_export_path, mode='w', newline='') as csv_file:
        fieldnames = ['Id', 'EmailAddress', 'Username', 'DisplayName', 'IsActive', 'IsService', 'Permissions']
        writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
        writer.writeheader()
        for user in users_list:
            writer.writerow(user)
```
Go ```go package main import ( "bufio" "fmt" "log" "net/url" "os" "reflect" "strconv" "strings" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) type UserDetails struct { Id string Username string DisplayName string IsActive string IsService string EmailAddress string Permissions string } func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" csvExportPath := "path:\\to\\edit_permissions.csv" usersList := []UserDetails{} // Create client object client := octopusAuth(apiURL, APIKey, "") // Get all users users, err := client.Users.GetAll() if err != nil { log.Println(err) } // Loop through users for _, user := range users { // Get user permissions userPermissions, err := client.Users.GetPermissions(user) editPermissions := []string{} if err != nil { log.Println(err) } // Loop through the permissions v := reflect.ValueOf(userPermissions.SpacePermissions) for i := 0; i < v.NumField(); i++ { if strings.Contains(v.Type().Field(i).Name, "Create") || strings.Contains(v.Type().Field(i).Name, "Delete") || strings.Contains(v.Type().Field(i).Name, "Edit") { permissionRestrictions := v.Field(i).Interface().([]octopusdeploy.UserPermissionRestriction) if len(permissionRestrictions) > 0 { editPermissions = append(editPermissions, v.Type().Field(i).Name) } } } if len(editPermissions) > 0 { // record user information userDetails := UserDetails{} userDetails.Id = user.ID userDetails.Username = user.Username userDetails.DisplayName = user.DisplayName userDetails.IsActive = strconv.FormatBool(user.IsActive) userDetails.IsService = strconv.FormatBool(user.IsService) userDetails.EmailAddress = user.EmailAddress userDetails.Permissions = strings.Join(editPermissions, "|") usersList = append(usersList, userDetails) } } if len(usersList) > 0 { fmt.Printf("Found %[1]s results \n", strconv.Itoa(len(usersList))) for i := 0; i < len(usersList); i++ { row := []string{} header := []string{} isFirstRow := false if i 
== 0 { isFirstRow = true } e := reflect.ValueOf(&usersList[i]).Elem() for j := 0; j < e.NumField(); j++ { if isFirstRow { header = append(header, e.Type().Field(j).Name) } row = append(row, e.Field(j).Interface().(string)) } if csvExportPath != "" { file, err := os.OpenFile(csvExportPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600) if err != nil { log.Println(err) } dataWriter := bufio.NewWriter(file) if isFirstRow { dataWriter.WriteString(strings.Join(header, ",") + "\n") } dataWriter.WriteString(strings.Join(row, ",") + "\n") dataWriter.Flush() file.Close() } } } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceId string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") // Get specific space object space, err := client.Spaces.GetByID(spaceId) if err != nil { log.Println(err) } else { fmt.Println("Retrieved space " + space.Name) } return space } ```
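Across all five language variants above, the core check is the same: keep any permission name that contains Create, Delete, or Edit, case-insensitively. A minimal, dependency-free Python sketch of that filter (the permission names shown are illustrative, not an exhaustive Octopus list):

```python
def edit_style_permissions(permission_names):
    """Return the permission names that grant create, delete, or edit rights."""
    keywords = ("create", "delete", "edit")
    return [name for name in permission_names
            if any(keyword in name.lower() for keyword in keywords)]

# Illustrative permission names
sample = ["ProjectView", "ProjectEdit", "ReleaseCreate", "TaskView", "EnvironmentDelete"]
print(edit_style_permissions(sample))
# ['ProjectEdit', 'ReleaseCreate', 'EnvironmentDelete']
```

Each script then joins the surviving names into a single delimited string per user before printing or exporting to CSV.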
# List users with role

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/list-users-with-role.md

This script will list all users with a given role by team. You can also filter the list by specifying a space name.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- User Role Name
- (Optional) Space Name

## Example output

### All spaces with role name `Project Deployer`

```
Team: Build Servers
Space: Default
TeamCity Build Server

Team: Can Deploy But Not Download Packages
Space: Default
PackageTest

Team: Developer Lower Environment
Space: Default
Paul Oliver the Developer

Team: Devs
Space: Default

Team: Quick Test
Space: Default
Ryan Rousseau

Team: ShawnTest
Space: AzureDevOps
Adam Close
External security groups:
TestDomain\SpecialGroup
```

### Space name AzureDevOps

```
Team: ShawnTest
Space: AzureDevOps
Adam Close
External security groups:
TestDomain\SpecialGroup
```

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusBaseURL = "https://your-octopus-url/api"
$octopusAPIKey = "API-YOUR-KEY"
$headers = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$roleName = "Project Deployer"
$spaceName = "" # Leave blank if you're using an older version of Octopus or you want to search all spaces

# Get the space id
$spaceId = ((Invoke-RestMethod -Method Get -Uri "$octopusBaseURL/spaces/all" -Headers $headers -ErrorVariable octoError) | Where-Object {$_.Name -eq $spaceName}).Id

# Get reference to role
$role = (Invoke-RestMethod -Method Get -Uri "$octopusBaseURL/userroles/all" -Headers $headers -ErrorVariable octoError) | Where-Object {$_.Name -eq $roleName}

# Get list of teams
$teams = (Invoke-RestMethod -Method Get -Uri "$octopusBaseURL/teams/all" -Headers $headers -ErrorVariable octoError)

# Loop through teams
foreach ($team in $teams) {
    # Get the scoped user role
    $scopedUserRoles = Invoke-RestMethod -Method Get -Uri ("$octopusBaseURL/teams/$($team.Id)/scopeduserroles") -Headers $headers -ErrorVariable octoError

    # Loop through the scoped user roles
    foreach ($scopedUserRole in $scopedUserRoles) {
        # Check to see if space was specified
        if (![string]::IsNullOrEmpty($spaceId)) {
            # Filter items by space
            $scopedUserRole.Items = $scopedUserRole.Items | Where-Object {$_.SpaceId -eq $spaceId}
        }

        # Check to see if the team has the role
        if ($null -ne ($scopedUserRole.Items | Where-Object {$_.UserRoleId -eq $role.Id})) {
            # Display team name
            Write-Output "Team: $($team.Name)"

            # Check space id
            if ([string]::IsNullOrEmpty($spaceName)) {
                # Get the space id
                $teamSpaceId = ($scopedUserRole.Items | Where-Object {$_.UserRoleId -eq $role.Id}).SpaceId

                # Get the space name
                $teamSpaceName = (Invoke-RestMethod -Method Get -Uri "$octopusBaseURL/spaces/$teamSpaceId" -Headers $headers -ErrorVariable octoError).Name

                # Display the space name
                Write-Output "Space: $teamSpaceName"
            }
            else {
                # Display the space name
                Write-Output "Space: $spaceName"
            }

            Write-Output "Users:"

            # Loop through members
            foreach ($userId in $team.MemberUserIds) {
                # Get user object
                $user = Invoke-RestMethod -Method Get -Uri ("$octopusBaseURL/users/$userId") -Headers $headers -ErrorVariable octoError

                # Display user
                Write-Output "$($user.DisplayName)"
            }

            # Check for external security groups
            if (($null -ne $team.ExternalSecurityGroups) -and ($team.ExternalSecurityGroups.Count -gt 0)) {
                # External groups
                Write-Output "External security groups:"

                # Loop through groups
                foreach ($group in $team.ExternalSecurityGroups) {
                    # Display group
                    Write-Output "$($group.Id)"
                }
            }
        }
    }
}
```
PowerShell (Octopus.Client)

```powershell
# Define working variables
$octopusBaseURL = "https://your-octopus-url/api"
$octopusAPIKey = "API-YOUR-KEY"

# Load the Octopus.Client assembly from where you have it located.
Add-Type -Path "C:\Octopus.Client\Octopus.Client.dll"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusBaseURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)

$roleName = "Project Deployer"
$spaceName = ""

try {
    $space = $repository.Spaces.FindByName($spaceName)

    # Get specific role
    $role = $repository.UserRoles.FindByName($roleName)

    # Get all the teams
    $teams = $repository.Teams.GetAll()

    # Loop through the teams
    foreach ($team in $teams) {
        # Get all associated user roles
        $scopedUserRoles = $repository.Teams.GetScopedUserRoles($team)

        # Check to see if there was a space defined
        if (![string]::IsNullOrEmpty($spaceName)) {
            # Filter on space
            $scopedUserRoles = $scopedUserRoles | Where-Object {$_.SpaceId -eq $space.Id}
        }

        # Loop through the scoped user roles
        foreach ($scopedUserRole in $scopedUserRoles) {
            # Check role id
            if ($scopedUserRole.UserRoleId -eq $role.Id) {
                # Display the team name
                Write-Output "Team: $($team.Name)"

                # Display the space name
                Write-Output "Space: $($repository.Spaces.Get($scopedUserRole.SpaceId).Name)"

                Write-Output "Users:"

                # Loop through the members
                foreach ($member in $team.MemberUserIds) {
                    # Get the user account
                    $user = $repository.Users.GetAll() | Where-Object {$_.Id -eq $member}

                    # Display
                    Write-Output "$($user.DisplayName)"
                }

                # Check to see if there were external groups
                if (($null -ne $team.ExternalSecurityGroups) -and ($team.ExternalSecurityGroups.Count -gt 0)) {
                    Write-Output "External security groups:"

                    # Loop through groups
                    foreach ($group in $team.ExternalSecurityGroups) {
                        # Display group
                        Write-Output "$($group.Id)"
                    }
                }
            }
        }
    }
}
catch {
    Write-Output "An error occurred: $($_.Exception.Message)"
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package System.Security.Permissions
var octopusBaseURL = "https://your-octopus-url/api";
var octopusAPIKey = "API-YOUR-KEY";

var endpoint = new OctopusServerEndpoint(octopusBaseURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);

string roleName = "Project Deployer";
var spaceName = "";

try
{
    // Get space id
    var space = repository.Spaces.FindByName(spaceName);

    // Get reference to the role
    var role = repository.UserRoles.FindByName(roleName);

    // Get all teams to search
    var teams = repository.Teams.FindAll();

    // Loop through the teams
    foreach (var team in teams)
    {
        // Retrieve scoped user roles
        var scopedUserRoles = repository.Teams.GetScopedUserRoles(team);

        // Check to see if there was a space name specified
        if (!string.IsNullOrEmpty(spaceName))
        {
            // Filter returned scopedUserRoles
            scopedUserRoles = scopedUserRoles.Where(x => x.SpaceId == space.Id).ToList();
        }

        // Loop through returned roles
        foreach (var scopedUserRole in scopedUserRoles)
        {
            // Check to see if it's the role we're looking for
            if (scopedUserRole.UserRoleId == role.Id)
            {
                // Output team name
                Console.WriteLine(string.Format("Team: {0}", team.Name));

                // Output space name
                Console.WriteLine(string.Format("Space: {0}", repository.Spaces.Get(scopedUserRole.SpaceId).Name));
                Console.WriteLine("Users:");

                // Loop through team members
                foreach (var member in team.MemberUserIds)
                {
                    // Get the user object
                    var user = repository.Users.Get(member);

                    // Display the user name
                    Console.WriteLine(user.DisplayName);
                }

                // Check for external groups
                if ((team.ExternalSecurityGroups != null) && (team.ExternalSecurityGroups.Count > 0))
                {
                    Console.WriteLine("External security groups:");

                    // Iterate through external security groups
                    foreach (var group in team.ExternalSecurityGroups)
                    {
                        Console.WriteLine(group.Id);
                    }
                }
            }
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Some endpoints return a single object rather than a paged collection
    if not (hasattr(results, 'keys') and 'Items' in results.keys()):
        return results

    # Store results
    items += results['Items']

    # Check to see if there are more results
    if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
        skip_count += results['ItemsPerPage']
        items += get_octopus_resource(uri, headers, skip_count)

    # Return accumulated results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
role_name = 'Project Deployer'
space_name = 'Default'
headers = {'X-Octopus-ApiKey': octopus_api_key}

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get teams
uri = '{0}/api/teams'.format(octopus_server_uri)
teams = get_octopus_resource(uri, headers)

# Get the role in question
uri = '{0}/api/userroles'.format(octopus_server_uri)
user_roles = get_octopus_resource(uri, headers)
user_role = next((x for x in user_roles if x['Name'] == role_name), None)

# Loop through teams
for team in teams:
    # Get the scoped user roles
    uri = '{0}/api/teams/{1}/scopeduserroles'.format(octopus_server_uri, team['Id'])
    scoped_user_roles = get_octopus_resource(uri, headers)

    # Get the role that matches
    scoped_user_role = next((r for r in scoped_user_roles if r['UserRoleId'] == user_role['Id']), None)

    # Check to see if it has the role
    if scoped_user_role is not None:
        print('Team: {0}'.format(team['Name']))
        print('Users:')

        # Display the team members
        for user_id in team['MemberUserIds']:
            uri = '{0}/api/users/{1}'.format(octopus_server_uri, user_id)
            user = get_octopus_resource(uri, headers)
            print(user['DisplayName'])

        if team['ExternalSecurityGroups'] is not None and len(team['ExternalSecurityGroups']) > 0:
            for group in team['ExternalSecurityGroups']:
                print(group['Id'])
```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" //spaceName := "Default" userRoleName := "Project deployer" // Create client object client := octopusAuth(apiURL, APIKey, "") // Get all teams teams, err := client.Teams.GetAll() if err != nil { log.Println(err) } // Get user role userRole := GetUserRoleByName(client, userRoleName) // Loop through teams for _, team := range teams { // Get scoped user roles scopedUserRoles, err := client.Teams.GetScopedUserRoles(*team, octopusdeploy.SkipTakeQuery{Skip: 0, Take: 1000}) if err != nil { log.Println(err) } scopedUserRole := GetUserRole(scopedUserRoles.Items, userRole) if scopedUserRole != nil { fmt.Printf("Team: %[1]s \n", team.Name) fmt.Println("Users:") for _, userId := range team.MemberUserIDs { user, err := client.Users.GetByID(userId) if err != nil { log.Println(err) } fmt.Println(user.DisplayName) } if team.ExternalSecurityGroups != nil && len(team.ExternalSecurityGroups) > 0 { for _, group := range team.ExternalSecurityGroups { fmt.Println(group.DisplayIDAndName) } } } } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceId string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") // Get specific space object space, err := client.Spaces.GetByID(spaceId) if err != nil { log.Println(err) } else { fmt.Println("Retrieved space " + space.Name) } return space } func GetUserRoleByName(client *octopusdeploy.Client, roleName string) *octopusdeploy.UserRole { // Get all user roles userRoles, err := client.UserRoles.GetAll() if err != nil { log.Println(err) } // Loop through roles for _, role := range userRoles 
{ if role.Name == roleName { return role } } return nil } func GetUserRole(roles []*octopusdeploy.ScopedUserRole, role *octopusdeploy.UserRole) *octopusdeploy.ScopedUserRole { for _, v := range roles { if v.UserRoleID == role.ID { return v } } return nil } ```
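Every variant above walks the same shape of data: each team owns a list of scoped user roles, and a team matches when one of those roles references the target role id (optionally restricted to a single space). A small, standalone Python sketch of that matching step, using hypothetical ids and dictionary shapes modeled on the API responses:

```python
def teams_with_role(teams, scoped_roles_by_team, role_id, space_id=None):
    """Return the names of teams that hold role_id, optionally limited to one space."""
    matches = []
    for team in teams:
        for scoped in scoped_roles_by_team.get(team["Id"], []):
            if scoped["UserRoleId"] != role_id:
                continue
            if space_id is not None and scoped["SpaceId"] != space_id:
                continue
            matches.append(team["Name"])
            break  # one matching scoped role is enough for this team
    return matches

# Hypothetical sample data
teams = [{"Id": "Teams-1", "Name": "Build Servers"},
         {"Id": "Teams-2", "Name": "Devs"}]
scoped = {"Teams-1": [{"UserRoleId": "userroles-projectdeployer", "SpaceId": "Spaces-1"}],
          "Teams-2": [{"UserRoleId": "userroles-viewer", "SpaceId": "Spaces-1"}]}
print(teams_with_role(teams, scoped, "userroles-projectdeployer"))
# ['Build Servers']
```

The real scripts then resolve each matching team's `MemberUserIds` to user display names, which this sketch omits.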
# Remove a project from team

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/remove-project-from-team.md

This script demonstrates how to programmatically remove a project from a team.

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the space to work with
- Name of the team
- Name of the project

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$projectName = "MyProject"
$spaceName = "default"
$teamName = "MyTeam"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get project
$project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName}

# Get team
$team = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/teams" -Headers $header).Items | Where-Object {$_.Name -eq $teamName}

# Get scoped user roles
$scopedUserRoles = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/teams/$($team.Id)/scopeduserroles" -Headers $header).Items | Where-Object {$_.ProjectIds -contains $project.Id}

# Loop through results and remove project Id
foreach ($scopedUserRole in $scopedUserRoles) {
    # Filter out project
    $scopedUserRole.ProjectIds = ,($scopedUserRole.ProjectIds | Where-Object {$_ -notcontains $project.Id}) # Yes, the , is supposed to be there

    # Update scoped user role
    Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/scopeduserroles/$($scopedUserRole.Id)" -Body ($scopedUserRole | ConvertTo-Json -Depth 10) -Headers $header
}
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$projectName = "MyProject"
$teamName = "MyTeam"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try {
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get project
    $project = $repositoryForSpace.Projects.FindByName($projectName)

    # Get team
    $team = $repositoryForSpace.Teams.FindByName($teamName)

    # Get scoped user roles
    $scopedUserRoles = $repositoryForSpace.ScopedUserRoles.FindMany({param($p) $p.ProjectIds -contains $project.Id -and $p.TeamId -eq $team.Id})

    # Loop through scoped user roles and remove where present
    foreach ($scopedUserRole in $scopedUserRoles) {
        $scopedUserRole.ProjectIds = [Octopus.Client.Model.ReferenceCollection]($scopedUserRole.ProjectIds | Where-Object {$_ -notcontains $project.Id})
        $repositoryForSpace.ScopedUserRoles.Modify($scopedUserRole)
    }
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "default";
string projectName = "MyProject";
string teamName = "MyTeam";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get project
    var project = repositoryForSpace.Projects.FindByName(projectName);

    // Get team
    var team = repositoryForSpace.Teams.FindByName(teamName);

    // Get scoped user roles
    var scopedUserRoles = repositoryForSpace.Teams.GetScopedUserRoles(team);

    // Loop through scoped user roles and remove project reference
    foreach (var scopedUserRole in scopedUserRoles)
    {
        scopedUserRole.ProjectIds = new Octopus.Client.Model.ReferenceCollection(scopedUserRole.ProjectIds.Where(p => p != project.Id).ToArray());
        repositoryForSpace.ScopedUserRoles.Modify(scopedUserRole);
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Some endpoints return a single object rather than a paged collection
    if not (hasattr(results, 'keys') and 'Items' in results.keys()):
        return results

    # Store results
    items += results['Items']

    # Check to see if there are more results
    if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
        skip_count += results['ItemsPerPage']
        items += get_octopus_resource(uri, headers, skip_count)

    # Return accumulated results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = 'Default'
project_name = "MyProject"
team_name = "MyTeam"

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get project
uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)
project = next((p for p in projects if p['Name'] == project_name), None)

# Get team
uri = '{0}/api/{1}/teams'.format(octopus_server_uri, space['Id'])
teams = get_octopus_resource(uri, headers)
team = next((t for t in teams if t['Name'] == team_name), None)

# Get scoped user roles
uri = '{0}/api/{1}/teams/{2}/scopeduserroles'.format(octopus_server_uri, space['Id'], team['Id'])
scoped_user_roles = get_octopus_resource(uri, headers)

for scoped_user_role in scoped_user_roles:
    if project['Id'] in scoped_user_role['ProjectIds']:
        scoped_user_role['ProjectIds'].remove(project['Id'])

        # Update the scoped user role
        print('Removing team {0} from project {1}'.format(team['Name'], project['Name']))
        uri = '{0}/api/{1}/scopeduserroles/{2}'.format(octopus_server_uri, space['Id'], scoped_user_role['Id'])
        response = requests.put(uri, headers=headers, json=scoped_user_role)
        response.raise_for_status()
```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" projectName := "MyProject" teamName := "MyTeam" // Get the space object space := GetSpace(apiURL, APIKey, spaceName) // Create client for space client := octopusAuth(apiURL, APIKey, space.ID) // Get team team := GetTeam(client, space, teamName, 0) // Get project project := GetProject(apiURL, APIKey, space, projectName) // Get the scoped user roles for the team scopedUserRoles, err := client.Teams.GetScopedUserRoles(*team, octopusdeploy.SkipTakeQuery{Skip: 0, Take: 1000}) if err != nil { log.Println(err) } // Loop through scoped user roles for _, scopedUserRole := range scopedUserRoles.Items { if arrayContains(scopedUserRole.ProjectIDs, project.ID) { // Rebuild slice without that Id fmt.Printf("Removing %[1]s from %[2]s \n", team.Name, project.Name) scopedUserRole.ProjectIDs = RemoveFromArray(scopedUserRole.ProjectIDs, project.ID) client.ScopedUserRoles.Update(scopedUserRole) } } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetTeam(client *octopusdeploy.Client, space *octopusdeploy.Space, teamName string, skip int) *octopusdeploy.Team { // Create query teamsQuery := octopusdeploy.TeamsQuery{ PartialName: teamName, Spaces: []string{space.ID}, } // 
Query for team teams, err := client.Teams.Get(teamsQuery) if err != nil { log.Println(err) } if len(teams.Items) == teams.ItemsPerPage { // call again team := GetTeam(client, space, teamName, (skip + len(teams.Items))) if team != nil { return team } } else { // Loop through returned items for _, team := range teams.Items { if team.Name == teamName { return team } } } return nil } func arrayContains(s []string, str string) bool { for _, v := range s { if v == str { return true } } return false } func RemoveFromArray(items []string, item string) []string { newItems := []string{} for _, entry := range items { if entry != item { newItems = append(newItems, entry) } } return newItems } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } ```
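The update logic shared by the scripts above is a simple list rewrite: for every scoped user role on the team that references the project, drop the project id and send the modified role back to the API. A minimal Python sketch of that transformation alone (ids are illustrative; the HTTP PUT is omitted):

```python
def remove_project_from_roles(scoped_user_roles, project_id):
    """Drop project_id from each scoped user role; return the roles that changed."""
    changed = []
    for role in scoped_user_roles:
        if project_id in role["ProjectIds"]:
            role["ProjectIds"] = [p for p in role["ProjectIds"] if p != project_id]
            changed.append(role)  # these are the roles to PUT back to the API
    return changed

# Hypothetical scoped user roles for one team
roles = [{"Id": "scopeduserroles-1", "ProjectIds": ["Projects-1", "Projects-2"]},
         {"Id": "scopeduserroles-2", "ProjectIds": ["Projects-2"]}]
modified = remove_project_from_roles(roles, "Projects-1")
print([r["Id"] for r in modified])
# ['scopeduserroles-1']
```

Only the changed roles need to be written back; roles that never referenced the project are left untouched.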
# Swap AD group with LDAP group

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/swap-ad-domain-group-with-ldap-group.md

This script demonstrates how to programmatically swap any Active Directory external group for a matching LDAP external group in each Octopus team. This can be useful when you are migrating from the Active Directory authentication provider to the LDAP provider.

We also have a script that will [swap Active Directory login records with matching LDAP ones](/docs/octopus-rest-api/examples/users-and-teams/swap-users-ad-domain-to-ldap) for Octopus users.

:::div{.hint}
**Note:** Please note there are some things to consider before using this script:

- Both the [Active Directory](/docs/security/authentication/active-directory/) and [LDAP](/docs/security/authentication/ldap) providers must be enabled for this script to work as it queries both providers.
- Always ensure you test the script on a non-production server first, and have a production database backup.
:::

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Name of the Active Directory domain to use to look up the groups to swap
- WhatIf - A boolean value to toggle whether or not to perform the actual updates to teams in Octopus.
- Remove old teams - A boolean value to toggle whether or not to remove the existing Active Directory groups from each team.

## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop" $octopusURL = "https://your-octopus-url" # Replace with your instance URL $octopusAPIKey = "API-YOUR-KEY" # Replace with a service account API Key $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } # Script options # Provide the domain. This is needed to look up the group to ensure it's a valid AD Group we're working on. $AD_Domain = "YOUR_DOMAIN" # Set this to $False if you want the Script to perform the update on Octopus Teams. $WhatIf = $True # Set this to $True if you want the Script to remove old Active Directory teams once the LDAP group has been found and added. $RemoveOldTeams = $False # Limit how may teams are retrieved/updated. # Use these two variables to work through if you have hundreds of teams. $skipIndex = 0 $recordsToBringBack = 30 # Get teams Write-Host "Pulling teams starting at index $skipIndex and getting a max of $recordsToBringBack records back" $teamList = Invoke-RestMethod -Method GET -Uri "$OctopusUrl/api/teams?skip=$skipIndex&take=$recordsToBringBack" -Headers $header $teams = $teamList.Items $ldapRecordsToAdd = @() $activeDirectoryRecordsToRemove = @() $recordsUpdated = 0 foreach ($team in $teams) { try { Write-Host "Working on team: '$($team.Name)'$(if (![string]::IsNullOrWhiteSpace($team.SpaceId)) {" from Space '$($team.SpaceId)'"})" $teamExternalGroups = $team.ExternalSecurityGroups if ($teamExternalGroups.Count -eq 0) { Write-Verbose "Team: '$($team.Name)' doesn't have any external groups, skipping" continue } else { foreach ($externalSecurityGroup in $team.ExternalSecurityGroups) { $externalName = $externalSecurityGroup.DisplayName if ($null -eq $externalName) { continue } else { # Check if this external group is an AD group $ad_TeamNameToFind = "$AD_Domain\$externalName" $directoryServicesResults = Invoke-RestMethod -Method GET -Uri "$octopusURL/api/externalgroups/directoryServices?partialName=$([System.Web.HTTPUtility]::UrlEncode($ad_TeamNameToFind))" -Headers 
$header $matchFound = $False foreach ($adResult in $directoryServicesResults) { if ($adResult.DisplayName -eq $externalName -and $adResult.Id -eq $externalSecurityGroup.Id) { Write-Host "Found a matching team name in AD for '$($team.Name)' that matches the SID $($externalSecurityGroup.Id)." -ForegroundColor Green $matchFound = $true break; } } # Next, check to see if to find a matching group in LDAP if ($matchFound -eq $True) { $ldapTeamNameToFind = "$externalName" $ldapResults = Invoke-RestMethod -Method GET -Uri "$octopusURL/api/externalgroups/ldap?partialName=$([System.Web.HTTPUtility]::UrlEncode($ldapTeamNameToFind))" -Headers $header foreach ($ldapResult in $ldapResults) { if ($ldapResult.DisplayName -eq $externalName) { Write-Host "Found a matching team name in LDAP for '$($team.Name)'." -ForegroundColor Green $ldapMatchFound = $true break; } } $foundExistingMatch = $False if ($ldapMatchFound -eq $True) { # Does the Octopus team already have this LDAP Group? foreach ($group in $team.ExternalSecurityGroups) { if ($group.Id -eq $ldapResult.Id) { $foundExistingMatch = $true break } } if ($foundExistingMatch -eq $false) { $ldapRecordsToAdd += $ldapResult } else { Write-Host "The LDAP group already existed on team '$($team.Name)'." 
} if ($RemoveOldTeams -eq $True) { Write-Host "Existing AD Group with SID $($externalSecurityGroup.Id) in team '$($team.Name)' will be marked to be removed" $activeDirectoryRecordsToRemove += $adResult.Id } } } } } if ($ldapRecordsToAdd.Length -gt 0) { foreach ($teamToAdd in $ldapRecordsToAdd) { $team.ExternalSecurityGroups += $teamToAdd } } if ($RemoveOldTeams -eq $True -and $activeDirectoryRecordsToRemove.Length -gt 0) { $externalGroups = @() foreach ($group in $team.ExternalSecurityGroups) { if ($activeDirectoryRecordsToRemove -contains $group.Id) { Write-Verbose "Removing AD group with SID $($group.Id)" continue } else { $externalGroups += $group } } Write-Host "Filtered external groups from $($team.ExternalSecurityGroups.Length) to $($externalGroups.Length)" $team.ExternalSecurityGroups = $externalGroups } if ($ldapRecordsToAdd.Length -gt 0 -or ($RemoveOldTeams -eq $True -and $activeDirectoryRecordsToRemove.Length -gt 0)) { $TeamUpdateUri = "$OctopusUrl/api/teams/$($team.Id)" $TeamBody = $($team | ConvertTo-Json -Depth 10 -Compress) if ($WhatIf -eq $True) { Write-Host "WhatIf = True. Update for team '$($Team.Name)' would have been:" Write-Host "$($TeamBody)" } else { Write-Host "Updating team '$($Team.Name)' in Octopus Deploy" Invoke-RestMethod -Method PUT -Uri $TeamUpdateUri -Headers $header -Body $teamBody | Out-Null } $recordsUpdated += 1 } } } catch { Write-Error "An error occurred with Team: $($team.Name) - $($_.Exception.ToString())" } } Write-Host "Updated $recordsUpdated team(s)." ```
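Stripped of the REST calls, the per-team swap above reduces to: add the matching LDAP group if it is not already present, then optionally drop the old AD group. A minimal sketch of that logic, with a hypothetical `team_groups` list and `ldap_lookup` callable (the real script first verifies each group against the directory services endpoint before treating it as an AD group):

```python
def swap_groups(team_groups, ldap_lookup, remove_old=False):
    """team_groups is a list of {'Id', 'DisplayName'} external groups;
    ldap_lookup maps a display name to the matching LDAP group, or None."""
    result = list(team_groups)
    existing_ids = {g['Id'] for g in result}
    for group in team_groups:
        ldap = ldap_lookup(group['DisplayName'])
        if ldap is None:
            continue  # no matching LDAP group found, leave the team untouched
        if ldap['Id'] not in existing_ids:
            result.append(ldap)  # add the LDAP group to the team
            existing_ids.add(ldap['Id'])
        if remove_old:
            # drop the old AD group only after an LDAP match was found
            result = [g for g in result if g['Id'] != group['Id']]
    return result
```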
# Change users AD domain to LDAP

Source: https://octopus.com/docs/octopus-rest-api/examples/users-and-teams/swap-users-ad-domain-to-ldap.md

This script demonstrates how to programmatically swap an Octopus user's Active Directory login record for a matching LDAP one. This can be useful when you are migrating from the Active Directory authentication provider to the LDAP provider.

We also have a script that will [swap Active Directory groups with matching LDAP groups](/docs/octopus-rest-api/examples/users-and-teams/swap-ad-domain-group-with-ldap-group) for Octopus teams.

:::div{.hint}
**Note:** There are some things to consider before using this script:

- The [LDAP authentication provider](/docs/security/authentication/ldap) must be enabled for this script to work, as it queries for matching users in LDAP.
- The script won't work if the LDAP server and the AD server domains are different, for example when migrating from `domain-one.local` to `domain-two.local`.
- Always test the script on a non-production server first, and have a production database backup.
:::

## Usage

Provide values for:

- Octopus URL
- Octopus API Key
- Max number of records to update in the script execution
- Name of the domain used to find a user's existing Active Directory record to optionally remove, in the format `your-ad-domain.com`
- Name of the domain used when searching LDAP for matching external user records, in the format `your-ldap-domain.com`. *This is typically the same value as the Active Directory domain.*
- LDAP username lookup - a boolean value to toggle whether or not to include the LDAP domain when matching the Active Directory username to the LDAP one
- WhatIf - a boolean value to toggle whether or not to perform the actual updates to users in Octopus
- Remove old Active Directory records - a boolean value to toggle whether or not to remove the existing Active Directory record from each user

## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop" $octopusURL = "https://your-octopus-url" # Replace with your instance URL $octopusAPIKey = "API-YOUR-KEY" # Replace with a service account API Key $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } # The max number of records you want to update in this batch $maxRecordsToUpdate = 1 # Provide the domain. This is needed to find the user AD identity (to potentially remove) $AD_Domain = "your-ad-domain.com" # Provide the domain for LDAP. Typically this is the same as the AD_Domain value. $LDAP_Domain = "your-ldap-domain.com" # If set to $True -> the script will search for a matching user in LDAP using the format: username@$LDAP_Domain # If set to $False -> the script will search for a matching user in LDAP using the format: username $LDAP_UsernameLookup_IncludeDomain = $True # Set this to $False if you want the Script to perform the update on Octopus Users. $WhatIf = $True # Set this to $True if you want the Script to remove old Active Directory records once the LDAP user has been found and added. $RemoveActiveDirectoryRecords = $False $skipIndex = 0 $recordsToBringBack = 30 $recordsUpdated = 0 # Continue until we reach the end of the user list or until we go over the max records to update while ($True) { try { Write-Host "Pulling users starting at index $skipIndex and getting a max of $recordsToBringBack records back" $userList = Invoke-RestMethod -Method GET -Uri "$OctopusUrl/api/users?skip=$skipIndex&take=$recordsToBringBack" -Headers $header # Update to pull back the next batch of users $skipIndex = $skipIndex + $recordsToBringBack if ($userList.Items.Count -eq 0) { break; } foreach ($user in $userList.Items) { if ($user.IsService -eq $true -or $user.Identities.Count -eq 0) { # Skip Octopus Deploy Service Accounts or users not tied to an active directory account continue; } Write-Host "Checking to see if $($user.UserName) has an active directory account." 
$foundActiveDirectoryRecordForUser = $false for ($i = 0; $i -lt $user.Identities.Count; $i++) { if ($user.Identities[$i].IdentityProviderName -ne "Active Directory") { # We only care about active directory records. continue; } Write-Host "$($user.UserName) has an active directory account, pulling out the domain name." $claimList = $user.Identities[$i].Claims | Get-Member | Where-Object { $_.MemberType -eq "NoteProperty" } | Select-Object -Property "Name" foreach ($claimName in $claimList) { $nameValue = $claimName.Name $claim = $user.Identities[$i].Claims.$nameValue if ($claim.Value.ToLower().Contains($AD_Domain.ToLower())) { Write-Host "The claim $nameValue for $($user.UserName) has the value $($claim.Value) which matches $AD_Domain. Updating this account." $foundActiveDirectoryRecordForUser = $true break; } } if ($foundActiveDirectoryRecordForUser -eq $true) { break; } } if ($foundActiveDirectoryRecordForUser -eq $true) { # This user record potentially needs to be updated, clone the user object so we can manipulate it (and so we have the original) $userRecordToUpdate = $user | ConvertTo-Json -Depth 10 | ConvertFrom-Json if ($RemoveActiveDirectoryRecords -eq $True) { # Grab any records that are not active directory $filteredOldRecords = $user.Identities | Where-Object { $_.IdentityProviderName -ne "Active Directory" } if ($null -ne $filteredOldRecords) { $userRecordToUpdate.Identities = @($filteredOldRecords) } else { $userRecordToUpdate.Identities = @() } } # Let's attempt to find a matching LDAP account $userNameToLookUp = "$($userRecordToUpdate.Username)" if ($userRecordToUpdate.Username -like "*@*") { $userNameToLookUp = ($userRecordToUpdate.Username -Split "@")[0] } elseif ($userRecordToUpdate.Username -like "*`\*") { $userNameToLookUp = ($userRecordToUpdate.Username -Split "\\")[1] } $expectedMatch = "$userNameToLookUp" If ($LDAP_UsernameLookup_IncludeDomain -eq $True) { $expectedMatch = "$($userNameToLookUp)@$($LDAP_Domain)" } $ldapMatchFound = $False 
Write-Host "Looking up the LDAP account $userNameToLookup in Octopus Deploy" $ldapResults = Invoke-RestMethod -Method GET -Uri "$octopusURL/api/externalusers/ldap?partialName=$([System.Web.HTTPUtility]::UrlEncode($userNameToLookUp))" -Headers $header $LdapIdentity = $null # Search LDAP Identities foreach ($identity in $ldapResults.Identities) { if ($identity.IdentityProviderName -eq "LDAP") { $claimList = $identity.Claims | Get-Member | Where-Object { $_.MemberType -eq "NoteProperty" } | Select-Object -Property "Name" foreach ($claimName in $claimList) { $claimName = $claimName.Name $claim = $identity.Claims.$ClaimName if ($null -ne $claim.Value -and $claim.Value.ToLower() -eq $expectedMatch.ToLower() -and $claim.IsIdentifyingClaim -eq $true) { Write-Host "Found the user's LDAP record, add that to Octopus Deploy" $LdapIdentity = $identity $ldapMatchFound = $true break; } } if ($ldapMatchFound) { break; } } } $foundExistingUserLdapMatch = $False if ($ldapMatchFound -eq $True) { # Check existing user identities for a matching LDAP already being present. for ($i = 0; $i -lt $user.Identities.Count; $i++) { if ($user.Identities[$i].IdentityProviderName -ieq "LDAP") { $claimList = $user.Identities[$i].Claims | Get-Member | Where-Object { $_.MemberType -eq "NoteProperty" } | Select-Object -Property "Name" foreach ($claimName in $claimList) { $nameValue = $claimName.Name $claim = $user.Identities[$i].Claims.$nameValue if ($null -ne $claim.Value -and $claim.Value.ToLower().Contains($LDAP_Domain.ToLower())) { $foundExistingUserLdapMatch = $true break; } } } if ($foundExistingUserLdapMatch -eq $True) { break; } } if ($foundExistingUserLdapMatch -eq $false) { $userRecordToUpdate.Identities += $LdapIdentity } else { Write-Host "Ans LDAP identity already exists on user '$($user.Username)'." 
} } $removalAdUpdateRequired = $foundActiveDirectoryRecordForUser -eq $True -and $RemoveActiveDirectoryRecords -eq $True $newLdapUpdateRequired = $ldapMatchFound -eq $True -and $foundExistingUserLdapMatch -eq $false if ($removalAdUpdateRequired -eq $True -or $newLdapUpdateRequired) { $userUpdateUri = "$OctopusUrl/api/users/$($userRecordToUpdate.Id)" $UserBody = $($userRecordToUpdate | ConvertTo-Json -Depth 10 -Compress) if ($WhatIf -eq $True) { Write-Host "WhatIf = True. Update for user '$($userRecordToUpdate.Username)' would have been:" Write-Host "$($UserBody)" } else { Write-Host "Updating user '$($userRecordToUpdate.Username)' in Octopus Deploy" Invoke-RestMethod -Method PUT -Uri $userUpdateUri -Headers $header -Body $UserBody | Out-Null } $recordsUpdated += 1 } else { Write-Host "No update for user '$($userRecordToUpdate.Username)' is required, skipping." } if ($recordsUpdated -ge $maxRecordsToUpdate) { break } } } if ($recordsUpdated -ge $maxRecordsToUpdate) { Write-Host "Reached the maximum number of records to update, stopping" break } } catch { Write-Error "An error occurred with user: $($user.Username) - $($_.Exception.ToString())" break; } } ```
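The script derives the LDAP lookup name by stripping the domain from UPN (`user@domain`) or down-level (`DOMAIN\user`) usernames, then optionally re-appending the LDAP domain. That normalization can be sketched as follows (hypothetical helper name, for illustration only):

```python
def ldap_candidate(username, ldap_domain=None):
    """Derive the LDAP lookup name from an Octopus username, mirroring the
    normalization in the script above: strip a UPN suffix (user@domain) or
    a down-level prefix (DOMAIN\\user), then optionally re-append the
    LDAP domain."""
    if '@' in username:
        username = username.split('@')[0]
    elif '\\' in username:
        username = username.split('\\')[1]
    if ldap_domain:
        # matches the $LDAP_UsernameLookup_IncludeDomain = $True behavior
        return '{0}@{1}'.format(username, ldap_domain)
    return username
```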
# Add variable set to a project

Source: https://octopus.com/docs/octopus-rest-api/examples/variables/add-library-set-to-project.md

This script demonstrates how to programmatically add a variable set to a project.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the project
- Name of the variable set

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"
$projectName = "MyProject"
$librarySetName = "MyLibrarySet"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get project
$project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName}

# Get variable set
$librarySet = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/libraryvariablesets/all" -Headers $header) | Where-Object {$_.Name -eq $librarySetName}

# Add the variable set
$project.IncludedLibraryVariableSetIds += $librarySet.Id

# Update the project
Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)" -Headers $header -Body ($project | ConvertTo-Json -Depth 10)
```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "c:\octopus.client\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $projectName = "MyProject" $librarySetName = "MyLibrarySet" $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project $project = $repositoryForSpace.Projects.FindByName($projectName) # Get variable set $librarySet = $repositoryForSpace.LibraryVariableSets.FindByName($librarySetName) # Add set to project $project.IncludedLibraryVariableSetIds += $librarySet.Id # Update project $repositoryForSpace.Projects.Modify($project) } catch { Write-Host $_.Exception.Message } ```
C# ```csharp #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; string spaceName = "default"; string projectName = "MyProject"; string librarySetName = "MyLibrarySet"; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get project var project = repositoryForSpace.Projects.FindByName(projectName); // Get variable set var librarySet = repositoryForSpace.LibraryVariableSets.FindByName(librarySetName); // Include variable set to project project.IncludedLibraryVariableSetIds.Add(librarySet.Id); // Update project repositoryForSpace.Projects.Modify(project); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    response = requests.get((uri + "?skip=" + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    items += results['Items']

    # Check to see if there are more results
    if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
        skip_count += results['ItemsPerPage']
        items += get_octopus_resource(uri, headers, skip_count)

    # Return results
    return items

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url/api'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

project_name = "MyProject"
library_set_name = "MyLibraryVariableSet"
space_name = "Default"

# Get space
uri = '{0}/spaces/all'.format(octopus_server_uri)
response = requests.get(uri, headers=headers)
response.raise_for_status()
spaces = json.loads(response.content.decode('utf-8'))
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get project
uri = '{0}/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)
project = next((x for x in projects if x['Name'] == project_name), None)

# Get variable set
uri = '{0}/{1}/libraryvariablesets'.format(octopus_server_uri, space['Id'])
library_sets = get_octopus_resource(uri, headers)
library_set = next((x for x in library_sets if x['Name'] == library_set_name), None)

if project != None:
    if library_set != None:
        # Add set to project
        project['IncludedLibraryVariableSetIds'].append(library_set['Id'])

        # Update project
        uri = '{0}/{1}/projects/{2}'.format(octopus_server_uri, space['Id'], project['Id'])
        response = requests.put(uri, headers=headers, json=project)
        response.raise_for_status()
    else:
        print("Variable Set {0} not found!".format(library_set_name))
else:
    print("Project {0} not found!".format(project_name))
```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" projectName := "MyProject" librarySetName := "MyLibrarySet" // Get reference to space space := GetSpace(apiURL, APIKey, spaceName) // Get reference to project project := GetProject(apiURL, APIKey, space, projectName) // Get reference to variable set librarySet := GetLibrarySet(apiURL, APIKey, space, librarySetName, 0) // Add set to project if project != nil { if librarySet != nil { // Create client client := octopusAuth(apiURL, APIKey, space.ID) project.IncludedLibraryVariableSets = append(project.IncludedLibraryVariableSets, librarySet.ID) client.Projects.Update(project) } } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } func GetLibrarySet(octopusURL *url.URL, APIKey string, 
space *octopusdeploy.Space, librarySetName string, skip int) *octopusdeploy.LibraryVariableSet { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) librarySetsQuery := octopusdeploy.LibraryVariablesQuery { PartialName: librarySetName, Skip: skip, } librarySets, err := client.LibraryVariableSets.Get(librarySetsQuery) if err != nil { log.Println(err) } if len(librarySets.Items) == librarySets.ItemsPerPage { // call again librarySet := GetLibrarySet(octopusURL, APIKey, space, librarySetName, (skip + len(librarySets.Items))) if librarySet != nil { return librarySet } } else { // Loop through returned items for _, librarySet := range librarySets.Items { if librarySet.Name == librarySetName { return librarySet } } } return nil } ```
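All of the variants above perform the same core mutation: append the variable set's ID to the project's `IncludedLibraryVariableSetIds` and save the project. A minimal sketch of that step (hypothetical helper; the duplicate check is an extra guard the scripts above do not perform, which makes re-running idempotent):

```python
def include_library_set(project, library_set_id):
    """Link a library variable set to a project resource (a dict as
    returned by the Octopus REST API), skipping IDs already linked."""
    ids = project.setdefault('IncludedLibraryVariableSetIds', [])
    if library_set_id not in ids:
        ids.append(library_set_id)
    return project
```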
# Add or update project variable

Source: https://octopus.com/docs/octopus-rest-api/examples/variables/add-update-project-variable.md

This script demonstrates how to programmatically add or update a project variable.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to use
- Name of the project
- The variable properties, including:
  - Variable name
  - Variable value
  - Variable type
  - If the variable is sensitive

## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } $spaceName = "Default" $projectName = "MyProject" $variable = @{ Name = "MyVariable" Value = "MyValue" Type = "String" IsSensitive = $false } # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName} # Get project $project = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header) | Where-Object {$_.Name -eq $projectName} # Get project variables $projectVariables = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/$($project.VariableSetId)" -Headers $header # Check to see if variable is already present $variableToUpdate = $projectVariables.Variables | Where-Object {$_.Name -eq $variable.Name} if ($null -eq $variableToUpdate) { # Create new object $variableToUpdate = New-Object -TypeName PSObject $variableToUpdate | Add-Member -MemberType NoteProperty -Name "Name" -Value $variable.Name $variableToUpdate | Add-Member -MemberType NoteProperty -Name "Value" -Value $variable.Value $variableToUpdate | Add-Member -MemberType NoteProperty -Name "Type" -Value $variable.Type $variableToUpdate | Add-Member -MemberType NoteProperty -Name "IsSensitive" -Value $variable.IsSensitive # Add to collection $projectVariables.Variables += $variableToUpdate $projectVariables.Variables } # Update the value $variableToUpdate.Value = $variable.Value # Update the collection Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/variables/$($project.VariableSetId)" -Headers $header -Body ($projectVariables | ConvertTo-Json -Depth 10) ```
PowerShell (Octopus.Client) ```powershell # Load octopus.client assembly Add-Type -Path "c:\octopus.client\Octopus.Client.dll" # Octopus variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "default" $projectName = "MyProject" $variable = @{ Name = "MyVariable" Value = "MyValue" Type = "String" IsSensitive = $false } $endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey $repository = New-Object Octopus.Client.OctopusRepository $endpoint $client = New-Object Octopus.Client.OctopusClient $endpoint try { # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) # Get project [Octopus.Client.Model.ProjectResource]$project = $repositoryForSpace.Projects.FindByName($projectName) # Get project variables $projectVariables = $repositoryForSpace.VariableSets.Get($project.VariableSetId) # Check to see if variable exists $variableToUpdate = ($projectVariables.Variables | Where-Object {$_.Name -eq $variable.Name}) if ($null -eq $variableToUpdate) { # Create new object $variableToUpdate = New-Object Octopus.Client.Model.VariableResource $variableToUpdate.Name = $variable.Name $variableToUpdate.IsSensitive = $variable.IsSensitive $variableToUpdate.Value = $variable.Value $variableToUpdate.Type = $variable.Type # Add to collection $projectVariables.Variables.Add($variableToUpdate) } else { # Update the value $variableToUpdate.Value = $variable.Value } # Update the project variable $repositoryForSpace.VariableSets.Modify($projectVariables) } catch { Write-Host $_.Exception.Message } ```
C# ```csharp #r "nuget: Octopus.Client" using Octopus.Client; using Octopus.Client.Model; // Declare working variables var octopusURL = "https://your-octopus-url"; var octopusAPIKey = "API-YOUR-KEY"; string spaceName = "default"; string projectName = "MyProject"; System.Collections.Hashtable variable = new System.Collections.Hashtable() { { "Name", "MyVariable" }, {"Value", "MyValue" }, {"Type", "String" }, {"IsSensitive", false } }; // Create repository object var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey); var repository = new OctopusRepository(endpoint); var client = new OctopusClient(endpoint); try { // Get space var space = repository.Spaces.FindByName(spaceName); var repositoryForSpace = client.ForSpace(space); // Get project var project = repositoryForSpace.Projects.FindByName(projectName); // Get project variables var projectVariables = repositoryForSpace.VariableSets.Get(project.VariableSetId); // Check to see if variable exists var variableToUpdate = projectVariables.Variables.FirstOrDefault(v => v.Name == (variable["Name"]).ToString()); if (variableToUpdate == null) { // Create new variable object variableToUpdate = new Octopus.Client.Model.VariableResource(); variableToUpdate.Name = variable["Name"].ToString(); variableToUpdate.Value = variable["Value"].ToString(); variableToUpdate.Type = (Octopus.Client.Model.VariableType)Enum.Parse(typeof(Octopus.Client.Model.VariableType), variable["Type"].ToString()); variableToUpdate.IsSensitive = bool.Parse(variable["IsSensitive"].ToString()); // Add to collection projectVariables.Variables.Add(variableToUpdate); } else { // Update value variableToUpdate.Value = variable["Value"].ToString(); } // Update collection repositoryForSpace.VariableSets.Modify(projectVariables); } catch (Exception ex) { Console.WriteLine(ex.Message); return; } ```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    response = requests.get((uri + "?skip=" + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        # The endpoint returned a single resource rather than a paged collection
        return results

    # Return results
    return items

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url/api'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

project_name = "MyProject"
space_name = "Default"
variable = {
    'Name': 'MyVariable',
    'Value': 'MyValue',
    'Type': 'String',
    'IsSensitive': False
}

# Get space
uri = '{0}/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get project
uri = '{0}/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)
project = next((x for x in projects if x['Name'] == project_name), None)

if project != None:
    # Get project variables
    uri = '{0}/{1}/variables/{2}'.format(octopus_server_uri, space['Id'], project['VariableSetId'])
    projectVariables = get_octopus_resource(uri, headers)

    # Check to see if the variable is already present
    projectVariable = next((x for x in projectVariables['Variables'] if x['Name'] == variable['Name']), None)

    if projectVariable == None:
        projectVariables['Variables'].append(variable)
    else:
        projectVariable['Value'] = variable['Value']
        projectVariable['Type'] = variable['Type']
        projectVariable['IsSensitive'] = variable['IsSensitive']

    # Update the variable set
    response = requests.put(uri, headers=headers, json=projectVariables)
    response.raise_for_status()
```
Go ```go package main import ( "fmt" "log" "net/url" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" variable := octopusdeploy.NewVariable("MyVariable") variable.IsSensitive = false variable.Type = "String" variable.Value = "MyValue" projectName := "MyProject" // Get reference to space space := GetSpace(apiURL, APIKey, spaceName) // Get project reference project := GetProject(apiURL, APIKey, space, projectName) // Get project variables projectVariables := GetProjectVariables(apiURL, APIKey, space, project) variableFound := false for i := 0; i < len(projectVariables.Variables); i++ { if projectVariables.Variables[i].Name == variable.Name { projectVariables.Variables[i].IsSensitive = variable.IsSensitive projectVariables.Variables[i].Type = variable.Type projectVariables.Variables[i].Value = variable.Value variableFound = true break } } if !variableFound { projectVariables.Variables = append(projectVariables.Variables, variable) } // Update target client := octopusAuth(apiURL, APIKey, space.ID) client.Variables.Update(project.ID, projectVariables) } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } func GetProject(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, projectName string) *octopusdeploy.Project { // Create client client := 
octopusAuth(octopusURL, APIKey, space.ID) projectsQuery := octopusdeploy.ProjectsQuery { Name: projectName, } // Get specific project object projects, err := client.Projects.Get(projectsQuery) if err != nil { log.Println(err) } for _, project := range projects.Items { if project.Name == projectName { return project } } return nil } func GetProjectVariables(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, project *octopusdeploy.Project) octopusdeploy.VariableSet { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) // Get project variables projectVariables, err := client.Variables.GetAll(project.ID) if err != nil { log.Println(err) } return projectVariables } ```
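Each variant above implements the same add-or-update rule against the project's variable set. A minimal sketch of that rule (hypothetical helper operating on the dict shape returned by the variables endpoint):

```python
def upsert_variable(variable_set, variable):
    """Add-or-update logic shared by the scripts above: variable_set is the
    dict returned by /api/{space}/variables/{id}; variable carries Name,
    Value, Type, and IsSensitive. Updates the existing entry in place, or
    appends a new one."""
    existing = next((v for v in variable_set['Variables']
                     if v['Name'] == variable['Name']), None)
    if existing is None:
        variable_set['Variables'].append(dict(variable))
    else:
        existing['Value'] = variable['Value']
        existing['Type'] = variable['Type']
        existing['IsSensitive'] = variable['IsSensitive']
    return variable_set
```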
# Clear sensitive variables Source: https://octopus.com/docs/octopus-rest-api/examples/variables/clear-sensitive-variables.md This script demonstrates how to programmatically clear all sensitive variables in Projects and Variable Sets in an Octopus instance. ## Usage Provide values for the following: - Octopus URL - Octopus API Key :::div{.warning} **This script will clear all sensitive variable values from an Octopus instance. Take care when running this script or one based on it.** ::: ## Script
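Every script below applies the same core transformation: fetch a variable collection, blank the `Value` of any variable flagged `IsSensitive`, then write the collection back. The blanking step can be sketched as a small pure function (Python; the helper name is illustrative, and the dictionaries mimic the shape of the `Variables` array returned by the variables endpoint):

```python
def clear_sensitive_values(variables):
    """Blank the value of any variable flagged IsSensitive; return how many were cleared."""
    cleared = 0
    for variable in variables:
        if variable.get('IsSensitive'):
            variable['Value'] = ''
            cleared += 1
    return cleared

# Sample payload shaped like a variable set's 'Variables' array
variables = [
    {'Name': 'DbPassword', 'Value': 'hunter2', 'IsSensitive': True},
    {'Name': 'SiteName', 'Value': 'guestbook', 'IsSensitive': False},
]
```

Tracking the cleared count is a convenience: only variable sets where at least one value was blanked need the follow-up `PUT` back to the server.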
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"

Function Clear-SensitiveVariables {
    # Define function variables
    param ($VariableCollection)

    # Loop through variables
    foreach ($variable in $VariableCollection) {
        # Check for sensitive
        if ($variable.IsSensitive) {
            $variable.Value = [string]::Empty
        }
    }

    # Return collection
    return $VariableCollection
}

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get all projects
$projects = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header

# Loop through projects
foreach ($project in $projects) {
    # Get variable set
    $variableSet = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/$($project.VariableSetId)" -Headers $header

    # Check for variables
    if ($variableSet.Variables.Count -gt 0) {
        $variableSet.Variables = Clear-SensitiveVariables -VariableCollection $variableSet.Variables

        # Update set
        Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/variables/$($project.VariableSetId)" -Body ($variableSet | ConvertTo-Json -Depth 10) -Headers $header
    }
}

# Get all library variable sets
$librarySets = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/libraryvariablesets/all" -Headers $header

# Loop through library variable sets
foreach ($librarySet in $librarySets) {
    # Get the variables belonging to the set
    $variableSet = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/$($librarySet.VariableSetId)" -Headers $header

    # Check for variables
    if ($variableSet.Variables.Count -gt 0) {
        $variableSet.Variables = Clear-SensitiveVariables -VariableCollection $variableSet.Variables

        # Update set
        Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/variables/$($librarySet.VariableSetId)" -Body ($variableSet | ConvertTo-Json -Depth 10) -Headers $header
    }
}
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

Function Clear-SensitiveVariables {
    # Define function variables
    param ($VariableSetId)

    # Get the variable set
    $variableSet = $repositoryForSpace.VariableSets.Get($VariableSetId)

    # Loop through variables
    foreach ($variable in $variableSet.Variables) {
        # Check for sensitive
        if ($variable.IsSensitive) {
            $variable.Value = [string]::Empty
        }
    }

    # Update set
    $repositoryForSpace.VariableSets.Modify($variableSet)
}

try {
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Loop through projects
    foreach ($project in $repositoryForSpace.Projects.GetAll()) {
        # Clear the sensitive ones
        Clear-SensitiveVariables -VariableSetId $project.VariableSetId
    }

    # Loop through variable sets
    foreach ($librarySet in $repositoryForSpace.LibraryVariableSets.GetAll()) {
        # Clear sensitive ones
        Clear-SensitiveVariables -VariableSetId $librarySet.VariableSetId
    }
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "default";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Loop through projects
    foreach (var project in repositoryForSpace.Projects.GetAll())
    {
        var variableSet = repositoryForSpace.VariableSets.Get(project.VariableSetId);
        foreach (var variable in variableSet.Variables)
        {
            if (variable.IsSensitive)
            {
                variable.Value = string.Empty;
            }
        }
        repositoryForSpace.VariableSets.Modify(variableSet);
    }

    // Loop through variable sets
    foreach (var librarySet in repositoryForSpace.LibraryVariableSets.FindAll())
    {
        var variableSet = repositoryForSpace.VariableSets.Get(librarySet.VariableSetId);
        foreach (var variable in variableSet.Variables)
        {
            if (variable.IsSensitive)
            {
                variable.Value = string.Empty;
            }
        }
        repositoryForSpace.VariableSets.Modify(variableSet);
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        # Not a paged collection; return the raw resource
        return results

    # return results
    return items

# Define Octopus server variables
octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}
space_name = "MySpace"

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

# Get all projects
uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)

for project in projects:
    uri = '{0}{1}'.format(octopus_server_uri, project['Links']['Variables'])
    projectVariables = get_octopus_resource(uri, headers)
    variablesUpdated = False

    for variable in projectVariables['Variables']:
        if variable['IsSensitive']:
            variable['Value'] = ""
            variablesUpdated = True

    if variablesUpdated:
        print('Clearing sensitive variables for project {0}'.format(project['Name']))
        uri = '{0}{1}'.format(octopus_server_uri, project['Links']['Variables'])
        response = requests.put(uri, headers=headers, json=projectVariables)
        response.raise_for_status()

# Get all variable sets
uri = '{0}/api/{1}/libraryvariablesets'.format(octopus_server_uri, space['Id'])
variableSets = get_octopus_resource(uri, headers)

for variableSet in variableSets:
    uri = '{0}{1}'.format(octopus_server_uri, variableSet['Links']['Variables'])
    libraryVariables = get_octopus_resource(uri, headers)
    variablesUpdated = False

    for variable in libraryVariables['Variables']:
        if variable['IsSensitive']:
            variable['Value'] = ""
            variablesUpdated = True

    if variablesUpdated:
        print('Clearing sensitive variables for variable set {0}'.format(variableSet['Name']))
        uri = '{0}{1}'.format(octopus_server_uri, variableSet['Links']['Variables'])
        response = requests.put(uri, headers=headers, json=libraryVariables)
        response.raise_for_status()
```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}

	APIKey := "API-YOUR-KEY"
	spaceName := "MySpace"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Get reference to all projects
	projects := GetProjects(apiURL, APIKey, space)

	// Loop through projects
	for i := 0; i < len(projects); i++ {
		projectVariables := GetVariables(apiURL, APIKey, space, projects[i].ID)
		variablesUpdated := false

		for j := 0; j < len(projectVariables.Variables); j++ {
			if projectVariables.Variables[j].IsSensitive {
				projectVariables.Variables[j].Value = ""
				variablesUpdated = true
			}
		}

		if variablesUpdated {
			fmt.Println("Variables for " + projects[i].Name + " have been updated")
			UpdateVariables(apiURL, APIKey, space, projectVariables.OwnerID, projectVariables)
		}
	}

	// Get reference to variable sets
	librarySets := GetLibraryVariableSets(apiURL, APIKey, space)

	// Loop through sets
	for i := 0; i < len(librarySets); i++ {
		librarySetVariables := GetVariables(apiURL, APIKey, space, librarySets[i].ID)
		variablesUpdated := false

		for j := 0; j < len(librarySetVariables.Variables); j++ {
			if librarySetVariables.Variables[j].IsSensitive {
				librarySetVariables.Variables[j].Value = ""
				variablesUpdated = true
			}
		}

		if variablesUpdated {
			fmt.Println("Variables for " + librarySets[i].Name + " have been updated")
			UpdateVariables(apiURL, APIKey, space, librarySetVariables.OwnerID, librarySetVariables)
		}
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}

func GetProjects(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space) []*octopusdeploy.Project {
	// Create client object
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Get all projects
	projects, err := client.Projects.GetAll()
	if err != nil {
		log.Println(err)
	}

	return projects
}

func GetVariables(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, ownerID string) octopusdeploy.VariableSet {
	// Create client object
	client := octopusAuth(octopusURL, APIKey, space.ID)

	// Retrieve variables
	variables, err := client.Variables.GetAll(ownerID)
	if err != nil {
		log.Println(err)
	}

	return variables
}

func GetLibraryVariableSets(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space) []*octopusdeploy.LibraryVariableSet {
	// Create client object
	client := octopusAuth(octopusURL, APIKey, space.ID)

	librarySets, err := client.LibraryVariableSets.GetAll()
	if err != nil {
		log.Println(err)
	}

	return librarySets
}

func UpdateVariables(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, ownerID string, variables octopusdeploy.VariableSet) {
	client := octopusAuth(octopusURL, APIKey, space.ID)

	variableSet, err := client.Variables.Update(ownerID, variables)
	if err != nil {
		log.Println(err)
	}

	fmt.Println(variableSet.ID + " updated")
}
```
# Find projects using variable set Source: https://octopus.com/docs/octopus-rest-api/examples/variables/find-projects-using-library-set.md This script demonstrates how to programmatically find all projects using a specific variable set. ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to search - Name of the Variable Set to search for ## Script
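The check at the heart of every variant below is simple membership: a project uses a library variable set if the set's Id appears in the project's `IncludedLibraryVariableSetIds` array. A minimal sketch of that check (Python; the helper name is illustrative, and the dictionaries mimic the `/projects/all` response shape):

```python
def projects_using_set(projects, library_set_id):
    """Return the names of projects that include the given library variable set."""
    return [project['Name'] for project in projects
            if library_set_id in project.get('IncludedLibraryVariableSetIds', [])]

# Sample payload shaped like the /projects/all response
projects = [
    {'Name': 'Web', 'IncludedLibraryVariableSetIds': ['LibraryVariableSets-1']},
    {'Name': 'API', 'IncludedLibraryVariableSetIds': []},
]
```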
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceName = "Default"
$librarySetName = "MyLibrarySet"

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

# Get variable set reference
$librarySet = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/libraryvariablesets/all" -Headers $header) | Where-Object {$_.Name -eq $librarySetName}

# Get all projects
$projects = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header

# Loop through projects
Write-Host "The following projects are using $librarySetName"
foreach ($project in $projects) {
    # Check to see if it's using the set
    if ($project.IncludedLibraryVariableSetIds -contains $librarySet.Id) {
        Write-Host "$($project.Name)"
    }
}
```
PowerShell (Octopus.Client)

```powershell
# Load octopus.client assembly
Add-Type -Path "path\to\Octopus.Client.dll"

# Octopus variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "default"
$librarySetName = "MyLibrarySet"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint $octopusURL, $octopusAPIKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint
$client = New-Object Octopus.Client.OctopusClient $endpoint

try {
    # Get space
    $space = $repository.Spaces.FindByName($spaceName)
    $repositoryForSpace = $client.ForSpace($space)

    # Get variable set
    $librarySet = $repositoryForSpace.LibraryVariableSets.FindByName($librarySetName)

    # Get projects
    $projects = $repositoryForSpace.Projects.GetAll()

    # Show all projects using set
    Write-Host "The following projects are using $librarySetName"
    foreach ($project in $projects) {
        if ($project.IncludedLibraryVariableSetIds -contains $librarySet.Id) {
            Write-Host "$($project.Name)"
        }
    }
}
catch {
    Write-Host $_.Exception.Message
}
```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;

// Declare working variables
var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
string spaceName = "default";
string librarySetName = "MyLibrarySet";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

try
{
    // Get space
    var space = repository.Spaces.FindByName(spaceName);
    var repositoryForSpace = client.ForSpace(space);

    // Get projects
    var projects = repositoryForSpace.Projects.GetAll();

    // Get variable set
    var librarySet = repositoryForSpace.LibraryVariableSets.FindByName(librarySetName);

    // Loop through projects
    Console.WriteLine(string.Format("The following projects are using {0}", librarySetName));
    foreach (var project in projects)
    {
        if (project.IncludedLibraryVariableSetIds.Contains(librarySet.Id))
        {
            Console.WriteLine(project.Name);
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    return;
}
```
Python3

```python
import json
import requests

octopus_server_uri = 'https://your-octopus-url/api'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

def get_octopus_resource(uri):
    response = requests.get(uri, headers=headers)
    response.raise_for_status()
    return json.loads(response.content.decode('utf-8'))

def get_by_name(uri, name):
    resources = get_octopus_resource(uri)
    return next((x for x in resources if x['Name'] == name), None)

space_name = 'Default'
library_set_name = 'Your variable set name'

space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name)
library_variable_set = get_by_name('{0}/{1}/libraryvariablesets/all'.format(octopus_server_uri, space['Id']), library_set_name)
library_variable_set_id = library_variable_set['Id']

projects = get_octopus_resource('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id']))

for project in projects:
    project_variable_sets = project['IncludedLibraryVariableSetIds']
    if library_variable_set_id in project_variable_sets:
        print('Project \'{0}\' is using variable set \'{1}\''.format(project['Name'], library_set_name))
```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}

	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	librarySetName := "LibrarySetName"

	// Get reference to space
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client object
	client := octopusAuth(apiURL, APIKey, space.ID)

	// Get variable set
	librarySet := GetLibrarySet(apiURL, APIKey, space, librarySetName, 0)

	// Get all projects
	projects, err := client.Projects.GetAll()
	if err != nil {
		log.Println(err)
	}

	fmt.Println("The following projects use variable set " + librarySetName)

	// Loop through projects
	for i := 0; i < len(projects); i++ {
		if contains(projects[i].IncludedLibraryVariableSets, librarySet.ID) {
			fmt.Println(projects[i].Name)
		}
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}

	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}

func contains(s []string, str string) bool {
	for _, v := range s {
		if v == str {
			return true
		}
	}

	return false
}

func GetLibrarySet(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, librarySetName string, skip int) *octopusdeploy.LibraryVariableSet {
	// Create client
	client := octopusAuth(octopusURL, APIKey, space.ID)

	librarySetsQuery := octopusdeploy.LibraryVariablesQuery{
		PartialName: librarySetName,
		Skip:        skip,
	}

	librarySets, err := client.LibraryVariableSets.Get(librarySetsQuery)
	if err != nil {
		log.Println(err)
	}

	// Loop through returned items
	for _, librarySet := range librarySets.Items {
		if librarySet.Name == librarySetName {
			return librarySet
		}
	}

	// If the page was full, there may be more results
	if len(librarySets.Items) == librarySets.ItemsPerPage {
		return GetLibrarySet(octopusURL, APIKey, space, librarySetName, skip+len(librarySets.Items))
	}

	return nil
}
```
# Find variable usage Source: https://octopus.com/docs/octopus-rest-api/examples/variables/find-variable-usage.md This script demonstrates how to programmatically find usages of a variable in all project variable sets (either a named match, or referenced in another variable), and optionally any deployment process or runbook processes. :::div{.hint} **Limitations:** Please note the limitations with this example: - It's not possible to use the REST API to search through sensitive variable values, as these values will be returned as `null`. - Variables that are referenced inside of any packages included as part of a deployment or runbook are not searched. ::: ## Usage Provide values for the following: - Octopus URL - Octopus API Key - Name of the space to search - Name of the variable to search for - Boolean value to toggle searching in a project's deployment process - Boolean value to toggle searching in a project's runbook processes - (Optional) Boolean value to toggle searching in variable sets - (Optional) path to export the results to a csv file ## Script
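The project-variable part of the search reduces to two matches per variable: an exact (case-insensitive) match on the variable's name, and a case-insensitive substring match for the OctoStache reference `#{Name}` inside other variables' values. A minimal sketch of that matching step (Python; the helper name is illustrative, and the dictionaries mimic entries in a variable set's `Variables` array):

```python
def find_variable_usage(variables, name):
    """Classify how a variable is used within a variable collection."""
    matches = []
    reference = '#{' + name + '}'
    for variable in variables:
        if variable['Name'].lower() == name.lower():
            # The variable itself is defined here
            matches.append(('Named', variable['Name']))
        elif variable.get('Value') and reference.lower() in variable['Value'].lower():
            # Another variable references it via OctoStache syntax
            matches.append(('Referenced', variable['Name']))
    return matches

# Sample payload shaped like a variable set's 'Variables' array
variables = [
    {'Name': 'MyProject.Variable', 'Value': 'abc'},
    {'Name': 'ConnectionString', 'Value': 'Server=#{MyProject.Variable};'},
    {'Name': 'Unrelated', 'Value': 'x'},
]
```

The full scripts below apply the same two checks, then extend the value check to the serialized properties of deployment and runbook steps.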
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } # Specify the Space to search in $spaceName = "Default" # Specify the Variable to find, without OctoStache syntax # e.g. For #{MyProject.Variable} -> use MyProject.Variable $variableToFind = "MyProject.Variable" # Search through Project's Deployment Processes? $searchDeploymentProcesses = $True # Search through Project's Runbook Processes? $searchRunbooksProcesses = $True # Search through Variable Set values? $searchVariableSets = $False # Optional: set a path to export to csv $csvExportPath = "" $variableTracking = @() $octopusURL = $octopusURL.TrimEnd('/') # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object { $_.Name -eq $spaceName } Write-Host "Looking for usages of variable named $variableToFind in space: '$spaceName'" # Function to process deployment steps function Process-DeploymentSteps { param( $steps, $project, $gitRef = $null ) $results = @() # Loop through steps foreach ($step in $steps) { $props = $step | Get-Member | Where-Object { $_.MemberType -eq "NoteProperty" } foreach ($prop in $props) { $propName = $prop.Name $json = $step.$propName | ConvertTo-Json -Compress -Depth 10 if ($null -ne $json -and ($json -like "*$variableToFind*")) { $result = [pscustomobject]@{ Project = $project.Name VariableSet = $null MatchType = "Step" Context = $step.Name Property = $propName AdditionalContext = $null Link = "$octopusURL$($project.Links.Web)/deployments/process/steps?actionId=$($step.Actions[0].Id)" } if ($gitRef) { $result | Add-Member -MemberType NoteProperty -Name "GitRef" -Value $gitRef } $results += $result } } } return $results } # Function to process runbook steps function Process-RunbookSteps { param( $steps, $project, $runbook, $gitRef = $null ) $results = @() # Loop 
through steps foreach ($step in $steps) { $props = $step | Get-Member | Where-Object { $_.MemberType -eq "NoteProperty" } foreach ($prop in $props) { $propName = $prop.Name $json = $step.$propName | ConvertTo-Json -Compress -Depth 10 if ($null -ne $json -and ($json -like "*$variableToFind*")) { $result = [pscustomobject]@{ Project = $project.Name VariableSet = $null MatchType = "Runbook Step" Context = $runbook.Name Property = $propName AdditionalContext = $step.Name Link = "$octopusURL$($project.Links.Web)/operations/runbooks/$($runbook.Id)/process/$($runbook.RunbookProcessId)/steps?actionId=$($step.Actions[0].Id)" } if ($gitRef) { $result | Add-Member -MemberType NoteProperty -Name "GitRef" -Value $gitRef } $results += $result } } } return $results } # Get all projects $projects = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header # Loop through projects foreach ($project in $projects) { Write-Host "Checking project '$($project.Name)'" # Get project variables $projectVariableSet = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/$($project.VariableSetId)" -Headers $header # Get all GitRefs for CaC project if ($project.IsVersionControlled) { $gitBranches = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/git/branches" -Headers $header $gitTags = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/git/tags" -Headers $header $gitRefs = @() foreach($branch in $gitBranches.Items) { $gitRefs += $branch.CanonicalName } foreach($tag in $gitTags.Items) { $gitRefs += $tag.CanonicalName } } # Check to see if variable is named in project variables. 
$matchingNamedVariables = $projectVariableSet.Variables | Where-Object { $_.Name -ieq "$variableToFind" } if ($null -ne $matchingNamedVariables) { foreach ($match in $matchingNamedVariables) { $result = [pscustomobject]@{ Project = $project.Name VariableSet = $null MatchType = "Named Project Variable" Context = $match.Name Property = $null AdditionalContext = $match.Value Link = "$octopusURL$($project.Links.Web)/variables" } # Add and de-dupe later $variableTracking += $result } } # Check to see if variable is referenced in other project variable values. $matchingValueVariables = $projectVariableSet.Variables | Where-Object { $_.Value -like "*#{$variableToFind}*" } if ($null -ne $matchingValueVariables) { foreach ($match in $matchingValueVariables) { $result = [pscustomobject]@{ Project = $project.Name VariableSet = $null MatchType = "Referenced Project Variable" Context = $match.Name Property = $null AdditionalContext = $match.Value Link = "$octopusURL$($project.Links.Web)/variables" } # Add and de-dupe later $variableTracking += $result } } # Search Deployment process if enabled if ($searchDeploymentProcesses -eq $True) { if ($project.IsVersionControlled) { # For CaC Projects, loop through GitRefs foreach ($gitRef in $gitRefs) { $escapedGitRef = [Uri]::EscapeDataString($gitRef) $processUrl = "$octopusURL/api/$($space.Id)/projects/$($project.Id)/$($escapedGitRef)/deploymentprocesses" # Get project deployment process $deploymentProcess = (Invoke-RestMethod -Method Get -Uri $processUrl -Headers $header) # Add and de-dupe later $variableTracking += Process-DeploymentSteps -steps $deploymentProcess.Steps -project $project -gitRef $gitRef } } else { # Get project deployment process $deploymentProcess = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/deploymentprocesses/$($project.DeploymentProcessId)" -Headers $header) # Add and de-dupe later $variableTracking += Process-DeploymentSteps -steps $deploymentProcess.Steps -project $project } } # Search 
Runbook processes if enabled if ($searchRunbooksProcesses -eq $True) { # Get project runbooks $runbooks = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/runbooks?skip=0&take=5000" -Headers $header) # Loop through each runbook foreach ($runbook in $runbooks.Items) { # For CaC Projects, loop through GitRefs if ($project.IsVersionControlled) { foreach ($gitRef in $gitRefs) { $escapedGitRef = [Uri]::EscapeDataString($gitRef) $processUrl = "$octopusURL/api/$($space.Id)/projects/$($project.Id)/$($escapedGitRef)/runbookprocesses/$($runbook.RunbookProcessId)" # Get runbook process $runbookProcess = (Invoke-RestMethod -Method Get -Uri $processUrl -Headers $header) # Add and de-dupe later $variableTracking += Process-RunbookSteps -steps $runbookProcess.Steps -project $project -runbook $runbook -gitRef $gitRef } } else { # Get runbook process $runbookProcess = (Invoke-RestMethod -Method Get -Uri "$octopusURL$($runbook.Links.RunbookProcesses)" -Headers $header) # Add and de-dupe later $variableTracking += Process-RunbookSteps -steps $runbookProcess.Steps -project $project -runbook $runbook } } } } if ($searchVariableSets -eq $True) { $VariableSets = (Invoke-RestMethod -Method Get "$OctopusURL/api/libraryvariablesets?contentType=Variables" -Headers $header).Items foreach ($VariableSet in $VariableSets) { Write-Host "Checking Variable Set: $($VariableSet.Name)" $variables = (Invoke-RestMethod -Method Get "$OctopusURL/$($VariableSet.Links.Variables)" -Headers $header).Variables | Where-Object { $_.Value -like "*#{$variableToFind}*" } $link = ($VariableSet.Links.Self -replace "/api", "app#") -replace "/libraryvariablesets/", "/library/variables/" foreach ($variable in $variables) { $result = [pscustomobject]@{ Project = $null VariableSet = $VariableSet.Name MatchType = "Variable Set" Context = $variable.Name Property = $null AdditionalContext = $variable.Value Link = "$octopusURL$($link)" } # Add and de-dupe later $variableTracking += 
$result } } } # De-dupe $variableTracking = @($variableTracking | Sort-Object -Property * -Unique) if ($variableTracking.Count -gt 0) { Write-Host "" Write-Host "Found $($variableTracking.Count) results:" $variableTracking if (![string]::IsNullOrWhiteSpace($csvExportPath)) { Write-Host "Exporting results to CSV file: $csvExportPath" $variableTracking | Export-Csv -Path $csvExportPath -NoTypeInformation } } ```
PowerShell (Octopus.Client) ```powershell # Load assembly Add-Type -Path 'path:\to\Octopus.Client.dll' $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $spaceName = "Default" $variableToFind = "MyProject.Variable" $searchDeploymentProcesses = $true $searchRunbookProcesses = $true $csvExportPath = "path:\to\CSVFile.csv" $variableTracking = @() $endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey) $repository = New-Object Octopus.Client.OctopusRepository($endpoint) $client = New-Object Octopus.Client.OctopusClient($endpoint) # Get space $space = $repository.Spaces.FindByName($spaceName) $repositoryForSpace = $client.ForSpace($space) Write-Host "Looking for usages of variable named $variableToFind in space $($space.Name)" # Get all projects $projects = $repositoryForSpace.Projects.GetAll() # Loop through projects foreach ($project in $projects) { Write-Host "Checking $($project.Name)" # Get variable set $projectVariableSet = $repositoryForSpace.VariableSets.Get($project.VariableSetId) # Find any name matches $matchingNamedVariable = $projectVariableSet.Variables | Where-Object {$_.Name -ieq "$variableToFind"} if ($null -ne $matchingNamedVariable) { foreach ($match in $matchingNamedVariable) { # Create new hash table $result = [pscustomobject]@{ Project = $project.Name MatchType = "Named Project Variable" Context = $match.Name Property = $null AdditionalContext = $match.Value Link = $project.Links["Variables"] } $variableTracking += $result } } # Find any value matches $matchingValueVariables = $projectVariableSet.Variables | Where-Object {$_.Value -like "*#{$variableToFind}*"} if ($null -ne $matchingValueVariables) { foreach ($match in $matchingValueVariables) { $result = [pscustomobject]@{ Project = $project.Name MatchType = "Referenced Project Variable" Context = $match.Name Property = $null AdditionalContext = $match.Value Link = $project.Links["Variables"] } $variableTracking += $result } } if 
($searchDeploymentProcesses -eq $true) { if ($project.IsVersionControlled -ne $true) { # Get deployment process $deploymentProcess = $repositoryForSpace.DeploymentProcesses.Get($project.DeploymentProcessId) # Loop through steps foreach ($step in $deploymentProcess.Steps) { foreach ($action in $step.Actions) { foreach ($property in $action.Properties.Keys) { if ($action.Properties[$property].Value -like "*$variableToFind*") { $result = [pscustomobject]@{ Project = $project.Name MatchType = "Step" Context = $step.Name Property = $property AdditionalContext = $null Link = "$octopusURL$($project.Links.Web)/deployments/process/steps?actionid=$($action.Id)" } $variableTracking += $result } } } } } else { Write-Host "$($project.Name) is version controlled, skipping searching the deployment process." } } if ($searchRunbookProcesses -eq $true) { # Get project runbooks $runbooks = $repositoryForSpace.Projects.GetAllRunbooks($project) # Loop through runbooks foreach ($runbook in $runbooks) { # Get Runbook process $runbookProcess = $repositoryForSpace.RunbookProcesses.Get($runbook.RunbookProcessId) foreach ($step in $runbookProcess.Steps) { foreach ($action in $step.Actions) { foreach ($property in $action.Properties.Keys) { if ($action.Properties[$property].Value -like "*$variableToFind*") { $result = [pscustomobject]@{ Project = $project.Name MatchType = "Runbook Step" Context = $runbook.Name Property = $property AdditionalContext = $step.Name Link = "$octopusURL$($project.Links.Web)/operations/runbooks/$($runbook.Id)/process/$($runbook.RunbookProcessId)/steps?actionId=$($action.Id)" } $variableTracking += $result } } } } } } } # De-duplicate $variableTracking = @($variableTracking | Sort-Object -Property * -Unique) if ($variableTracking.Count -gt 0) { Write-Host "" Write-Host "Found $($variableTracking.Count) results:" $variableTracking if(![string]::IsNullOrWhiteSpace($csvExportPath)) { Write-Host "Exporting results to CSV file: $csvExportPath" $variableTracking | 
Export-Csv -Path $csvExportPath -NoTypeInformation } } ```
C#

```csharp
// If using .net Core, be sure to add the NuGet package of System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

class VariableResult
{
    public string Project { get; set; }
    public string MatchType { get; set; }
    public string Context { get; set; }
    public string Property { get; set; }
    public string AdditionalContext { get; set; }
    public string Link { get; set; }
}

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "Default";
string variableToFind = "MyProject.Variable";
bool searchDeploymentProcess = true;
bool searchRunbookProcess = true;
string csvExportPath = "path:\\to\\variable.csv";

var variableTracking = new System.Collections.Generic.List<VariableResult>();

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

// Get space repository
var space = repository.Spaces.FindByName(spaceName);
var repositoryForSpace = client.ForSpace(space);

Console.WriteLine(string.Format("Looking for usages of variable named {0} in space {1}", variableToFind, space.Name));

// Get all projects
var projects = repositoryForSpace.Projects.GetAll();

// Loop through projects
foreach (var project in projects)
{
    Console.WriteLine(string.Format("Checking {0}", project.Name));

    // Get the project variable set
    var projectVariableSet = repositoryForSpace.VariableSets.Get(project.VariableSetId);

    // Match on name
    var matchingNameVariable = projectVariableSet.Variables.Where(v => v.Name.ToLower().Contains(variableToFind.ToLower()));
    if (matchingNameVariable != null)
    {
        // Loop through results
        foreach (var match in matchingNameVariable)
        {
            VariableResult result = new VariableResult();
            result.Project = project.Name;
            result.MatchType = "Named Project Variable";
            result.Context = match.Name;
            result.Property = null;
            result.AdditionalContext = match.Value;
            result.Link = project.Links["Variables"];
            if (!variableTracking.Contains(result))
            {
                variableTracking.Add(result);
            }
        }
    }

    // Match on value
    var matchingValueVariable = projectVariableSet.Variables.Where(v => v.Value != null && v.Value.ToLower().Contains(variableToFind.ToLower()));
    if (matchingValueVariable != null)
    {
        // Loop through results
        foreach (var match in matchingValueVariable)
        {
            VariableResult result = new VariableResult();
            result.Project = project.Name;
            result.MatchType = "Referenced Project Variable";
            result.Context = match.Name;
            result.Property = null;
            result.AdditionalContext = match.Value;
            result.Link = project.Links["Variables"];
            if (!variableTracking.Contains(result))
            {
                variableTracking.Add(result);
            }
        }
    }

    if (searchDeploymentProcess)
    {
        if (!project.IsVersionControlled)
        {
            // Get deployment process
            var deploymentProcess = repositoryForSpace.DeploymentProcesses.Get(project.DeploymentProcessId);

            // Loop through steps
            foreach (var step in deploymentProcess.Steps)
            {
                // Loop through actions
                foreach (var action in step.Actions)
                {
                    // Loop through properties
                    foreach (var property in action.Properties.Keys)
                    {
                        if (action.Properties[property].Value != null && action.Properties[property].Value.ToLower().Contains(variableToFind.ToLower()))
                        {
                            VariableResult result = new VariableResult();
                            result.Project = project.Name;
                            result.MatchType = "Step";
                            result.Context = step.Name;
                            result.Property = property;
                            result.AdditionalContext = null;
                            result.Link = string.Format("{0}{1}/deployments/process/steps?actionid={2}", octopusURL, project.Links["Web"], action.Id);
                            if (!variableTracking.Contains(result))
                            {
                                variableTracking.Add(result);
                            }
                        }
                    }
                }
            }
        }
        else
        {
            Console.WriteLine(string.Format("{0} is version controlled, skipping searching the deployment process.", project.Name));
        }
    }

    if (searchRunbookProcess)
    {
        // Get project runbooks
        var runbooks = repositoryForSpace.Projects.GetAllRunbooks(project);

        // Loop through runbooks
        foreach (var runbook in runbooks)
        {
            // Get runbook process
            var runbookProcess = repositoryForSpace.RunbookProcesses.Get(runbook.RunbookProcessId);

            // Loop through steps
            foreach (var step in runbookProcess.Steps)
            {
                foreach (var action in step.Actions)
                {
                    foreach (var property in action.Properties.Keys)
                    {
                        if (action.Properties[property].Value != null && action.Properties[property].Value.ToLower().Contains(variableToFind.ToLower()))
                        {
                            VariableResult result = new VariableResult();
                            result.Project = project.Name;
                            result.MatchType = "Runbook Step";
                            result.Context = runbook.Name;
                            result.Property = property;
                            result.AdditionalContext = step.Name;
                            result.Link = string.Format("{0}{1}/operations/runbooks/{2}/process/{3}/steps?actionId={4}", octopusURL, project.Links["Web"], runbook.Id, runbookProcess.Id, action.Id);
                            if (!variableTracking.Contains(result))
                            {
                                variableTracking.Add(result);
                            }
                        }
                    }
                }
            }
        }
    }
}

Console.WriteLine(string.Format("Found {0} results", variableTracking.Count.ToString()));
if (variableTracking.Count > 0)
{
    foreach (var result in variableTracking)
    {
        var header = new System.Collections.Generic.List<string>();
        var row = new System.Collections.Generic.List<string>();
        var isFirstRow = variableTracking.IndexOf(result) == 0;
        var properties = result.GetType().GetProperties();
        foreach (var property in properties)
        {
            Console.WriteLine(string.Format("{0}: {1}", property.Name, property.GetValue(result)));
            if (isFirstRow)
            {
                header.Add(property.Name);
            }
            row.Add((property.GetValue(result) == null ? string.Empty : property.GetValue(result).ToString()));
        }
        if (!string.IsNullOrWhiteSpace(csvExportPath))
        {
            using (System.IO.StreamWriter csvFile = new System.IO.StreamWriter(csvExportPath, true))
            {
                if (isFirstRow)
                {
                    // Write header
                    csvFile.WriteLine(string.Join(",", header.ToArray()));
                }
                // Write result
                csvFile.WriteLine(string.Join(",", row.ToArray()));
            }
        }
    }
}
```
Python3

```python
import json
import requests
import csv

octopus_server_uri = 'https://your-octopus-url/api'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

def get_octopus_resource(uri):
    response = requests.get(uri, headers=headers)
    response.raise_for_status()
    return json.loads(response.content.decode('utf-8'))

def get_by_name(uri, name):
    resources = get_octopus_resource(uri)
    return next((x for x in resources if x['Name'] == name), None)

# Specify the Space to search in
space_name = 'Default'

# Specify the Variable to find, without OctoStache syntax
# e.g. for #{MyProject.Variable} -> use MyProject.Variable
variable_name = 'MyProject.Variable'

# Search through each project's deployment process?
search_deployment_processes = True

# Search through each project's runbook processes?
search_runbook_processes = True

# Optional: set a path to export to CSV
csv_export_path = ''

variable_tracker = []

octopus_server_uri = octopus_server_uri.rstrip('/')
octopus_server_base_uri = octopus_server_uri.rstrip('api')

space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name)
print('Looking for usages of variable named \'{0}\' in space \'{1}\''.format(variable_name, space_name))

projects = get_octopus_resource('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id']))

for project in projects:
    project_name = project['Name']
    project_web_uri = project['Links']['Web'].lstrip('/')
    print('Checking project \'{0}\''.format(project_name))

    project_variable_set = get_octopus_resource('{0}/{1}/variables/{2}'.format(octopus_server_uri, space['Id'], project['VariableSetId']))

    # Check to see if the variable is named in project variables.
    matching_named_variables = [variable for variable in project_variable_set['Variables'] if variable_name in variable['Name']]
    for variable in matching_named_variables:
        tracked_variable = {
            'Project': project_name,
            'MatchType': 'Named Project Variable',
            'Context': variable['Name'],
            'AdditionalContext': None,
            'Property': None,
            'Link': '{0}{1}/variables'.format(octopus_server_base_uri, project_web_uri)
        }
        if tracked_variable not in variable_tracker:
            variable_tracker.append(tracked_variable)

    # Check to see if the variable is referenced in other project variable values.
    matching_value_variables = [variable for variable in project_variable_set['Variables'] if variable['Value'] is not None and variable_name in variable['Value']]
    for variable in matching_value_variables:
        tracked_variable = {
            'Project': project_name,
            'MatchType': 'Referenced Project Variable',
            'Context': variable['Name'],
            'AdditionalContext': variable['Value'],
            'Property': None,
            'Link': '{0}{1}/variables'.format(octopus_server_base_uri, project_web_uri)
        }
        if tracked_variable not in variable_tracker:
            variable_tracker.append(tracked_variable)

    # Search the deployment process if enabled
    if search_deployment_processes:
        deployment_process = get_octopus_resource('{0}/{1}/deploymentprocesses/{2}'.format(octopus_server_uri, space['Id'], project['DeploymentProcessId']))
        for step in deployment_process['Steps']:
            for step_key in step.keys():
                step_property_value = str(step[step_key])
                if step_property_value is not None and variable_name in step_property_value:
                    tracked_variable = {
                        'Project': project_name,
                        'MatchType': 'Step',
                        'Context': step['Name'],
                        'Property': step_key,
                        'AdditionalContext': None,
                        'Link': '{0}{1}/deployments/process/steps?actionId={2}'.format(octopus_server_base_uri, project_web_uri, step['Actions'][0]['Id'])
                    }
                    if tracked_variable not in variable_tracker:
                        variable_tracker.append(tracked_variable)

    # Search runbook processes if enabled
    if search_runbook_processes:
        runbooks_resource = get_octopus_resource('{0}/{1}/projects/{2}/runbooks?skip=0&take=5000'.format(octopus_server_uri, space['Id'], project['Id']))
        runbooks = runbooks_resource['Items']
        for runbook in runbooks:
            runbook_processes_link = runbook['Links']['RunbookProcesses']
            runbook_process = get_octopus_resource('{0}/{1}'.format(octopus_server_base_uri, runbook_processes_link))
            for step in runbook_process['Steps']:
                for step_key in step.keys():
                    step_property_value = str(step[step_key])
                    if step_property_value is not None and variable_name in step_property_value:
                        tracked_variable = {
                            'Project': project_name,
                            'MatchType': 'Runbook Step',
                            'Context': runbook['Name'],
                            'Property': step_key,
                            'AdditionalContext': step['Name'],
                            'Link': '{0}{1}/operations/runbooks/{2}/process/{3}/steps?actionId={4}'.format(octopus_server_base_uri, project_web_uri, runbook['Id'], runbook['RunbookProcessId'], step['Actions'][0]['Id'])
                        }
                        if tracked_variable not in variable_tracker:
                            variable_tracker.append(tracked_variable)

results_count = len(variable_tracker)
if results_count > 0:
    print('')
    print('Found {0} results:'.format(results_count))
    for tracked_variable in variable_tracker:
        print('Project           : {0}'.format(tracked_variable['Project']))
        print('MatchType         : {0}'.format(tracked_variable['MatchType']))
        print('Context           : {0}'.format(tracked_variable['Context']))
        print('AdditionalContext : {0}'.format(tracked_variable['AdditionalContext']))
        print('Property          : {0}'.format(tracked_variable['Property']))
        print('Link              : {0}'.format(tracked_variable['Link']))
        print('')
    if csv_export_path:
        with open(csv_export_path, mode='w') as csv_file:
            fieldnames = ['Project', 'MatchType', 'Context', 'AdditionalContext', 'Property', 'Link']
            writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
            writer.writeheader()
            for tracked_variable in variable_tracker:
                writer.writerow(tracked_variable)
```
Go

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/url"
	"os"
	"reflect"
	"strconv"
	"strings"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

type VariableResult struct {
	Project           string
	MatchType         string
	Context           string
	Property          string
	AdditionalContext string
	Link              string
}

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	variableToFind := "MyProject.Variable"
	searchDeploymentProcess := true
	searchRunbookProcess := true
	csvExportPath := "path:\\to\\variable.csv"

	// Create client object
	client := octopusAuth(apiURL, APIKey, "")

	// Get space
	space := GetSpace(apiURL, APIKey, spaceName)
	client = octopusAuth(apiURL, APIKey, space.ID)

	variableTracking := []VariableResult{}

	// Get projects
	projects, err := client.Projects.GetAll()
	if err != nil {
		log.Println(err)
	}

	// Loop through projects
	for _, project := range projects {
		fmt.Printf("Checking %[1]s \n", project.Name)

		// Get variables
		projectVariables, err := client.Variables.GetAll(project.ID)
		if err != nil {
			log.Println(err)
		}

		for _, variable := range projectVariables.Variables {
			// Match on name
			if strings.Contains(variable.Name, variableToFind) {
				result := VariableResult{}
				result.Project = project.Name
				result.MatchType = "Named Project Variable"
				result.Context = variable.Name
				result.Property = ""
				result.AdditionalContext = variable.Value
				result.Link = project.Links["Variables"]
				if !arrayContains(variableTracking, result) {
					variableTracking = append(variableTracking, result)
				}
			}

			// Match on value
			if strings.Contains(variable.Value, variableToFind) {
				result := VariableResult{}
				result.Project = project.Name
				result.MatchType = "Referenced Project Variable"
				result.Context = variable.Name
				result.Property = ""
				result.AdditionalContext = variable.Value
				result.Link = project.Links["Variables"]
				if !arrayContains(variableTracking, result) {
					variableTracking = append(variableTracking, result)
				}
			}
		}

		if searchDeploymentProcess {
			if !project.IsVersionControlled {
				// Get deployment process
				deploymentProcess, err := client.DeploymentProcesses.GetByID(project.DeploymentProcessID)
				if err != nil {
					log.Println(err)
				}
				for _, step := range deploymentProcess.Steps {
					for _, action := range step.Actions {
						for property := range action.Properties {
							if strings.Contains(action.Properties[property].Value, variableToFind) {
								result := VariableResult{}
								result.Project = project.Name
								result.MatchType = "Step"
								result.Context = step.Name
								result.Property = property
								result.AdditionalContext = ""
								result.Link = apiURL.String() + project.Links["Web"] + "/deployments/process/steps?actionId=" + action.ID
								if !arrayContains(variableTracking, result) {
									variableTracking = append(variableTracking, result)
								}
							}
						}
					}
				}
			} else {
				fmt.Printf("%[1]s is version controlled, skipping searching deployment process \n", project.Name)
			}
		}

		if searchRunbookProcess {
			// Get project runbooks
			runbooks := GetRunbooks(client, project)

			// Loop through runbooks
			for _, runbook := range runbooks {
				// Get runbook process
				runbookProcess, err := client.RunbookProcesses.GetByID(runbook.RunbookProcessID)
				if err != nil {
					log.Println(err)
				}
				for _, step := range runbookProcess.Steps {
					for _, action := range step.Actions {
						for property := range action.Properties {
							if strings.Contains(action.Properties[property].Value, variableToFind) {
								result := VariableResult{}
								result.Project = project.Name
								result.MatchType = "Runbook Step"
								result.Context = runbook.Name
								result.Property = property
								result.AdditionalContext = step.Name
								result.Link = apiURL.String() + project.Links["Web"] + "/operations/runbooks/" + runbook.ID + "/process/" + runbook.RunbookProcessID + "/steps?actionId=" + action.ID
								if !arrayContains(variableTracking, result) {
									variableTracking = append(variableTracking, result)
								}
							}
						}
					}
				}
			}
		}
	}

	if len(variableTracking) > 0 {
		fmt.Printf("Found %[1]s results \n", strconv.Itoa(len(variableTracking)))
		for i := 0; i < len(variableTracking); i++ {
			row := []string{}
			header := []string{}
			isFirstRow := i == 0
			e := reflect.ValueOf(&variableTracking[i]).Elem()
			for j := 0; j < e.NumField(); j++ {
				if isFirstRow {
					header = append(header, e.Type().Field(j).Name)
				}
				row = append(row, e.Field(j).Interface().(string))
			}
			if csvExportPath != "" {
				file, err := os.OpenFile(csvExportPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600)
				if err != nil {
					log.Println(err)
				}
				dataWriter := bufio.NewWriter(file)
				if isFirstRow {
					dataWriter.WriteString(strings.Join(header, ",") + "\n")
				}
				dataWriter.WriteString(strings.Join(row, ",") + "\n")
				dataWriter.Flush()
				file.Close()
			}
		}
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func arrayContains(array []VariableResult, result VariableResult) bool {
	for _, v := range array {
		if v == result {
			return true
		}
	}
	return false
}

func GetRunbooks(client *octopusdeploy.Client, project *octopusdeploy.Project) []*octopusdeploy.Runbook {
	// Get all runbooks, then filter to this project
	runbooks, err := client.Runbooks.GetAll()
	projectRunbooks := []*octopusdeploy.Runbook{}
	if err != nil {
		log.Println(err)
	}
	for i := 0; i < len(runbooks); i++ {
		if runbooks[i].ProjectID == project.ID {
			projectRunbooks = append(projectRunbooks, runbooks[i])
		}
	}
	return projectRunbooks
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}
	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}
	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}
```
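The scripts above all apply the same two matching rules to each project variable: a *named* match when the search term appears in a variable's name, and a *referenced* match when it appears inside another variable's value. The core logic can be distilled into a small, hypothetical helper (the function name and sample data below are illustrative and not part of any script; matching here is case-insensitive, as in the C# and PowerShell versions):

```python
# Hypothetical helper sketching the matching rules the scripts above implement.
def classify_matches(variables, term):
    """Return (name_matches, value_matches) for a list of variable dicts."""
    lowered = term.lower()
    # Named match: the search term appears in the variable's name
    name_matches = [v for v in variables if lowered in v['Name'].lower()]
    # Reference match: the search term appears inside a variable's value
    # (sensitive values come back as None, so they can never match)
    value_matches = [v for v in variables
                     if v.get('Value') and lowered in v['Value'].lower()]
    return name_matches, value_matches

sample = [
    {'Name': 'MyProject.Variable', 'Value': 'some-value'},
    {'Name': 'ConnectionString', 'Value': 'Server=#{MyProject.Variable};'},
    {'Name': 'ApiPassword', 'Value': None},  # sensitive: value returned as None
]

names, values = classify_matches(sample, 'MyProject.Variable')
print([v['Name'] for v in names])   # ['MyProject.Variable']
print([v['Name'] for v in values])  # ['ConnectionString']
```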
# Find variable value usage

Source: https://octopus.com/docs/octopus-rest-api/examples/variables/find-variable-value-usage.md

This script demonstrates how to programmatically find usages of a variable value in all projects and library variable sets. You could use this, for example, to locate connection strings that need updating when a server name or IP address changes.

:::div{.hint}
**Limitations:** Please note the limitations with this example:

- It's not possible to use the REST API to search through sensitive variable values.
:::

## Usage

Provide values for the following:

- Octopus URL
- Octopus API key
- Name of the space to search
- Variable value to search for
- Optional path to export the results to a CSV file

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

# Specify the Space to search in
$spaceName = "Default"

# Specify the Variable Value to find, without OctoStache syntax
$variableValueToFind = "MyTestValue"

# Optional: set a path to export to csv
$csvExportPath = ""

$variableTracking = @()
$octopusURL = $octopusURL.TrimEnd('/')

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

Write-Host "Looking for usages of variable value '$variableValueToFind' in space: '$spaceName'"

# Get variables from variable sets
$variableSets = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/libraryvariablesets?contentType=Variables" -Headers $header

foreach ($variableSet in $variableSets.Items) {
    Write-Host "Checking variable set '$($variableSet.Name)'"
    $variableSetVariables = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/variableset-$($variableSet.Id)" -Headers $header
    $matchingNamedVariables = $variableSetVariables.Variables | Where-Object {$_.Value -like "*$variableValueToFind*"}
    if ($null -ne $matchingNamedVariables) {
        foreach ($match in $matchingNamedVariables) {
            $result = [PSCustomObject]@{
                Project = $null
                VariableSet = $variableSet.Name
                MatchType = "Value in Variable Set"
                Context = $match.Value
                Property = $null
                AdditionalContext = $match.Name
            }
            $variableTracking += $result
        }
    }
}

# Get all projects
$projects = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header

# Loop through projects
foreach ($project in $projects) {
    Write-Host "Checking project '$($project.Name)'"

    # Get project variables
    $projectVariableSet = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/$($project.VariableSetId)" -Headers $header

    # Check to see if any project variable values contain the search value.
    $ProjectMatchingNamedVariables = $projectVariableSet.Variables | Where-Object {$_.Value -like "*$variableValueToFind*"}
    if ($null -ne $ProjectMatchingNamedVariables) {
        foreach ($match in $ProjectMatchingNamedVariables) {
            $result = [pscustomobject]@{
                Project = $project.Name
                VariableSet = $null
                MatchType = "Named Project Variable"
                Context = $match.Value
                Property = $null
                AdditionalContext = $match.Name
            }
            # Add to tracking list
            $variableTracking += $result
        }
    }
}

if ($variableTracking.Count -gt 0) {
    Write-Host ""
    Write-Host "Found $($variableTracking.Count) results:"
    $variableTracking
    if (![string]::IsNullOrWhiteSpace($csvExportPath)) {
        Write-Host "Exporting results to CSV file: $csvExportPath"
        $variableTracking | Export-Csv -Path $csvExportPath -NoTypeInformation
    }
}
```
PowerShell (Octopus.Client)

```powershell
# Load assembly
Add-Type -Path 'path:\to\Octopus.Client.dll'

$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "Default"
$variableValueToFind = "MyValue"
$csvExportPath = "c:\temp\variable.csv"

$variableTracking = @()

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
$client = New-Object Octopus.Client.OctopusClient($endpoint)

# Get space
$space = $repository.Spaces.FindByName($spaceName)
$repositoryForSpace = $client.ForSpace($space)

Write-Host "Looking for usages of variable value '$variableValueToFind' in space: $($space.Name)"

# Get all variable sets
$variableSets = $repositoryForSpace.LibraryVariableSets.GetAll()

# Loop through variable sets
foreach ($variableSet in $variableSets) {
    Write-Host "Checking variable set: $($variableSet.Name)"

    # Get variables associated with variable set
    $variables = $repositoryForSpace.VariableSets.Get($variableSet.VariableSetId)
    $matchingNamedVariables = $variables.Variables | Where-Object {$_.Value -like "*$variableValueToFind*"}
    if ($null -ne $matchingNamedVariables) {
        foreach ($match in $matchingNamedVariables) {
            $result = [PSCustomObject]@{
                Project = $null
                VariableSet = $variableSet.Name
                MatchType = "Value in Variable Set"
                Context = $match.Value
                Property = $null
                AdditionalContext = $match.Name
            }
            $variableTracking += $result
        }
    }
}

# Get all projects
$projects = $repositoryForSpace.Projects.GetAll()

# Loop through projects
foreach ($project in $projects) {
    Write-Host "Checking project '$($project.Name)'"

    # Get project variables
    $projectVariableSet = $repositoryForSpace.VariableSets.Get($project.VariableSetId)

    # Check to see if any project variable values contain the search value.
    $ProjectMatchingNamedVariables = $projectVariableSet.Variables | Where-Object {$_.Value -like "*$variableValueToFind*"}
    if ($null -ne $ProjectMatchingNamedVariables) {
        foreach ($match in $ProjectMatchingNamedVariables) {
            $result = [pscustomobject]@{
                Project = $project.Name
                VariableSet = $null
                MatchType = "Named Project Variable"
                Context = $match.Value
                Property = $null
                AdditionalContext = $match.Name
            }
            # Add to tracking list
            $variableTracking += $result
        }
    }
}

if ($variableTracking.Count -gt 0) {
    Write-Host ""
    Write-Host "Found $($variableTracking.Count) results:"
    $variableTracking
    if (![string]::IsNullOrWhiteSpace($csvExportPath)) {
        Write-Host "Exporting results to CSV file: $csvExportPath"
        $variableTracking | Export-Csv -Path $csvExportPath -NoTypeInformation
    }
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

class VariableResult
{
    // Properties describing a single match
    public string Project { get; set; }
    public string MatchType { get; set; }
    public string Context { get; set; }
    public string Property { get; set; }
    public string AdditionalContext { get; set; }
    public string VariableSet { get; set; }
}

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "Default";
string variableValueToFind = "MyValue";
string csvExportPath = "path:\\to\\variable.csv";

var variableTracking = new System.Collections.Generic.List<VariableResult>();

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

// Get space repository
var space = repository.Spaces.FindByName(spaceName);
var repositoryForSpace = client.ForSpace(space);

Console.WriteLine(string.Format("Looking for usages of variable value {0} in space {1}", variableValueToFind, space.Name));

// Get all variable sets
var variableSets = repositoryForSpace.LibraryVariableSets.FindAll();

// Loop through variable sets
foreach (var variableSet in variableSets)
{
    Console.WriteLine(string.Format("Checking variable set: {0}", variableSet.Name));

    // Get the variables
    var variables = repositoryForSpace.VariableSets.Get(variableSet.VariableSetId);

    // Get matches
    var matchingValueVariable = variables.Variables.Where(v => v.Value != null && v.Value.ToLower().Contains(variableValueToFind.ToLower()));
    if (matchingValueVariable != null)
    {
        foreach (var match in matchingValueVariable)
        {
            VariableResult result = new VariableResult();
            result.Project = null;
            result.VariableSet = variableSet.Name;
            result.MatchType = "Value in Variable Set";
            result.Context = match.Value;
            result.AdditionalContext = match.Name;

            if (!variableTracking.Contains(result))
            {
                variableTracking.Add(result);
            }
        }
    }
}

// Get all projects
var projects = repositoryForSpace.Projects.GetAll();

// Loop through projects
foreach (var project in projects)
{
    Console.WriteLine(string.Format("Checking {0}", project.Name));

    // Get the project variable set
    var projectVariableSet = repositoryForSpace.VariableSets.Get(project.VariableSetId);

    // Match on value
    var matchingValueVariable = projectVariableSet.Variables.Where(v => v.Value != null && v.Value.ToLower().Contains(variableValueToFind.ToLower()));
    if (matchingValueVariable != null)
    {
        // Loop through results
        foreach (var match in matchingValueVariable)
        {
            VariableResult result = new VariableResult();
            result.Project = project.Name;
            result.VariableSet = null;
            result.MatchType = "Named Project Variable";
            result.Context = match.Value;
            result.Property = null;
            result.AdditionalContext = match.Name;

            if (!variableTracking.Contains(result))
            {
                variableTracking.Add(result);
            }
        }
    }
}

Console.WriteLine(string.Format("Found {0} results", variableTracking.Count));

if (variableTracking.Count > 0)
{
    foreach (var result in variableTracking)
    {
        var header = new System.Collections.Generic.List<string>();
        var row = new System.Collections.Generic.List<string>();
        var isFirstRow = variableTracking.IndexOf(result) == 0;
        var properties = result.GetType().GetProperties();

        foreach (var property in properties)
        {
            Console.WriteLine(string.Format("{0}: {1}", property.Name, property.GetValue(result)));
            if (isFirstRow)
            {
                header.Add(property.Name);
            }
            row.Add(property.GetValue(result) == null ? string.Empty : property.GetValue(result).ToString());
        }

        if (!string.IsNullOrWhiteSpace(csvExportPath))
        {
            using (System.IO.StreamWriter csvFile = new System.IO.StreamWriter(csvExportPath, true))
            {
                if (isFirstRow)
                {
                    // Write header
                    csvFile.WriteLine(string.Join(",", header.ToArray()));
                }
                // Write result
                csvFile.WriteLine(string.Join(",", row.ToArray()));
            }
        }
    }
}
```
Python3

```python
import json
import requests
import csv

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if hasattr(results, 'keys') and 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)
    else:
        return results

    # Return results
    return items

# Specify the Space to search in
space_name = 'Default'

# Specify the Variable value to find, without OctoStache syntax
variable_value = 'MyValue'

# Optional: set a path to export to CSV
csv_export_path = 'path:\\to\\variable.csv'

variable_tracker = []

uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

print('Looking for usages of variable value \'{0}\' in space \'{1}\''.format(variable_value, space_name))

uri = '{0}/api/{1}/projects'.format(octopus_server_uri, space['Id'])
projects = get_octopus_resource(uri, headers)

for project in projects:
    project_name = project['Name']
    project_web_uri = project['Links']['Web'].lstrip('/')
    print('Checking project \'{0}\''.format(project_name))

    uri = '{0}/api/{1}/variables/{2}'.format(octopus_server_uri, space['Id'], project['VariableSetId'])
    project_variable_set = get_octopus_resource(uri, headers)

    # Check to see if any project variable values contain the search value.
    matching_value_variables = [variable for variable in project_variable_set['Variables'] if variable['Value'] is not None and variable_value in variable['Value']]
    for variable in matching_value_variables:
        tracked_variable = {
            'Project': project_name,
            'MatchType': 'Named Project Variable',
            'Context': variable['Name'],
            'AdditionalContext': None,
            'Property': None,
            'VariableSet': None
        }
        if tracked_variable not in variable_tracker:
            variable_tracker.append(tracked_variable)

# Get variable sets
uri = '{0}/api/{1}/libraryvariablesets?contentType=Variables'.format(octopus_server_uri, space['Id'])
variable_sets = get_octopus_resource(uri, headers)

for variable_set in variable_sets:
    uri = '{0}/api/{1}/variables/{2}'.format(octopus_server_uri, space['Id'], variable_set['VariableSetId'])
    variables = get_octopus_resource(uri, headers)
    matching_value_variables = [variable for variable in variables['Variables'] if variable['Value'] is not None and variable_value in variable['Value']]
    for variable in matching_value_variables:
        tracked_variable = {
            'Project': None,
            'VariableSet': variable_set['Name'],
            'MatchType': 'Value in Variable Set',
            'Context': variable['Value'],
            'Property': None,
            'AdditionalContext': variable['Name']
        }
        if tracked_variable not in variable_tracker:
            variable_tracker.append(tracked_variable)

results_count = len(variable_tracker)
if results_count > 0:
    print('')
    print('Found {0} results:'.format(results_count))
    for tracked_variable in variable_tracker:
        print('Project           : {0}'.format(tracked_variable['Project']))
        print('MatchType         : {0}'.format(tracked_variable['MatchType']))
        print('Context           : {0}'.format(tracked_variable['Context']))
        print('AdditionalContext : {0}'.format(tracked_variable['AdditionalContext']))
        print('Property          : {0}'.format(tracked_variable['Property']))
        print('VariableSet       : {0}'.format(tracked_variable['VariableSet']))
        print('')
    if csv_export_path:
        with open(csv_export_path, mode='w') as csv_file:
            fieldnames = ['Project', 'MatchType', 'Context', 'AdditionalContext', 'Property', 'VariableSet']
            writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
            writer.writeheader()
            for tracked_variable in variable_tracker:
                writer.writerow(tracked_variable)
```
Go

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/url"
	"os"
	"reflect"
	"strconv"
	"strings"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

type VariableResult struct {
	Project           string
	MatchType         string
	Context           string
	Property          string
	AdditionalContext string
	VariableSet       string
}

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"
	spaceName := "Default"
	variableValueToFind := "MyValue"
	csvExportPath := "path:\\to\\variable.csv"

	// Create client object
	client := octopusAuth(apiURL, APIKey, "")

	// Get space
	space := GetSpace(apiURL, APIKey, spaceName)
	client = octopusAuth(apiURL, APIKey, space.ID)

	variableTracking := []VariableResult{}

	// Get projects
	projects, err := client.Projects.GetAll()
	if err != nil {
		log.Println(err)
	}

	// Loop through projects
	for _, project := range projects {
		fmt.Printf("Checking %[1]s \n", project.Name)

		// Get variables
		projectVariables, err := client.Variables.GetAll(project.ID)
		if err != nil {
			log.Println(err)
		}

		for _, variable := range projectVariables.Variables {
			if strings.Contains(variable.Value, variableValueToFind) {
				result := VariableResult{}
				result.Project = project.Name
				result.MatchType = "Named Project Variable"
				result.Context = variable.Value
				result.Property = ""
				result.AdditionalContext = variable.Name
				result.VariableSet = ""
				if !arrayContains(variableTracking, result) {
					variableTracking = append(variableTracking, result)
				}
			}
		}
	}

	// Get variable sets
	variableSets, err := client.LibraryVariableSets.GetAll()
	if err != nil {
		log.Println(err)
	}

	// Loop through variable sets
	for _, variableSet := range variableSets {
		fmt.Printf("Checking variable set: %[1]s \n", variableSet.Name)

		// Get variables for set
		variables, err := client.Variables.GetAll(variableSet.ID)
		if err != nil {
			log.Println(err)
		}
		for _, variable := range variables.Variables {
			if strings.Contains(variable.Value, variableValueToFind) {
				result := VariableResult{}
				result.Project = ""
				result.MatchType = "Value in Variable Set"
				result.Context = variable.Value
				result.Property = ""
				result.AdditionalContext = variable.Name
				result.VariableSet = variableSet.Name
				if !arrayContains(variableTracking, result) {
					variableTracking = append(variableTracking, result)
				}
			}
		}
	}

	if len(variableTracking) > 0 {
		fmt.Printf("Found %[1]s results \n", strconv.Itoa(len(variableTracking)))
		for i := 0; i < len(variableTracking); i++ {
			row := []string{}
			header := []string{}
			isFirstRow := i == 0
			e := reflect.ValueOf(&variableTracking[i]).Elem()
			for j := 0; j < e.NumField(); j++ {
				if isFirstRow {
					header = append(header, e.Type().Field(j).Name)
				}
				row = append(row, e.Field(j).Interface().(string))
			}
			if csvExportPath != "" {
				file, err := os.OpenFile(csvExportPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600)
				if err != nil {
					log.Println(err)
				}
				dataWriter := bufio.NewWriter(file)
				if isFirstRow {
					dataWriter.WriteString(strings.Join(header, ",") + "\n")
				}
				dataWriter.WriteString(strings.Join(row, ",") + "\n")
				dataWriter.Flush()
				file.Close()
			}
		}
	}
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func arrayContains(array []VariableResult, result VariableResult) bool {
	for _, v := range array {
		if v == result {
			return true
		}
	}
	return false
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")
	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}
	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}
	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}
	return nil
}
```
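One caveat worth surfacing in your own reports: sensitive values come back as `null` from the API, so a value search silently skips them. If you want the output to note which variables could not be searched, variable resources carry an `IsSensitive` flag you can partition on first (a minimal, hypothetical sketch over sample API data; the helper name is illustrative, not part of any script above):

```python
# Hypothetical helper: split a variable set's variables into searchable and
# skipped (sensitive) groups before running a value search. Sensitive values
# are returned as null/None by the API, so they can never produce a match.
def split_searchable(variables):
    searchable = [v for v in variables if not v.get('IsSensitive')]
    skipped = [v['Name'] for v in variables if v.get('IsSensitive')]
    return searchable, skipped

sample = [
    {'Name': 'DbServer', 'Value': 'sql01.internal', 'IsSensitive': False},
    {'Name': 'DbPassword', 'Value': None, 'IsSensitive': True},
]

searchable, skipped = split_searchable(sample)
print(len(searchable), skipped)  # 1 ['DbPassword']
```

Reporting `skipped` alongside the match results makes it explicit which variables the search could not inspect, rather than letting them disappear quietly.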
# Find variable set variables usage

Source: https://octopus.com/docs/octopus-rest-api/examples/variables/find-variableset-variables-usage.md

This script demonstrates how to programmatically find usages of variables from a variable set. It searches all projects for a reference to each variable and can optionally search deployment processes and runbook processes as well.

:::div{.hint}
**Limitations:** Please note the limitations of this example:

- It's not possible to use the REST API to search through sensitive variable values, as these values are returned as `null`.
- Variables that are referenced inside of any packages included as part of a deployment or runbook are not searched.
:::

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to search
- Name of the variable set to use
- Boolean value to toggle searching a project's deployment process
- Boolean value to toggle searching a project's runbook processes
- Optional path to export the results to a CSV file

## Script
PowerShell (REST API) ```powershell $ErrorActionPreference = "Stop"; # Define working variables $octopusURL = "https://your-octopus-url" $octopusAPIKey = "API-YOUR-KEY" $header = @{ "X-Octopus-ApiKey" = $octopusAPIKey } # Specify the Space to search in $spaceName = "Default" # Specify the name of the VariableSet to use to find variables usage of $variableSetVariableUsagesToFind = "My-Variable-Set" # Search through Project's Deployment Processes? $searchDeploymentProcesses = $True # Search through Project's Runbook Processes? $searchRunbooksProcesses = $True # Optional: set a path to export to csv $csvExportPath = "" $variableTracking = @() $octopusURL = $octopusURL.TrimEnd('/') # Get space $space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName} # Get first matching variable set record $libraryVariableSet = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/libraryvariablesets/all" -Headers $header) | Where-Object {$_.Name -eq $variableSetVariableUsagesToFind} | Select-Object -First 1 # Get variables for variable set $variableSet = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/$($libraryVariableSet.VariableSetId)" -Headers $header) $variables = $variableSet.Variables Write-Host "Looking for usages of variables from variable set '$variableSetVariableUsagesToFind' in space: '$spaceName'" # Get all projects $projects = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/all" -Headers $header # Loop through projects foreach ($project in $projects) { Write-Host "Checking project '$($project.Name)'" # Get project variables $projectVariableSet = Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/$($project.VariableSetId)" -Headers $header # Check to see if there are any project variable values that reference any of the variable set variables. 
foreach($variable in $variables) { $matchingValueVariables = $projectVariableSet.Variables | Where-Object {$_.Value -like "*$($variable.Name)*"} if($null -ne $matchingValueVariables) { foreach($match in $matchingValueVariables) { $result = [pscustomobject]@{ Project = $project.Name MatchType = "Referenced Project Variable" VariableSetVariable = $variable.Name Context = $match.Name AdditionalContext = $match.Value Property = $null Link = "$octopusURL$($project.Links.Web)/variables" } # Add and de-dupe later $variableTracking += $result } } } # Search Deployment process if configured if($searchDeploymentProcesses -eq $True) { # Get project deployment process $deploymentProcess = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/deploymentprocesses/$($project.DeploymentProcessId)" -Headers $header) # Loop through steps foreach($step in $deploymentProcess.Steps) { $props = $step | Get-Member | Where-Object {$_.MemberType -eq "NoteProperty"} foreach($prop in $props) { $propName = $prop.Name $json = $step.$propName | ConvertTo-Json -Compress # Check to see if any of the variable set variables are referenced in this step's properties foreach($variable in $variables) { if($null -ne $json -and ($json -like "*$($variable.Name)*")) { $result = [pscustomobject]@{ Project = $project.Name MatchType= "Step" VariableSetVariable = $variable.Name Context = $step.Name AdditionalContext = $null Property = $propName Link = "$octopusURL$($project.Links.Web)/deployments/process/steps?actionId=$($step.Actions[0].Id)" } # Add and de-dupe later $variableTracking += $result } } } } } # Search Runbook processes if configured if($searchRunbooksProcesses -eq $True) { # Get project runbooks $runbooks = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/projects/$($project.Id)/runbooks?skip=0&take=5000" -Headers $header) # Loop through each runbook foreach($runbook in $runbooks.Items) { # Get runbook process $runbookProcess = (Invoke-RestMethod -Method Get -Uri 
"$octopusURL$($runbook.Links.RunbookProcesses)" -Headers $header) # Loop through steps foreach($step in $runbookProcess.Steps) { $props = $step | Get-Member | Where-Object {$_.MemberType -eq "NoteProperty"} foreach($prop in $props) { $propName = $prop.Name $json = $step.$propName | ConvertTo-Json -Compress # Check to see if any of the variable set variables are referenced in this runbook step's properties foreach($variable in $variables) { if($null -ne $json -and ($json -like "*$($variable.Name)*")) { $result = [pscustomobject]@{ Project = $project.Name MatchType= "Runbook Step" VariableSetVariable = $variable.Name Context = $runbook.Name AdditionalContext = $step.Name Property = $propName Link = "$octopusURL$($project.Links.Web)/operations/runbooks/$($runbook.Id)/process/$($runbook.RunbookProcessId)/steps?actionId=$($step.Actions[0].Id)" } # Add and de-dupe later $variableTracking += $result } } } } } } } # De-dupe $variableTracking = @($variableTracking | Sort-Object -Property * -Unique) if($variableTracking.Count -gt 0) { Write-Host "" Write-Host "Found $($variableTracking.Count) results:" if (![string]::IsNullOrWhiteSpace($csvExportPath)) { Write-Host "Exporting results to CSV file: $csvExportPath" $variableTracking | Export-Csv -Path $csvExportPath -NoTypeInformation } } ```
PowerShell (Octopus.Client)

```powershell
$ErrorActionPreference = "Stop";

# Load assembly
Add-Type -Path 'path:\to\Octopus.Client.dll'

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "Default"
$csvExportPath = "path:\to\variable.csv"

# Specify the name of the VariableSet to find variable usages of
$variableSetVariableUsagesToFind = "My-Variable-Set"

# Search through Project's Deployment Processes?
$searchDeploymentProcesses = $True

# Search through Project's Runbook Processes?
$searchRunbooksProcesses = $True

$variableTracking = @()

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
$client = New-Object Octopus.Client.OctopusClient($endpoint)

# Get space
$space = $repository.Spaces.FindByName($spaceName)
$repositoryForSpace = $client.ForSpace($space)

# Get first matching variable set record
$libraryVariableSet = $repositoryForSpace.LibraryVariableSets.FindByName($variableSetVariableUsagesToFind)

# Get variables for variable set
$variableSet = $repositoryForSpace.VariableSets.Get($libraryVariableSet.VariableSetId)
$variables = $variableSet.Variables

Write-Host "Looking for usages of variables from variable set '$variableSetVariableUsagesToFind' in space: '$spaceName'"

# Get all projects
$projects = $repositoryForSpace.Projects.GetAll()

# Loop through projects
foreach ($project in $projects) {
    Write-Host "Checking project '$($project.Name)'"

    # Get project variables
    $projectVariableSet = $repositoryForSpace.VariableSets.Get($project.VariableSetId)

    # Check to see if there are any project variable values that reference any of the variable set variables.
    foreach ($variable in $variables) {
        $matchingValueVariables = $projectVariableSet.Variables | Where-Object {$_.Value -like "*$($variable.Name)*"}
        if ($null -ne $matchingValueVariables) {
            foreach ($match in $matchingValueVariables) {
                $result = [pscustomobject]@{
                    Project = $project.Name
                    MatchType = "Referenced Project Variable"
                    VariableSetVariable = $variable.Name
                    Context = $match.Name
                    AdditionalContext = $match.Value
                    Property = $null
                    Link = "$octopusURL$($project.Links.Web)/variables"
                }
                # Add and de-dupe later
                $variableTracking += $result
            }
        }
    }

    # Search Deployment process if configured
    if ($searchDeploymentProcesses -eq $True -and $project.IsVersionControlled -ne $true) {
        # Get project deployment process
        $deploymentProcess = $repositoryForSpace.DeploymentProcesses.Get($project.DeploymentProcessId)

        # Loop through steps
        foreach ($step in $deploymentProcess.Steps) {
            foreach ($action in $step.Actions) {
                foreach ($property in $action.Properties.Keys) {
                    # Check to see if any of the variable set variables are referenced in this step's properties
                    foreach ($variable in $variables) {
                        if ($action.Properties[$property].Value -like "*$($variable.Name)*") {
                            $result = [pscustomobject]@{
                                Project = $project.Name
                                MatchType = "Step"
                                VariableSetVariable = $variable.Name
                                Context = $step.Name
                                AdditionalContext = $null
                                Property = $property
                                Link = "$octopusURL$($project.Links.Web)/deployments/process/steps?actionId=$($step.Actions[0].Id)"
                            }
                            # Add and de-dupe later
                            $variableTracking += $result
                        }
                    }
                }
            }
        }
    }

    # Search Runbook processes if configured
    if ($searchRunbooksProcesses -eq $True) {
        # Get project runbooks
        $runbooks = $repositoryForSpace.Projects.GetAllRunbooks($project)

        # Loop through each runbook
        foreach ($runbook in $runbooks) {
            # Get runbook process
            $runbookProcess = $repositoryForSpace.RunbookProcesses.Get($runbook.RunbookProcessId)

            # Loop through steps
            foreach ($step in $runbookProcess.Steps) {
                foreach ($action in $step.Actions) {
                    foreach ($property in $action.Properties.Keys) {
                        # Check to see if any of the variable set variables are referenced in this runbook step's properties
                        foreach ($variable in $variables) {
                            if ($action.Properties[$property].Value -like "*$($variable.Name)*") {
                                $result = [pscustomobject]@{
                                    Project = $project.Name
                                    MatchType = "Runbook Step"
                                    VariableSetVariable = $variable.Name
                                    Context = $runbook.Name
                                    AdditionalContext = $step.Name
                                    Property = $property
                                    Link = "$octopusURL$($project.Links.Web)/operations/runbooks/$($runbook.Id)/process/$($runbook.RunbookProcessId)/steps?actionId=$($step.Actions[0].Id)"
                                }
                                # Add and de-dupe later
                                $variableTracking += $result
                            }
                        }
                    }
                }
            }
        }
    }
}

# De-dupe
$variableTracking = @($variableTracking | Sort-Object -Property * -Unique)

if ($variableTracking.Count -gt 0) {
    Write-Host ""
    Write-Host "Found $($variableTracking.Count) results:"
    if (![string]::IsNullOrWhiteSpace($csvExportPath)) {
        Write-Host "Exporting results to CSV file: $csvExportPath"
        $variableTracking | Export-Csv -Path $csvExportPath -NoTypeInformation
    }
}
```
C#

```csharp
// If using .NET Core, be sure to add the NuGet package System.Security.Permissions
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

class VariableResult
{
    public string Project { get; set; }
    public string MatchType { get; set; }
    public string Context { get; set; }
    public string Property { get; set; }
    public string AdditionalContext { get; set; }
    public string Link { get; set; }
    public string VariableSetVariable { get; set; }
}

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";
var spaceName = "Default";
string variableSetVariableUsagesToFind = "My-Variable-Set";
bool searchDeploymentProcess = true;
bool searchRunbookProcess = true;
string csvExportPath = "path:\\to\\variable.csv";

var variableTracking = new System.Collections.Generic.List<VariableResult>();

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

// Get space repository
var space = repository.Spaces.FindByName(spaceName);
var repositoryForSpace = client.ForSpace(space);

// Get variable set
var librarySet = repositoryForSpace.LibraryVariableSets.FindByName(variableSetVariableUsagesToFind);

// Get variables
var variableSet = repositoryForSpace.VariableSets.Get(librarySet.VariableSetId);
var variables = variableSet.Variables;

Console.WriteLine(string.Format("Looking for usages of variables from variable set {0} in space {1}", variableSetVariableUsagesToFind, space.Name));

// Get all projects
var projects = repositoryForSpace.Projects.GetAll();

// Loop through projects
foreach (var project in projects)
{
    Console.WriteLine(string.Format("Checking {0}", project.Name));

    // Get the project variable set
    var projectVariableSet = repositoryForSpace.VariableSets.Get(project.VariableSetId);

    // Loop through variables
    foreach (var variable in variables)
    {
        var matchingValueVariables = projectVariableSet.Variables.Where(v => v.Value != null && v.Value.ToLower().Contains(variable.Name.ToLower()));
        if (matchingValueVariables != null)
        {
            foreach (var match in matchingValueVariables)
            {
                VariableResult result = new VariableResult();
                result.Project = project.Name;
                result.MatchType = "Referenced Project Variable";
                result.VariableSetVariable = variable.Name;
                result.Context = match.Name;
                result.Property = null;
                result.AdditionalContext = match.Value;
                result.Link = project.Links["Variables"];

                if (!variableTracking.Any(r => r.Project == result.Project && r.MatchType == result.MatchType && r.VariableSetVariable == result.VariableSetVariable && r.Context == result.Context && r.Property == result.Property && r.AdditionalContext == result.AdditionalContext && r.Link == result.Link))
                {
                    variableTracking.Add(result);
                }
            }
        }
    }

    if (searchDeploymentProcess)
    {
        if (!project.IsVersionControlled)
        {
            // Get deployment process
            var deploymentProcess = repositoryForSpace.DeploymentProcesses.Get(project.DeploymentProcessId);

            // Loop through steps
            foreach (var step in deploymentProcess.Steps)
            {
                // Loop through actions
                foreach (var action in step.Actions)
                {
                    // Loop through properties
                    foreach (var property in action.Properties.Keys)
                    {
                        // Loop through variables
                        foreach (var variable in variables)
                        {
                            if (action.Properties[property].Value != null && action.Properties[property].Value.ToLower().Contains(variable.Name.ToLower()))
                            {
                                VariableResult result = new VariableResult();
                                result.Project = project.Name;
                                result.MatchType = "Step";
                                result.VariableSetVariable = variable.Name;
                                result.Context = step.Name;
                                result.Property = property;
                                result.AdditionalContext = null;
                                result.Link = string.Format("{0}{1}/deployments/process/steps?actionid={2}", octopusURL, project.Links["Web"], action.Id);

                                if (!variableTracking.Any(r => r.Project == result.Project && r.MatchType == result.MatchType && r.VariableSetVariable == result.VariableSetVariable && r.Context == result.Context && r.Property == result.Property && r.AdditionalContext == result.AdditionalContext && r.Link == result.Link))
                                {
                                    variableTracking.Add(result);
                                }
                            }
                        }
                    }
                }
            }
        }
        else
        {
            Console.WriteLine(string.Format("{0} is version controlled, skipping searching the deployment process.", project.Name));
        }
    }

    if (searchRunbookProcess)
    {
        // Get project runbooks
        var runbooks = repositoryForSpace.Projects.GetAllRunbooks(project);

        // Loop through runbooks
        foreach (var runbook in runbooks)
        {
            // Get runbook process
            var runbookProcess = repositoryForSpace.RunbookProcesses.Get(runbook.RunbookProcessId);

            // Loop through steps
            foreach (var step in runbookProcess.Steps)
            {
                foreach (var action in step.Actions)
                {
                    foreach (var property in action.Properties.Keys)
                    {
                        foreach (var variable in variables)
                        {
                            if (action.Properties[property].Value != null && action.Properties[property].Value.ToLower().Contains(variable.Name.ToLower()))
                            {
                                VariableResult result = new VariableResult();
                                result.Project = project.Name;
                                result.MatchType = "Runbook Step";
                                result.VariableSetVariable = variable.Name;
                                result.Context = runbook.Name;
                                result.Property = property;
                                result.AdditionalContext = step.Name;
                                result.Link = string.Format("{0}{1}/operations/runbooks/{2}/process/{3}/steps?actionId={4}", octopusURL, project.Links["Web"], runbook.Id, runbookProcess.Id, action.Id);

                                if (!variableTracking.Any(r => r.Project == result.Project && r.MatchType == result.MatchType && r.VariableSetVariable == result.VariableSetVariable && r.Context == result.Context && r.Property == result.Property && r.AdditionalContext == result.AdditionalContext && r.Link == result.Link))
                                {
                                    variableTracking.Add(result);
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

Console.WriteLine(string.Format("Found {0} results", variableTracking.Count.ToString()));

if (variableTracking.Count > 0)
{
    foreach (var result in variableTracking)
    {
        var header = new System.Collections.Generic.List<string>();
        var row = new System.Collections.Generic.List<string>();
        var isFirstRow = variableTracking.IndexOf(result) == 0;
        var properties = result.GetType().GetProperties();
        foreach (var property in properties)
        {
            Console.WriteLine(string.Format("{0}: {1}", property.Name, property.GetValue(result)));
            if (isFirstRow)
            {
                header.Add(property.Name);
            }
            row.Add((property.GetValue(result) == null ? string.Empty : property.GetValue(result).ToString()));
        }

        if (!string.IsNullOrWhiteSpace(csvExportPath))
        {
            using (System.IO.StreamWriter csvFile = new System.IO.StreamWriter(csvExportPath, true))
            {
                if (isFirstRow)
                {
                    // Write header
                    csvFile.WriteLine(string.Join(",", header.ToArray()));
                }
                // Write result
                csvFile.WriteLine(string.Join(",", row.ToArray()));
            }
        }
    }
}
```
Python3 ```python import json import requests import csv octopus_server_uri = 'https://your-octopus-url/api' octopus_api_key = 'API-YOUR-KEY' headers = {'X-Octopus-ApiKey': octopus_api_key} def get_octopus_resource(uri): response = requests.get(uri, headers=headers) response.raise_for_status() return json.loads(response.content.decode('utf-8')) def get_by_name(uri, name): resources = get_octopus_resource(uri) return next((x for x in resources if x['Name'] == name), None) # Specify the Space to search in space_name = 'Default' # Specify the name of the Library VariableSet to use to find variables usage of library_variable_set_name = 'My-Variable-Set' # Search through Project's Deployment Processes? search_deployment_processes = True # Search through Project's Runbook Processes? search_runbook_processes = True # Optional: set a path to export to csv csv_export_path = '' variable_tracker = [] octopus_server_uri = octopus_server_uri.rstrip('/') octopus_server_base_uri = octopus_server_uri.rstrip('api') space = get_by_name('{0}/spaces/all'.format(octopus_server_uri), space_name) library_variable_set_resource = get_by_name('{0}/{1}/libraryvariablesets/all'.format(octopus_server_uri, space['Id']), library_variable_set_name) library_variable_set = get_octopus_resource('{0}/{1}/variables/{2}'.format(octopus_server_uri, space['Id'], library_variable_set_resource['VariableSetId'])) library_variable_set_variables = library_variable_set['Variables'] print('Looking for usages of variables from variable set \'{0}\' in space \'{1}\''.format(library_variable_set_name, space_name)) projects = get_octopus_resource('{0}/{1}/projects/all'.format(octopus_server_uri, space['Id'])) for project in projects: project_name = project['Name'] project_web_uri = project['Links']['Web'].lstrip('/') print('Checking project \'{0}\''.format(project_name)) project_variable_set = get_octopus_resource('{0}/{1}/variables/{2}'.format(octopus_server_uri, space['Id'], project['VariableSetId'])) # Check to 
see if there are any project variable values that reference any of the library set variables. for library_variable_set_variable in library_variable_set_variables: matching_value_variables = [project_variable for project_variable in project_variable_set['Variables'] if project_variable['Value'] is not None and library_variable_set_variable['Name'] in project_variable['Value']] if matching_value_variables is not None: for matching_variable in matching_value_variables: tracked_variable = { 'Project': project_name, 'MatchType': 'Referenced Project Variable', 'VariableSetVariable': library_variable_set_variable['Name'], 'Context': matching_variable['Name'], 'AdditionalContext': matching_variable['Value'], 'Property': None, 'Link': '{0}{1}/variables'.format(octopus_server_base_uri, project_web_uri) } if tracked_variable not in variable_tracker: variable_tracker.append(tracked_variable) # Search Deployment process if enabled if search_deployment_processes == True: deployment_process = get_octopus_resource('{0}/{1}/deploymentprocesses/{2}'.format(octopus_server_uri, space['Id'], project['DeploymentProcessId'])) for step in deployment_process['Steps']: for step_key in step.keys(): step_property_value = str(step[step_key]) # Check to see if any of the variable set variables are referenced in this step's properties for library_variable_set_variable in library_variable_set_variables: if step_property_value is not None and library_variable_set_variable['Name'] in step_property_value: tracked_variable = { 'Project': project_name, 'MatchType': 'Step', 'VariableSetVariable': library_variable_set_variable['Name'], 'Context': step['Name'], 'Property': step_key, 'AdditionalContext': None, 'Link': '{0}{1}/deployments/process/steps?actionId={2}'.format(octopus_server_base_uri, project_web_uri, step['Actions'][0]['Id']) } if tracked_variable not in variable_tracker: variable_tracker.append(tracked_variable) # Search Runbook processes if configured if search_runbook_processes == True: 
runbooks_resource = get_octopus_resource('{0}/{1}/projects/{2}/runbooks?skip=0&take=5000'.format(octopus_server_uri, space['Id'], project['Id'])) runbooks = runbooks_resource['Items'] for runbook in runbooks: runbook_processes_link = runbook['Links']['RunbookProcesses'] runbook_process = get_octopus_resource('{0}/{1}'.format(octopus_server_base_uri, runbook_processes_link)) for step in runbook_process['Steps']: for step_key in step.keys(): step_property_value = str(step[step_key]) # Check to see if any of the variable set variables are referenced in this step's properties for library_variable_set_variable in library_variable_set_variables: if step_property_value is not None and library_variable_set_variable['Name'] in step_property_value: tracked_variable = { 'Project': project_name, 'MatchType': 'Runbook Step', 'VariableSetVariable': library_variable_set_variable['Name'], 'Context': runbook['Name'], 'Property': step_key, 'AdditionalContext': step['Name'], 'Link': '{0}{1}/operations/runbooks/{2}/process/{3}/steps?actionId={4}'.format(octopus_server_base_uri, project_web_uri, runbook['Id'], runbook['RunbookProcessId'], step['Actions'][0]['Id']) } if tracked_variable not in variable_tracker: variable_tracker.append(tracked_variable) results_count = len(variable_tracker) if results_count > 0: print('') print('Found {0} results:'.format(results_count)) for tracked_variable in variable_tracker: print('Project : {0}'.format(tracked_variable['Project'])) print('MatchType : {0}'.format(tracked_variable['MatchType'])) print('VariableSetVariable : {0}'.format(tracked_variable['VariableSetVariable'])) print('Context : {0}'.format(tracked_variable['Context'])) print('AdditionalContext : {0}'.format(tracked_variable['AdditionalContext'])) print('Property : {0}'.format(tracked_variable['Property'])) print('Link : {0}'.format(tracked_variable['Link'])) print('') if csv_export_path: with open(csv_export_path, mode='w') as csv_file: fieldnames = ['Project', 'MatchType', 
'VariableSetVariable', 'Context', 'AdditionalContext', 'Property', 'Link'] writer = csv.DictWriter(csv_file, fieldnames=fieldnames) writer.writeheader() for tracked_variable in variable_tracker: writer.writerow(tracked_variable) ```
Go ```go package main import ( "bufio" "fmt" "log" "net/url" "os" "reflect" "strconv" "strings" "github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy" ) type VariableResult struct { Project string MatchType string Context string Property string AdditionalContext string Link string VariableSetVariable string } func main() { apiURL, err := url.Parse("https://your-octopus-url") if err != nil { log.Println(err) } APIKey := "API-YOUR-KEY" spaceName := "Default" variableSetVariableUsagesToFind := "My-Variable-Set" searchDeploymentProcess := true searchRunbookProcess := true csvExportPath := "path:\\to\\variable.csv" // Create client object client := octopusAuth(apiURL, APIKey, "") // Get space space := GetSpace(apiURL, APIKey, spaceName) client = octopusAuth(apiURL, APIKey, space.ID) variableTracking := []VariableResult{} // Get variable set librarySet := GetLibrarySet(apiURL, APIKey, space, variableSetVariableUsagesToFind, 0) // Get the variables variableSet, err := client.Variables.GetAll(librarySet.ID) if err != nil { log.Println(err) } variables := variableSet.Variables fmt.Printf("Looking for usages of variables from variable set %[1]s in space %[2]s \n", variableSetVariableUsagesToFind, space.Name) // Get projects projects, err := client.Projects.GetAll() if err != nil { log.Println(err) } // Loop through projects for _, project := range projects { fmt.Printf("Checking %[1]s \n", project.Name) // Get variables projectVariables, err := client.Variables.GetAll(project.ID) if err != nil { log.Println(err) } // Loop through variables for _, variable := range variables { for _, projectVariable := range projectVariables.Variables { valueMatch := strings.Contains(projectVariable.Value, variable.Name) if valueMatch { result := VariableResult{} result.Project = project.Name result.MatchType = "Referenced Project Variable" result.VariableSetVariable = variable.Name result.Context = projectVariable.Name result.AdditionalContext = projectVariable.Value result.Property = "" 
result.Link = apiURL.String() + project.Links["Web"] + "/variables" if !arrayContains(variableTracking, result) { variableTracking = append(variableTracking, result) } } } } if searchDeploymentProcess { if !project.IsVersionControlled { // Get deployment process deploymentProcess, err := client.DeploymentProcesses.GetByID(project.DeploymentProcessID) if err != nil { log.Println(err) } for _, step := range deploymentProcess.Steps { for _, action := range step.Actions { for property := range action.Properties { for _, variable := range variables { if strings.Contains(action.Properties[property].Value, variable.Name) { result := VariableResult{} result.Project = project.Name result.MatchType = "Step" result.VariableSetVariable = variable.Name result.Context = step.Name result.AdditionalContext = "" result.Property = property result.Link = apiURL.String() + project.Links["Web"] + "/deployments/process/steps?actionId=" + action.ID if !arrayContains(variableTracking, result) { variableTracking = append(variableTracking, result) } } } } } } } else { fmt.Printf("%[1]s is version controlled, skipping searching deployment process", project.Name) } } if searchRunbookProcess { // Get project runbooks runbooks := GetRunbooks(client, project) // Loop through runbooks for _, runbook := range runbooks { // Get runbook process runbookProcess, err := client.RunbookProcesses.GetByID(runbook.RunbookProcessID) if err != nil { log.Println(err) } for _, step := range runbookProcess.Steps { for _, action := range step.Actions { for property := range action.Properties { for _, variable := range variables { if strings.Contains(action.Properties[property].Value, variable.Name) { result := VariableResult{} result.Project = project.Name result.MatchType = "Runbook Step" result.VariableSetVariable = variable.Name result.Context = runbook.Name result.AdditionalContext = step.Name result.Property = property result.Link = apiURL.String() + project.Links["Web"] + "/operations/runbooks/" + 
runbook.ID + "/process/" + runbook.RunbookProcessID + "/steps?actionId=" + action.ID if !arrayContains(variableTracking, result) { variableTracking = append(variableTracking, result) } } } } } } } } if len(variableTracking) > 0 { fmt.Printf("Found %[1]s results \n", strconv.Itoa(len(variableTracking))) for i := 0; i < len(variableTracking); i++ { row := []string{} header := []string{} isFirstRow := false if i == 0 { isFirstRow = true } e := reflect.ValueOf(&variableTracking[i]).Elem() for j := 0; j < e.NumField(); j++ { if isFirstRow { header = append(header, e.Type().Field(j).Name) } row = append(row, e.Field(j).Interface().(string)) } if csvExportPath != "" { file, err := os.OpenFile(csvExportPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600) if err != nil { log.Println(err) } dataWriter := bufio.NewWriter(file) if isFirstRow { dataWriter.WriteString(strings.Join(header, ",") + "\n") } dataWriter.WriteString(strings.Join(row, ",") + "\n") dataWriter.Flush() file.Close() } } } } func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client { client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space) if err != nil { log.Println(err) } return client } func arrayContains(array []VariableResult, result VariableResult) bool { for _, v := range array { if v == result { return true } } return false } func GetRunbooks(client *octopusdeploy.Client, project *octopusdeploy.Project) []*octopusdeploy.Runbook { // Get runbook runbooks, err := client.Runbooks.GetAll() projectRunbooks := []*octopusdeploy.Runbook{} if err != nil { log.Println(err) } for i := 0; i < len(runbooks); i++ { if runbooks[i].ProjectID == project.ID { projectRunbooks = append(projectRunbooks, runbooks[i]) } } return projectRunbooks } func GetLibrarySet(octopusURL *url.URL, APIKey string, space *octopusdeploy.Space, librarySetName string, skip int) *octopusdeploy.LibraryVariableSet { // Create client client := octopusAuth(octopusURL, APIKey, space.ID) librarySetsQuery := octopusdeploy.LibraryVariablesQuery { PartialName: librarySetName, } librarySets, err := client.LibraryVariableSets.Get(librarySetsQuery) if err != nil { log.Println(err) } if len(librarySets.Items) == librarySets.ItemsPerPage { // call again librarySet := GetLibrarySet(octopusURL, APIKey, space, librarySetName, (skip + len(librarySets.Items))) if librarySet != nil { return librarySet } } else { // Loop through returned items for _, librarySet := range librarySets.Items { if librarySet.Name == librarySetName { return librarySet } } } return nil } func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space { client := octopusAuth(octopusURL, APIKey, "") spaceQuery := octopusdeploy.SpacesQuery{ Name: spaceName, } // Get specific space object spaces, err := client.Spaces.Get(spaceQuery) if err != nil { log.Println(err) } for _, space := range spaces.Items { if space.Name == spaceName { return space } } return nil } ```
# Update variable set variable value

Source: https://octopus.com/docs/octopus-rest-api/examples/variables/update-variable-set-variable-value.md

This script demonstrates how to programmatically update the value of a matching variable stored in a variable set. Note: this script only changes the variable's value; variable scopes are left unchanged.

## Usage

Provide values for the following:

- Octopus URL
- Octopus API Key
- Name of the space to search
- Name of the variable set to use
- Variable name to search for
- New variable value to replace the existing value

## Script
PowerShell (REST API)

```powershell
$ErrorActionPreference = "Stop";

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }

# Specify the Space to search in
$spaceName = ""

# Variable Set
$libraryVariableSetName = ""

# Variable name to search for
$variableName = ""

# New variable value to set
$variableValue = ""

# Get space
$space = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/spaces/all" -Headers $header) | Where-Object {$_.Name -eq $spaceName}

Write-Host "Looking for variable set '$libraryVariableSetName'"
$libraryVariableSets = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/libraryvariablesets?contentType=Variables" -Headers $header)
$libraryVariableSet = $libraryVariableSets.Items | Where-Object { $_.Name -eq $libraryVariableSetName }

if ($null -eq $libraryVariableSet) {
    Write-Warning "Variable set not found with name '$libraryVariableSetName'."
    exit
}

# Get the variables for the variable set
$libraryVariableSetVariables = (Invoke-RestMethod -Method Get -Uri "$octopusURL/api/$($space.Id)/variables/$($libraryVariableSet.VariableSetId)" -Headers $header)

# Update any matching variables
for ($i = 0; $i -lt $libraryVariableSetVariables.Variables.Length; $i++) {
    $existingVariable = $libraryVariableSetVariables.Variables[$i]
    if ($existingVariable.Name -eq $variableName) {
        Write-Host "Found existing variable, updating its value"
        $existingVariable.Value = $variableValue
    }
}

# Save the updated variable set
$updatedLibraryVariableSet = Invoke-RestMethod -Method Put -Uri "$octopusURL/api/$($space.Id)/variables/$($libraryVariableSetVariables.Id)" -Headers $header -Body ($libraryVariableSetVariables | ConvertTo-Json -Depth 10)
```
PowerShell (Octopus.Client)

```powershell
$ErrorActionPreference = "Stop";

# Load assembly
Add-Type -Path 'path:\to\Octopus.Client.dll'

# Define working variables
$octopusURL = "https://your-octopus-url"
$octopusAPIKey = "API-YOUR-KEY"
$spaceName = "Default"
$libraryVariableSetName = "MyLibraryVariableSet"
$variableName = "MyVariable"
$variableValue = "MyValue"

$endpoint = New-Object Octopus.Client.OctopusServerEndpoint($octopusURL, $octopusAPIKey)
$repository = New-Object Octopus.Client.OctopusRepository($endpoint)
$client = New-Object Octopus.Client.OctopusClient($endpoint)

# Get repository specific to space
$space = $repository.Spaces.FindByName($spaceName)
$repositoryForSpace = $client.ForSpace($space)

Write-Host "Looking for variable set '$libraryVariableSetName'"
$librarySet = $repositoryForSpace.LibraryVariableSets.FindByName($libraryVariableSetName)

# Check to see if something was returned
if ($null -eq $librarySet) {
    Write-Warning "Variable set not found with name '$libraryVariableSetName'"
    exit
}

# Get the variable set
$variableSet = $repositoryForSpace.VariableSets.Get($librarySet.VariableSetId)

# Update the variable
($variableSet.Variables | Where-Object {$_.Name -eq $variableName}).Value = $variableValue

# Save the changes
$repositoryForSpace.VariableSets.Modify($variableSet)
```
C#

```csharp
#r "nuget: Octopus.Client"

using Octopus.Client;
using Octopus.Client.Model;
using System.Linq;

var octopusURL = "https://your-octopus-url";
var octopusAPIKey = "API-YOUR-KEY";

// Create repository object
var endpoint = new OctopusServerEndpoint(octopusURL, octopusAPIKey);
var repository = new OctopusRepository(endpoint);
var client = new OctopusClient(endpoint);

var spaceName = "Default";
string libraryVariableSetName = "MyLibraryVariableSet";
string variableName = "MyVariable";
string variableValue = "MyValue";

var space = repository.Spaces.FindByName(spaceName);
var repositoryForSpace = client.ForSpace(space);

Console.WriteLine(string.Format("Looking for variable set '{0}'", libraryVariableSetName));
var librarySet = repositoryForSpace.LibraryVariableSets.FindByName(libraryVariableSetName);

if (null == librarySet)
{
    throw new Exception(string.Format("Variable Set not found with name '{0}'", libraryVariableSetName));
}

// Get the variable set
var variableSet = repositoryForSpace.VariableSets.Get(librarySet.VariableSetId);

// Update the variable
variableSet.Variables.FirstOrDefault(v => v.Name == variableName).Value = variableValue;

repositoryForSpace.VariableSets.Modify(variableSet);
```
Python3

```python
import json
import requests

def get_octopus_resource(uri, headers, skip_count = 0):
    items = []
    skip_querystring = ""

    if '?' in uri:
        skip_querystring = '&skip='
    else:
        skip_querystring = '?skip='

    response = requests.get((uri + skip_querystring + str(skip_count)), headers=headers)
    response.raise_for_status()

    # Get results of API call
    results = json.loads(response.content.decode('utf-8'))

    # Store results
    if 'Items' in results.keys():
        items += results['Items']

        # Check to see if there are more results
        if (len(results['Items']) > 0) and (len(results['Items']) == results['ItemsPerPage']):
            skip_count += results['ItemsPerPage']
            items += get_octopus_resource(uri, headers, skip_count)

    else:
        return results

    # return results
    return items

octopus_server_uri = 'https://your-octopus-url'
octopus_api_key = 'API-YOUR-KEY'
headers = {'X-Octopus-ApiKey': octopus_api_key}

space_name = "Default"
library_variable_set_name = "MyLibraryVariableSet"
variable_name = "MyVariable"
variable_value = "MyValue"

# Get space
uri = '{0}/api/spaces'.format(octopus_server_uri)
spaces = get_octopus_resource(uri, headers)
space = next((x for x in spaces if x['Name'] == space_name), None)

print('Looking for variable set "{0}"'.format(library_variable_set_name))

# Get variable set
uri = '{0}/api/{1}/libraryvariablesets'.format(octopus_server_uri, space['Id'])
library_variable_sets = get_octopus_resource(uri, headers)
library_variable_set = next((l for l in library_variable_sets if l['Name'] == library_variable_set_name), None)

# Check to see if something was returned
if library_variable_set is None:
    print('Variable Set not found with name "{0}"'.format(library_variable_set_name))
    raise SystemExit

# Get the variables
uri = '{0}/api/{1}/variables/{2}'.format(octopus_server_uri, space['Id'], library_variable_set['VariableSetId'])
library_variables = get_octopus_resource(uri, headers)

# Update the variable
for variable in library_variables['Variables']:
    if variable['Name'] == variable_name:
        variable['Value'] = variable_value
        break

response = requests.put(uri, headers=headers, json=library_variables)
response.raise_for_status()
```
Go

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/OctopusDeploy/go-octopusdeploy/octopusdeploy"
)

func main() {
	apiURL, err := url.Parse("https://your-octopus-url")
	if err != nil {
		log.Println(err)
	}
	APIKey := "API-YOUR-KEY"

	spaceName := "Default"
	libraryVariableSetName := "MyLibraryVariableSet"
	variableName := "MyVariable"
	variableValue := "MyValue"

	// Get the space object
	space := GetSpace(apiURL, APIKey, spaceName)

	// Create client for space
	client := octopusAuth(apiURL, APIKey, space.ID)

	fmt.Printf("Looking for variable set '%[1]s'\n", libraryVariableSetName)

	// Get the library variable set
	librarySet := GetLibrarySet(client, space, libraryVariableSetName, 0)

	// Get the variable set
	variableSet, err := client.Variables.GetAll(librarySet.ID)
	if err != nil {
		log.Println(err)
	}

	// Loop through variables
	for _, variable := range variableSet.Variables {
		if variable.Name == variableName {
			variable.Value = variableValue
			break
		}
	}

	// Update the set
	client.Variables.Update(librarySet.ID, variableSet)
}

func octopusAuth(octopusURL *url.URL, APIKey, space string) *octopusdeploy.Client {
	client, err := octopusdeploy.NewClient(nil, octopusURL, APIKey, space)
	if err != nil {
		log.Println(err)
	}
	return client
}

func GetSpace(octopusURL *url.URL, APIKey string, spaceName string) *octopusdeploy.Space {
	client := octopusAuth(octopusURL, APIKey, "")

	spaceQuery := octopusdeploy.SpacesQuery{
		Name: spaceName,
	}

	// Get specific space object
	spaces, err := client.Spaces.Get(spaceQuery)
	if err != nil {
		log.Println(err)
	}

	for _, space := range spaces.Items {
		if space.Name == spaceName {
			return space
		}
	}

	return nil
}

func GetLibrarySet(client *octopusdeploy.Client, space *octopusdeploy.Space, librarySetName string, skip int) *octopusdeploy.LibraryVariableSet {
	// Create variable sets query
	librarySetsQuery := octopusdeploy.LibraryVariablesQuery{
		PartialName: librarySetName,
	}

	// Get variable set
	librarySets, err := client.LibraryVariableSets.Get(librarySetsQuery)
	if err != nil {
		log.Println(err)
	}

	// Loop through results
	if len(librarySets.Items) == librarySets.ItemsPerPage {
		// Call again
		librarySet := GetLibrarySet(client, space, librarySetName, (skip + len(librarySets.Items)))
		if librarySet != nil {
			return librarySet
		}
	} else {
		for _, librarySet := range librarySets.Items {
			if librarySet.Name == librarySetName {
				return librarySet
			}
		}
	}

	return nil
}
```
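Whatever the language, each script above performs the same in-memory operation between the GET and the PUT: walk the `Variables` array of the variable-set document and replace the value of every entry whose `Name` matches, leaving scopes untouched. As an illustrative sketch of that shared logic (the `update_variable` helper and the trimmed-down document are hypothetical, not part of any Octopus client library):

```python
def update_variable(variable_set, name, new_value):
    """Update the value of every variable matching `name` in a
    variable-set document (the JSON body returned by
    GET /api/{space}/variables/{variableSetId}).
    Scopes are not altered; only the value changes.
    Returns True if at least one variable was updated."""
    updated = False
    for variable in variable_set["Variables"]:
        if variable["Name"] == name:
            variable["Value"] = new_value
            updated = True
    return updated

# Example with a trimmed-down variable-set document
variable_set = {
    "Id": "variableset-LibraryVariableSets-1",
    "Variables": [
        {"Name": "MyVariable", "Value": "OldValue", "Scope": {}},
        {"Name": "Other", "Value": "Unchanged", "Scope": {}},
    ],
}
update_variable(variable_set, "MyVariable", "MyValue")
# The modified document is then PUT back to the same URI it was fetched from.
```

Because the whole document is sent back, any variable you do not touch is preserved exactly as returned by the GET.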
# Migrator export Source: https://octopus.com/docs/octopus-rest-api/octopus.migrator.exe-command-line/export.md This command exports configuration data to a directory. Usage: ``` Usage: octopus.migrator export [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --directory=VALUE The target directory for the exported data files. This directory will be created if it does not already exist. Use the --clean argument to purge an existing directory before exporting the data files. --clean [Optional] Remove all contents of target directory before exporting the data files. This cannot be undone. --password=VALUE Password used to encrypt any sensitive values. This is the password you will use when importing the data into another Octopus Server. --include-tasklogs [Optional] Use this argument to include the task log folder as part of the data export. Default is to ignore task logs. --inline-scripts=VALUE [Optional] Use this argument to choose how inline scripts in your deployment processes will be exported. Valid options for --inline-scripts are CopyToFiles, ExtractToFiles, LeaveInline. Default is CopyToFiles. Or one of the common options: --help Show detailed help for this command ``` # Migrator import Source: https://octopus.com/docs/octopus-rest-api/octopus.migrator.exe-command-line/import.md This command imports data from an **Octopus 3.0**+ export directory. The export must have been made from an Octopus Server running the same release version as the intended import server. Usage: ``` Usage: octopus.migrator import [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --directory=VALUE Directory for imported files --password=VALUE Password for any sensitive values --dry-run Do not commit changes, just print what would have happened --overwrite If a document with the same name already exists, it will be skipped by default. 
Use --overwrite to force it to be replaced. --force Imports even if there are validation errors (CAUTION: this may put the database in a bad state). --include-tasklogs Include the task log folder as part of the import process --ignore-version-check Imports even if the version of the export isn't compatible with the instance (CAUTION: this may put the database in a bad state). Or one of the common options: --help Show detailed help for this command ``` # migrate Source: https://octopus.com/docs/octopus-rest-api/octopus.migrator.exe-command-line/migrate.md Imports data from an Octopus 2.6 backup. **migrate options** ``` Usage: octopus.migrator migrate [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --file=VALUE Octopus 2.6 (.octobak) file --master-key=VALUE Master Key used to decrypt the file --dry-run Do not commit changes, just print what would have happened --maxage=VALUE Ignore historical data older than x days --nologs Do not import the raw server log entries. --onlylogs Only import the raw server log entries. Or one of the common options: --help Show detailed help for this command ``` # Partial export Source: https://octopus.com/docs/octopus-rest-api/octopus.migrator.exe-command-line/partial-export.md This command exports configuration data to a directory filtered by a single project. Usage: ``` Usage: octopus.migrator partial-export [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --directory=VALUE The target directory for the exported data files. This directory will be created if it does not already exist. Use the --clean argument to purge an existing directory before exporting the data files. --clean [Optional] Remove all contents of target directory before exporting the data files. This cannot be undone. --password=VALUE Password used to encrypt any sensitive values. 
This is the password you will use when importing the data into another Octopus Server. --include-tasklogs [Optional] Use this argument to include the task log folder as part of the data export. Default is to ignore task logs. --inline-scripts=VALUE [Optional] Use this argument to choose how inline scripts in your deployment processes will be exported. Valid options for --inline-scripts are CopyToFiles, ExtractToFiles, LeaveInline. Default is CopyToFiles. --projectGroup=VALUE The name of a project group you want to export including all its projects. Specify this argument multiple times to add multiple project groups. --project=VALUE The name of a project you want to export. Specify this argument multiple times to add multiple projects. --releaseVersion=VALUE [Optional] An expression for the releases you want to export. This can be a specific version like --releaseVersion=2.5.0, or a version range like --releaseVersion=2.5.0-3.1.0, or --releaseVersion=* to export all releases. Where possible semantic version comparison is used, and any matching releases will be exported. Leaving this argument empty is equivalent to all releases. --ignore-history [Optional] Excludes all historical documents like releases, deployments, deployment related tasks, and auto-deploy history. Use this switch if you want to export the current state of a project without its history. --ignore-deployments [Optional] Excludes deployments, deployment related tasks, and auto-deploy history. Releases are still exported. Use --ignore-history to exclude all historical documents. --ignore-tenants [Optional] Excludes tenants from partial export. --ignore-certificates [Optional] Excludes certificates from partial export. --ignore-machines [Optional] Excludes deployment targets and workers from partial export. 
Or one of the common options: --help Show detailed help for this command ``` ## Basic examples {#PartialExport-Basicexamples} This will export the project files from *AcmeWebStore* and then spider back through the relevant linked documents in the database and back up *only those that are required in some way* to reproduce that project in its entirety. ```bash Octopus.Migrator.exe partial-export --instance=MyOctopusInstanceName --project=AcmeWebStore --password=5uper5ecret --directory=C:\Temp\AcmeWebStore ``` # Version Source: https://octopus.com/docs/octopus-rest-api/octopus.migrator.exe-command-line/version.md Shows the version information for this release of the Octopus Migrator. **version options** ``` Usage: octopus.migrator version [] Where [] is any of: --format=VALUE The format of the output (text,json). Defaults to text. Or one of the common options: --help Show detailed help for this command ``` # Agent Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/agent.md Starts the Tentacle Agent in debug mode. **agent options** ``` Usage: tentacle agent [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --wait=VALUE Delay (ms) before starting --console Don't attempt to run as a service, even if the user is non-interactive Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example starts the default Tentacle in debug and console mode: ``` tentacle agent --console ``` ## Docker container example This example shows how to run the Tentacle Agent for an instance named `Tentacle` when running in a custom Docker container. This is also how the official [Tentacle container](/docs/infrastructure/deployment-targets/tentacle/octopus-tentacle-container) is launched. 
``` tentacle agent --instance Tentacle --noninteractive ``` :::div{.hint} The `--noninteractive` parameter is required when running in a container, otherwise the Tentacle host would exit immediately after starting. ::: # Check services Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/checkservices.md The checkservices command checks the Octopus Tentacle instances to see if they are running, and starts them if they're not. The watchdog command sets up a scheduled task that calls checkservices. **checkservices options** ``` Usage: tentacle checkservices [] Where [] is any of: --instances=VALUE Comma-separated list of instances to check, or * to check all instances Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example checks to see if the `default` instance is running and starts it if it's not: ``` Tentacle checkservices --instances="default" ``` # Configure Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/configure.md Sets Tentacle settings such as the port number and thumbprints. **Configure options** ``` Usage: tentacle configure [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --home, --homedir=VALUE Home directory --app, --appdir=VALUE Default directory to deploy applications to --port=VALUE TCP port on which Tentacle should listen to connections --noListen=VALUE Suppress listening on a TCP port (intended for polling Tentacles only) --listenIpAddress=VALUE IP address on which Tentacle should listen. 
Default: any --trust=VALUE The thumbprint of the Octopus Server to trust --remove-trust=VALUE The thumbprint of the Octopus Server to remove from the trusted list --reset-trust Removes all trusted Octopus Servers Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example removes all trusted Octopus Servers: ``` tentacle configure --reset-trust ``` This example configures the Tentacle to trust the thumbprint from an Octopus Server of `9202C9DCB8C14A62ED9A4C25F9F83DD04CC3CD40`: ``` tentacle configure --trust="9202C9DCB8C14A62ED9A4C25F9F83DD04CC3CD40" ``` This example changes the Tentacle home directory to `NewHome`: Windows: ``` tentacle configure --homedir="c:\NewHome" ``` Linux: ``` Tentacle configure --homedir="/NewHome" ``` # Create instance Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/create-instance.md Registers a new instance of the Tentacle service. **Create instance options** ``` Usage: tentacle create-instance [] Where [] is any of: --instance=VALUE Name of the instance to create --config=VALUE Path to configuration file to create --home=VALUE [Optional] Path to the home directory - defaults to the same directory as the config file Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example creates a new Tentacle instance named `MyNewInstance`: Windows: ``` tentacle create-instance --instance="MyNewInstance" --config="c:\MyNewInstance\MyNewInstance.config" --home="c:\MyNewInstance\Home" ``` Linux: ``` Tentacle create-instance --instance="MyNewInstance" --config="/MyNewInstance/MyNewInstance.config" --home="/MyNewInstance/Home" ``` # Delete instance Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/delete-instance.md Deletes an instance of the Tentacle service. 
**Delete instance options** ``` Usage: tentacle delete-instance [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example deletes the Tentacle instance `MyNewInstance`: ``` tentacle delete-instance --instance="MyNewInstance" ``` # Deregister from Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/deregister-from.md Deregisters this deployment target from an Octopus Server. **Deregister from options** ``` Usage: tentacle deregister-from [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --server=VALUE The Octopus Server - e.g., 'http://octopus' --apiKey=VALUE Your API key; you can get this from the Octopus web portal -u, --username, --user=VALUE If not using API keys, your username -p, --password=VALUE If not using API keys, your password -m, --multiple Deregister all machines that use the same thumbprint --space=VALUE The space which this machine will be deregistered from, - e.g. 'Finance Department' where Finance Department is the name of an existing space; the default value is the Default space, if one is designated. Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example deregisters a Tentacle from the Octopus Server: ``` tentacle deregister-from --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" ``` This example deregisters the instance `MyNewInstance` from the space `MyNewSpace`: ``` tentacle deregister-from --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" --instance="MyNewInstance" --space="MyNewSpace" ``` # Deregister Worker Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/deregister-worker.md Deregisters this Worker from an Octopus Server. 
**Deregister Worker options** ``` Usage: tentacle deregister-worker [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --server=VALUE The Octopus Server - e.g., 'http://octopus' --apiKey=VALUE Your API key; you can get this from the Octopus web portal -u, --username, --user=VALUE If not using API keys, your username -p, --password=VALUE If not using API keys, your password -m, --multiple Deregister all workers that use the same thumbprint --space=VALUE The space which this worker will be deregistered from, - e.g. 'Finance Department' where Finance Department is the name of an existing space; the default value is the Default space, if one is designated. Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example deregisters a worker from the Octopus Server: ``` tentacle deregister-worker --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" ``` This example deregisters the worker instance `MyNewInstance` from space `MyNewSpace`: ``` tentacle deregister-worker --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" --instance="MyNewInstance" --space="MyNewSpace" ``` # Extract Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/extract.md Extracts a NuGet package. 
**extract options** ``` Usage: tentacle extract [] Where [] is any of: --package=VALUE Package file --destination=VALUE Destination directory Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example extracts a package file to a destination directory: Windows: ``` tentacle extract --package="c:\temp\OctoFX.Web.1.0.20181.124538.nupkg" --destination="c:\temp\octofx" ``` Linux: ``` tentacle extract --package="/tmp/OctoFX.Web.1.0.20181.124538.nupkg" --destination="/tmp/octofx" ``` # Import certificate Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/import-certificate.md Replace the certificate that Tentacle uses to authenticate itself. **Import certificate options** ``` Usage: tentacle import-certificate [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use -r, --from-registry Import the Octopus Tentacle 1.x certificate from the Windows registry -f, --from-file=VALUE Import a certificate from the specified file generated by the new-certificate command or a Personal Information Exchange (PFX) file --pw, --pfx-password=VALUE Personal Information Exchange (PFX) private key password Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example imports a certificate from a .pfx file: :::div{.hint} This command will import the first certificate it finds. If the .pfx file contains the entire certificate chain, it will attempt to load the first one, which is often the certificate for the Certificate Authority, and will fail with an error that it is unable to load the private key. ::: Windows: ``` tentacle import-certificate --from-file="c:\temp\MyCertificate.pfx" --pfx-password="$uper$ecretP@ssw0rd!" ``` Linux: ``` tentacle import-certificate --from-file="/tmp/MyCertificate.pfx" --pfx-password="$uper$ecretP@ssw0rd!" 
``` # List all Octopus Tentacle instances Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/list-instances.md Lists all installed Octopus Tentacle instances. **List instances options** ``` Usage: tentacle list-instances [] Where [] is any of: --format=VALUE The format of the output (text,json). Defaults to text. Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example lists all Octopus Tentacle instances on the machine: ``` Tentacle list-instances ``` This example lists all Octopus Tentacle instances on the machine in JSON format: ``` Tentacle list-instances --format="JSON" ``` # New certificate Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/new-certificate.md Creates and installs a new certificate for this Tentacle. **New certificate options** ``` Usage: tentacle new-certificate [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use -b, --if-blank Generates a new certificate only if there is none -e, --export-file=VALUE DEPRECATED: Exports a new certificate to the specified file as unprotected base64 text, but does not save it to the Tentacle configuration; for use with the import-certificate command --export-pfx=VALUE Exports the new certificate to the specified file as a password protected pfx, but does not save it to the Tentacle configuration; for use with the import-certificate command --pfx-password=VALUE The password to use for the exported pfx file Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example creates and installs a new certificate for the default Tentacle instance: ``` tentacle new-certificate ``` This example creates, installs, and exports a new certificate for the instance `MyNewInstance`: Windows: ``` tentacle new-certificate --instance="MyNewInstance" --export-pfx="c:\temp\MyNewInstance.pfx" --pfx-password="$uper$ecretP@ssw0rd" ``` 
Linux: ``` tentacle new-certificate --instance="MyNewInstance" --export-pfx="/tmp/MyNewInstance.pfx" --pfx-password="$uper$ecretP@ssw0rd" ``` # Poll server Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/poll-server.md Configures an Octopus Server that this Tentacle will poll. **Poll server options** ``` Usage: tentacle poll-server [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --server=VALUE The Octopus Server - e.g., 'http://octopus' --apiKey=VALUE Your API key; you can get this from the Octopus web portal -u, --username, --user=VALUE If not using API keys, your username -p, --password=VALUE If not using API keys, your password --server-comms-address=VALUE The comms address on the Octopus Server; the address of the Octopus Server will be used if omitted. --server-comms-port=VALUE The comms port on the Octopus Server; the default is 10943. If specified, this will take precedence over any port number in server-comms- address. --server-web-socket=VALUE When using active communication over websockets, the address of the Octopus Server, eg 'wss://example.com/OctopusComms'. 
Refer to [http://g.octopushq.com/WebSocketComms](http://g.octopushq.com/WebSocketComms) Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example configures the Octopus Server that the polling Tentacle polls: ``` tentacle poll-server --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" ``` # Polling proxy Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/polling-proxy.md Configure the HTTP proxy used by Polling Tentacles to reach the Octopus Server **Polling proxy options** ``` Usage: tentacle polling-proxy [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --proxyEnable=VALUE Whether to use a proxy --proxyUsername=VALUE Username to use when authenticating with the proxy --proxyPassword=VALUE Password to use when authenticating with the proxy --proxyHost=VALUE The proxy host to use. Leave empty to use the default Internet Explorer proxy --proxyPort=VALUE The proxy port to use in conjunction with the Host set with proxyHost Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example configures the polling Tentacle to use the default Internet Explorer proxy: ``` tentacle polling-proxy --proxyHost="" --proxyEnable="true" ``` This example disables the proxy server for the polling Tentacle instance `MyNewInstance`: ``` tentacle polling-proxy --proxyEnable="false" --instance="MyNewInstance" ``` # Proxy Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/proxy.md Configure the HTTP proxy used by Octopus. 
**Proxy options** ``` Usage: tentacle proxy [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --proxyEnable=VALUE Whether to use a proxy --proxyUsername=VALUE Username to use when authenticating with the proxy --proxyPassword=VALUE Password to use when authenticating with the proxy --proxyHost=VALUE The proxy host to use. Leave empty to use the default Internet Explorer proxy --proxyPort=VALUE The proxy port to use in conjunction with the Host set with proxyHost Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example configures the proxy server for a listening Tentacle: ``` tentacle proxy --proxyHost="" --proxyEnable="true" ``` This example disables the proxy server for the instance `MyNewInstance`: ``` tentacle proxy --proxyEnable="false" --instance="MyNewInstance" ``` # Register with Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/register-with.md Registers this machine as a deployment target with an Octopus Server. 
**Register with options** ``` Usage: tentacle register-with [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --server=VALUE The Octopus Server - e.g., 'http://octopus' --apiKey=VALUE Your API key; you can get this from the Octopus web portal -u, --username, --user=VALUE If not using API keys, your username -p, --password=VALUE If not using API keys, your password --name=VALUE Name of the machine when registered; the default is the hostname --policy=VALUE The name of a machine policy that applies to this machine -h, --publicHostName=VALUE An Octopus-accessible DNS name/IP address for this machine; the default is the hostname -f, --force Allow overwriting of existing machines --comms-style=VALUE The communication style to use - either TentacleActive or TentaclePassive; the default is TentaclePassive --proxy=VALUE When using passive communication, the name of a proxy that Octopus should connect to the Tentacle through - e.g., 'Proxy ABC' where the proxy name is already configured in Octopus; the default is to connect to the machine directly --space=VALUE The name of the space within which this command will be executed. E.g. 'Finance Department' where Finance Department is the name of an existing space. The default space will be used if omitted. --server-comms-port=VALUE When using active communication, the comms port on the Octopus Server; the default is 10943. If specified, this will take precedence over any port number in server-comms-address. --server-comms-address=VALUE When using active communication, the comms address on the Octopus Server; the address of the Octopus Server will be used if omitted. --server-web-socket=VALUE When using active communication over websockets, the address of the Octopus Server, eg 'wss://example.com/OctopusComms'. 
Refer to [http://g.octopushq.com/WebSocketComms](http://g.octopushq.com/WebSocketComms) --tentacle-comms-port=VALUE When using passive communication, the comms port that the Octopus Server is instructed to call back on to reach this machine; defaults to the configured listening port --env, --environment=VALUE The environment name to add the machine to - e.g., 'Production'; specify this argument multiple times to add multiple environments -r, --role=VALUE The machine role that the machine will assume - e.g., 'web-server'; specify this argument multiple times to add multiple roles --tenant=VALUE A tenant who the machine will be connected to; specify this argument multiple times to add multiple tenants --tenanttag=VALUE A tenant tag which the machine will be tagged with - e.g., 'CustomerType/VIP'; specify this argument multiple times to add multiple tenant tags --tenanted-deployment-participation=VALUE How the machine should participate in tenanted deployments. Allowed values are Untenanted, TenantedOrUntenanted and Tenanted. Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example registers a listening Tentacle to the Octopus Server with the `Development` environment and `OctoFX-Web` role: ``` tentacle register-with --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" --environment="Development" --role="OctoFX-Web" ``` This example registers a polling Tentacle with the `Development` environment and `OctoFX-Web` role in the `OctoFX` space: ``` tentacle register-with --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" --environment="Development" --role="OctoFX-Web" --space="OctoFX" --comms-style="TentacleActive" ``` # Register Worker Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/register-worker.md Registers this machine as a Worker with an Octopus Server. 
**Register with options** ``` Usage: tentacle register-worker [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --server=VALUE The Octopus Server - e.g., 'http://octopus' --apiKey=VALUE Your API key; you can get this from the Octopus web portal -u, --username, --user=VALUE If not using API keys, your username -p, --password=VALUE If not using API keys, your password --name=VALUE Name of the machine when registered; the default is the hostname --policy=VALUE The name of a machine policy that applies to this machine -h, --publicHostName=VALUE An Octopus-accessible DNS name/IP address for this machine; the default is the hostname -f, --force Allow overwriting of existing machines --comms-style=VALUE The communication style to use - either TentacleActive or TentaclePassive; the default is TentaclePassive --proxy=VALUE When using passive communication, the name of a proxy that Octopus should connect to the Tentacle through - e.g., 'Proxy ABC' where the proxy name is already configured in Octopus; the default is to connect to the machine directly --space=VALUE The name of the space within which this command will be executed. E.g. 'Finance Department' where Finance Department is the name of an existing space. The default space will be used if omitted. --server-comms-port=VALUE When using active communication, the comms port on the Octopus Server; the default is 10943. If specified, this will take precedence over any port number in server-comms-address. --server-comms-address=VALUE When using active communication, the comms address on the Octopus Server; the address of the Octopus Server will be used if omitted. --server-web-socket=VALUE When using active communication over websockets, the address of the Octopus Server, eg 'wss://example.com/OctopusComms'. 
Refer to [http://g.octopushq.com/WebSocketComms](http://g.octopushq.com/WebSocketComms) --tentacle-comms-port=VALUE When using passive communication, the comms port that the Octopus Server is instructed to call back on to reach this machine; defaults to the configured listening port --workerpool=VALUE The worker pool name to add the machine to - e.g., 'Windows Pool'; specify this argument multiple times to add to multiple pools Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example registers a listening Tentacle to the worker pool `MyWorkerPool`: ``` tentacle register-worker --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" --workerpool="MyWorkerPool" ``` This example registers a polling Tentacle to the worker pool `MyWorkerPool` in the space `MyNewSpace`: ``` tentacle register-worker --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" --workerpool="MyWorkerPool" --space="MyNewSpace" --comms-style="TentacleActive" ``` # Server comms Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/server-comms.md Configure how the Tentacle communicates with an Octopus Server. **Server communication options** ``` Usage: tentacle server-comms [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --thumbprint=VALUE The thumbprint of the Octopus Server to configure communication with; if only one Octopus Server is configured, this may be omitted --style=VALUE The communication style to use with the Octopus Server - either TentacleActive or TentaclePassive --host=VALUE When using active communication, the host name of the Octopus Server --port=VALUE When using active communication, the communications port of the Octopus Server; the default is 10943 --web-socket=VALUE When using active communication over websockets, the address of the Octopus Server, eg 'wss://example.com/OctopusComms'.
Refer to [http://g.octopushq.com/WebSocketComms](http://g.octopushq.com/WebSocketComms) Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example configures the Tentacle to communicate with the Octopus Server in listening mode: ``` tentacle server-comms --style="TentaclePassive" --thumbprint="3FBFB8E1EE6B1133701190306E2CBBFB39C30C8D" ``` This example configures the Tentacle instance `MyNewInstance` to communicate with the Octopus Server in polling mode: ``` tentacle server-comms --style="TentacleActive" --instance="MyNewInstance" --thumbprint="3FBFB8E1EE6B1133701190306E2CBBFB39C30C8D" --host="https://your-octopus-url" ``` # Start, stop, install, and configure the Tentacle service Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/service.md **Service options** ``` Usage: tentacle service [] Where [] is any of: --start Start the service if it is not already running --stop Stop the service if it is running --restart Restart the service if it is running --reconfigure Reconfigure the service --install Install the service --username, --user=VALUE Username to run the service under (DOMAIN\Username format for Windows). Only used when --install or --reconfigure are used. Can also be passed via an environment variable OCTOPUS_SERVICE_USERNAME. Defaults to 'root' for Systemd services. --uninstall Uninstall the service --password=VALUE Password for the username specified with --username. Only used when --install or --reconfigure are used. Can also be passed via an environment variable OCTOPUS_SERVICE_PASSWORD.
--dependOn=VALUE --instance=VALUE Name of the instance to use, or * to use all instances Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example stops the default Tentacle service: ``` tentacle service --stop ``` This example restarts the Tentacle service for instance `MyNewInstance`: ``` tentacle service --restart --instance="MyNewInstance" ``` This example uninstalls the Tentacle service for instance `MyNewInstance`: ``` tentacle service --uninstall --instance="MyNewInstance" ``` # Show configuration Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/show-configuration.md Use the show-configuration command to output the Tentacle configuration as JSON. If you pass credentials for the relevant Octopus Server, the server-side configuration is returned as well. For Tentacles, the server-side configuration includes roles, environments, machine policy, and display name. For Workers, it includes the associated worker pools, machine policy, and display name. **Show configuration options** ``` Usage: tentacle show-configuration [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --file=VALUE Exports the server configuration to a file. If not specified, output goes to the console --space=VALUE The space from which the server configuration will be retrieved - e.g. 'Finance Department' where Finance Department is the name of an existing space; the default value is the Default space, if one is designated.
--server=VALUE The Octopus Server - e.g., 'http://octopus' --apiKey=VALUE Your API key; you can get this from the Octopus web portal -u, --username, --user=VALUE If not using API keys, your username -p, --password=VALUE If not using API keys, your password Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example displays the configuration of the Tentacle (or Worker) on the machine in JSON format: ``` tentacle show-configuration ``` This example displays the configuration of the Tentacle (or Worker) on the machine, as well as the configuration from the Octopus Server in JSON format: ``` tentacle show-configuration --server="https://your-octopus-url" --apiKey="API-YOUR-KEY" ``` # Show thumbprint of the Octopus Tentacle's certificate Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/show-thumbprint.md Show the thumbprint of the Tentacle's certificate. **New certificate options** ``` Usage: tentacle show-thumbprint [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use -e, --export-file=VALUE Exports the Tentacle thumbprint to a file --thumbprint-only DEPRECATED: Only print out the thumbprint, with no additional text. This switch has been deprecated and will be removed in Octopus 4.0 since it is no longer needed. --format=VALUE The format of the output (text,json). Defaults to text. 
Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example displays the Tentacle thumbprint in the default text format: ``` tentacle show-thumbprint ``` This example displays the Tentacle thumbprint for the instance `MyNewInstance` in JSON format: ``` tentacle show-thumbprint --instance="MyNewInstance" --format="JSON" ``` This example exports the Tentacle thumbprint to a file: Windows: ``` tentacle show-thumbprint --export-file="c:\temp\thumbprint.txt" ``` Linux: ``` tentacle show-thumbprint --export-file="/tmp/thumbprint.txt" ``` # Update trust Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/update-trust.md Replaces the trusted Octopus Server thumbprint of any matching polling or listening registrations with a new thumbprint to trust. **update-trust options** ``` Usage: tentacle update-trust [] Where [] is any of: --instance=VALUE Name of the instance to use --config=VALUE Configuration file to use --oldThumbprint=VALUE The thumbprint of the old Octopus Server to be replaced --newThumbprint=VALUE The thumbprint of the new Octopus Server Or one of the common options: --help Show detailed help for this command ``` ## Basic example This example replaces the trusted thumbprint value `3FAFA8E1EE6A1133701190306E2CBAFA39C30C8D` with the new value `5FAEA8E1EE6A4535701190536E2CBAFA39C30C8F` for any matching instances: ``` Tentacle update-trust --oldThumbprint="3FAFA8E1EE6A1133701190306E2CBAFA39C30C8D" --newThumbprint="5FAEA8E1EE6A4535701190536E2CBAFA39C30C8F" ``` ## Automated update of trust This example will query the Octopus Server endpoint and pull the certificate. If the endpoint's certificate thumbprint is different than the Tentacle it will find the matching Tentacles installed and update them. Recommend setting up a scheduled task to run every 20-30 minutes to check the certificate thumbprint of the server when you are in the process of updating your certificate. 
```powershell $octopusURL = "https://samples.octopus.app:10943" #Replace 10943 with 443 for polling Tentacles over websockets $tentacleExe = "C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" $logLocation = "C:\OctopusScripts" $logFile = "$logLocation\UpdatePollingCert_Log.Txt" $uri = New-Object System.Uri($octopusURL) if (-Not ($uri.Scheme -eq "https")) { Write-Error "You can only get keys for https addresses" exit 1 } if ((Test-Path $logLocation) -eq $false) { New-Item $logLocation -ItemType Directory } if ((Test-Path $logFile) -eq $false) { New-Item $logFile -ItemType File } function Write-ToLog { param ( $message ) $currentDate = (Get-Date).ToString("HH:mm:ss yyyy/MM/dd") Write-Host "$currentDate $message" Add-Content -value "$currentDate $message" -Path $logFile } $request = [System.Net.HttpWebRequest]::Create($uri) try { #Make the request but ignore (dispose it) the response, since we only care about the service point $request.GetResponse().Dispose() } catch [System.Net.WebException] { if ($_.Exception.Status -eq [System.Net.WebExceptionStatus]::TrustFailure) { #We ignore trust failures, since we only want the certificate, and the service point is still populated at this point } else { Write-ToLog $_.Exception.Message throw } } $servicePoint = $request.ServicePoint $certificate = $servicePoint.Certificate if ($null -eq $certificate) { Write-ToLog "Unable to pull the certificate for $uri" exit 1 } $certInfo = New-Object system.security.cryptography.x509certificates.x509certificate2($certificate) $thumbParts = $certInfo.Thumbprint.ToCharArray() $thumbParts2 = New-Object System.Collections.ArrayList for ($i = 0; $i -lt $thumbParts.Length; $i = $i+2) { [Void]$thumbParts2.Add([string]$thumbParts[$i]+$thumbParts[$i+1]) } $certThumbprint = ([String]::Join(':',$thumbParts2.ToArray([string]))) -replace ":", "" Write-ToLog "The certificate for $OctopusUrl is $certThumbprint" $instanceList = (& $tentacleExe list-instances --format="JSON") | Out-String | 
ConvertFrom-Json Write-ToLog "Found $($instanceList.length) Tentacle instances" foreach ($instance in $instanceList) { $instanceConfig = (& $tentacleExe show-configuration --instance="$($instance.InstanceName)") | Out-String | ConvertFrom-Json # This should come back as an array, but if there is only one trusted server it won't, force it to be an array. $trustedServers = @($instanceConfig.Tentacle.Communication.TrustedOctopusServers) foreach ($server in $trustedServers) { $currentThumbprint = $server.Thumbprint if ([string]::IsNullOrWhiteSpace($server.Address)) { Write-ToLog "The current server is not a polling Tentacle, moving onto next one." continue } if ($server.Address -notlike "$octopusURL*") { Write-ToLog "The server $($server.Address) does not match $octopusUrl, moving onto next server" continue } if ([string]::IsNullOrWhiteSpace($currentThumbprint)) { Write-ToLog "The server $($server.Address) does not trust anything, adding in the trust." & $tentacleExe service --instance="$($instance.InstanceName)" --stop & $tentacleExe update-trust --oldThumbprint $currentThumbprint --newThumbprint $certThumbprint --instance="$($instance.InstanceName)" & $tentacleExe service --instance="$($instance.InstanceName)" --start } elseif ($currentThumbprint -ne $certThumbprint) { Write-ToLog "The thumbprint has changed from $currentThumbprint to $certThumbprint, updating the Tentacle $($instance.InstanceName)" & $tentacleExe service --instance="$($instance.InstanceName)" --stop & $tentacleExe update-trust --oldThumbprint $currentThumbprint --newThumbprint $certThumbprint --instance="$($instance.InstanceName)" & $tentacleExe service --instance="$($instance.InstanceName)" --start } else { Write-ToLog "The thumbprint for the Tentacle $($instance.InstanceName) is still $certThumbprint" } } } ``` # version Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/version.md Show the Tentacle version information. 
**version options** ``` Usage: tentacle version [] Where [] is any of: --format=VALUE The format of the output (text,json). Defaults to text. Or one of the common options: --help Show detailed help for this command ``` ## Basic examples This example displays the Tentacle version in the default text format: ``` tentacle version ``` This example displays the Tentacle version in JSON format: ``` tentacle version --format="json" ``` # watchdog Source: https://octopus.com/docs/octopus-rest-api/tentacle.exe-command-line/watchdog.md Configure a scheduled task to monitor the Tentacle service(s). **watchdog options** ``` Usage: tentacle watchdog [] Where [] is any of: --create Create the watchdog task for the given instances --delete Delete the watchdog task for the given instances --interval=VALUE The interval, in minutes, at which the service(s) should be checked (default: 5) --instances=VALUE Comma separated list of instances to be checked, or * to check all instances (default: *) Or one of the common options: --help Show detailed help for this command ``` ## Basic examples :::div{.warning} **Windows only** These examples apply to Tentacles installed on Windows only. ::: This example creates the watchdog scheduled task for all instances: ``` tentacle watchdog --create --instances=* ``` This example creates the watchdog scheduled task for instances `default` and `MyNewInstance`: ``` tentacle watchdog --create --instances="Default,MyNewInstance" ``` This example deletes all watchdog scheduled tasks: ``` tentacle watchdog --delete ``` # Build versions and packaging in Azure DevOps Source: https://octopus.com/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/build-versions-in-team-build.md Correctly versioning the packages you deploy with Octopus Deploy is important so the right version gets deployed at the right time. With Azure DevOps, specifying a package version isn't always straightforward.
This guide shows you how best to version your builds and packages in Azure DevOps, when using the recommended [Archive Files](http://go.microsoft.com/fwlink/?LinkId=809083) task. :::div{.hint} Microsoft has renamed Visual Studio Team Foundation Server (TFS) to Azure DevOps Server with the introduction of Azure DevOps Server 2019. The guidance provided in this document applies to supported versions of TFS. For more information about our support for TFS, see [Azure DevOps and TFS Extension Version Compatibility](/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/extension-compatibility). ::: ## Build numbers in Azure DevOps In Azure DevOps, build numbers may be in a format that doesn't represent a valid SemVer number. For example, Microsoft's [Build Number format documentation](https://www.visualstudio.com/en-gb/docs/build/define/general#build-number-format) gives an example: `$(TeamProject)_$(BuildDefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r)` will result in a version number like `Fabrikam_CIBuild_main_20090805.2`. While this is a valid Azure DevOps build number, it can cause issues when trying to pack the build output into a NuGet package, a ZIP archive or tarball to be consumed by Octopus Server. ## SemVer Packages used by Octopus must conform to [SemVer 1.0 or 2.0](/docs/packaging-applications/create-packages/versioning) depending on the version of Octopus you're using. The link above explains versioning in detail, but in its simplest form it means two things: 1. Numbered versions in 3 or 4 segments that can be interpreted as `major.minor.patch`, with an optional "prerelease tag" afterwards in the form `-tag`. 2. Versions can be sorted predictably. For example, `1.2.3` is newer than `1.2.0`. As you can see, a package version of `Fabrikam_CIBuild_main_20090805.2` isn't valid SemVer and will cause issues!
### Setting a SemVer-compliant build number The recommendations below generally rely on the build number itself being SemVer compliant. To do this, you can change the build number format. Our recommended build number format is: `x.y.$(BuildID)` where `x` and `y` are integers. You can change them when you want to bump a version. This format will produce a three-part version number like `1.2.350`. If you have a build for a separate branch, it's a good idea to add a prerelease tag. For example: `x.y.$(BuildID)-feature-1` will produce a version number like `1.2.350-feature-1`. Even better, you can use the `$(Build.SourceBranchName)` variable to set it to the branch name. Please refer to [this documentation](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number) on how to set the build number in your pipeline. :::div{.warning} The only downside of this numbering format is the `$(BuildID)` variable _always_ increases, and does so at the Project Collection level. That means it doesn't reset when you increment your major and minor versions, and if you have multiple builds in your Collection, numbers will be skipped. ::: :::div{.hint} Other extensions such as [gitversion](https://github.com/GitTools/GitVersion) can also be used to easily get SemVer compliant build numbers. ::: ## Packaging in Azure DevOps As mentioned above, the recommended approach to package your application is to use the [Archive Files](http://go.microsoft.com/fwlink/?LinkId=809083) task. ### Versioning This task does not provide you with a default version number - this is something you have to set yourself as part of the naming of the output file. Clearly it wouldn't be feasible to change this value every time you do a build, so we recommend you make use of the build variables that Azure DevOps provides.
Build and Release variables can be found in [the Microsoft Documentation](https://www.visualstudio.com/en-us/docs/build/define/variables), but the more useful ones include: - `$(Build.BuildNumber)` - This is the full build number (see [above](#setting-a-semver-compliant-build-number) for setting an appropriate format). - `$(Build.BuildID)` - This is a unique, incrementing ID at the Project collection level. Every new build will give you a new number. - `$(Build.SourceBranchName)` - this is the last path segment in the name of the branch. For example, a branch of `refs/heads/main` will return `main`. ### Recommendation There are two options we recommend for specifying a version number. 1. Use a combination of static values and variables directly. For example, `1.0.$(Build.BuildID)` will always give you a new version number. 2. Use the full build number, and format that number appropriately as [described above](#setting-a-semver-compliant-build-number). We recommend the second option for a few reasons. First, it's very easy to match the build to the package because they'll have the same number, and secondly, if you have multiple pack steps, you only need to change a single version number. 
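Putting the recommendations above together, here is a minimal sketch of a YAML pipeline that sets a SemVer-compliant build number and reuses it when packaging with the Archive Files task. The project name (`OctoFX.Web`), folder layout, and the `1.2` prefix are illustrative assumptions, not part of the official guidance:

```yaml
# Sets the run (build) number to a SemVer-compliant value, e.g. 1.2.350
name: 1.2.$(BuildID)

steps:
  - task: ArchiveFiles@2
    displayName: "Package OctoFX.Web"
    inputs:
      # Illustrative path to the published application output
      rootFolderOrFile: "$(Build.ArtifactStagingDirectory)/OctoFX.Web"
      includeRootFolder: false
      archiveType: "zip"
      # The package file name carries the version Octopus will use
      archiveFile: "$(Build.ArtifactStagingDirectory)/OctoFX.Web.$(Build.BuildNumber).zip"
```

Because the package version and `$(Build.BuildNumber)` are the same value, matching a deployed package back to the build that produced it is straightforward.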
# Azure DevOps and Team Foundation Server Extension Version Compatibility Source: https://octopus.com/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/extension-compatibility.md ## Octopus extension versions There have been four major versions of the Octopus Extension: - [**Version 1.2.x**](https://s3-eu-west-1.amazonaws.com/octopus-downloads/tfs-2015-extension/octopusdeploy.octopus-deploy-build-release-tasks-1.2.28.vsix) - obsolete, but still usable for older versions of TFS and Azure DevOps - [**Version 2.0.199**](https://s3-eu-west-1.amazonaws.com/octopus-downloads/tfs-2015-extension/octopusdeploy.octopus-deploy-build-release-tasks-2.0.199.vsix) - for TFS 2015 Update 2, TFS 2015 Update 3, TFS 2015 Update 4, and TFS 2017 RTM - [**Version 3**](https://octopus-downloads.s3-eu-west-1.amazonaws.com/tfs-2015-extension/octopusdeploy.octopus-deploy-build-release-tasks-3.0.222.vsix) - download for TFS 2017 Update 1 - [**Current Version**](https://marketplace.visualstudio.com/items?itemName=octopusdeploy.octopus-deploy-build-release-tasks) - the current, most recent version of the extension, for Azure DevOps ## Extension compatibility with Azure DevOps/Team Foundation Server The following table shows compatibility between versions of Azure DevOps, TFS, and the Octopus extension | Azure DevOps/TFS Version / Extension Version | 1.2.x | 2.0.199 | 3 | 4 and 5 | 6 | | -------------------------------------------- |:---------:|:-------------:|:-------------:|:-------------:|:-------------:| | Azure DevOps | Supported | Not supported | Supported | Supported | Supported | | TFS 2017 Update 3 | Supported | Supported | Supported | Supported | Not supported | | TFS 2017 Update 2 | Supported | Supported | Supported | Supported | Not supported | | TFS 2017 Update 1 | Supported | Supported | Supported | Not supported | Not supported | | TFS 2017 RTM | Supported | Supported | Not supported | Not supported | Not supported | | TFS 2015 Updates 2,3,4 | 
Supported | Supported | Not supported | Not supported | Not supported | TFS 2017 Update 1 is technically supported with version 2.0.199 of the extension, but we do not recommend it. Any version older than TFS 2015 Update 2 is not supported by any extension version. ### Build information compatibility {#build-information-compatibility} When passing [build information](/docs/packaging-applications/build-servers/build-information) to Octopus from Azure DevOps, you may encounter issues when trying to use the build link generated by the Azure DevOps extension. Specifically, the build link may return a `404 (Not Found)` error when viewed. The issue is believed to be caused by a change to the build URL format supported by Azure DevOps. The Build information step in the Octopus Azure DevOps extension expects the build to be viewed using a URL like this: `https://my-tfs-server-address/tfs/Projects/MyProject/_build/results?buildId=`. However, affected TFS versions expect the build to be viewed using a different URL like this: `https://my-tfs-server-address/tfs/Projects/MyProject/_build/index?buildId=`. Since the Build information step was created after this change, the only workarounds are to either create a URL re-write rule in TFS to display the build using the new URL format, or to update the TFS version. # Installing the Octopus CLI as a capability Source: https://octopus.com/docs/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/install-octopus-cli-capability.md Tasks in the Octopus extension use the [Octopus CLI](/docs/octopus-rest-api/octopus-cli) to execute commands with an instance of Octopus. As a result, the Octopus CLI is required to be installed and available on an agent before subsequent tasks run. There are two ways to fulfill this requirement: 1. Use the tool installer task, **Octopus CLI Installer**, as part of a build pipeline definition 2.
Install the Octopus CLI into a [self-hosted agent](https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents#install) Using the tool installer task **Octopus CLI Installer** in a build pipeline definition is suitable for installing the Octopus CLI just in time for a build. This is required for builds executed on [Microsoft-hosted agents](https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted), which do not offer the ability to preload custom software. Alternatively, the Octopus CLI may be installed on a self-hosted agent and expressed as a capability. Once configured, a pipeline may express demands of agents to ensure that the Octopus CLI is available when executing builds. ## Using the Octopus CLI Installer The **Octopus CLI Installer** task downloads and installs the Octopus CLI, making it available to other tasks in a build pipeline definition. It can be added to a definition through the Classic editor of Azure Pipelines or through the YAML pipeline editor. Currently, the Octopus extension ships two versions of the **Octopus CLI Installer** task; version 4 is provided for backward compatibility with older pipeline definitions while version 5 is recommended because it offers additional features. ### Octopus CLI Installer v4 In the Classic editor, version 4 of the **Octopus CLI Installer** task has a required field, `Octopus CLI Version` that is used to specify the version of the Octopus CLI to be installed: :::figure ![Octopus CLI Installer v4 in Azure Pipelines](/docs/img/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/images/octopus-cli-installer-v4.png) ::: The accepted values for this field are: - `embedded`: use the built-in version of the Octopus CLI - `latest`: downloads and installs the latest version of the Octopus CLI - A specific version number of the Octopus CLI to use e.g. 
`7.4.3556` :::div{.hint} **Wildcards not supported** Please note: Wildcard values are **NOT** supported when providing a specific version of the Octopus CLI to use. ::: The **Octopus CLI Installer** task may be used in a YAML-based build pipeline. Using the YAML pipeline editor, the following snippet will download and install the latest version of the Octopus CLI: ```yaml - task: OctoInstaller@4 displayName: "Octopus CLI Installer" inputs: version: "latest" ``` ### Octopus CLI Installer v5 In the Classic editor, version 5 of the **Octopus CLI Installer** task has a required field, `Octopus CLI Version` that is used to specify the version of the Octopus CLI to be installed: :::figure ![Octopus CLI Installer v5 in Azure Pipelines](/docs/img/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/images/octopus-cli-installer-v5.png) ::: This field accepts a limited set of values, specified as `MAJOR.MINOR.PATCH` with wildcard support that adheres to [Semantic Versioning](https://semver.org/) rules. For example: - `8.*`: install the latest minor version for v8 of the Octopus CLI - `7.3.*`: install the latest patch version for v7.3 of the Octopus CLI - `9.0.0`: install the exact version 9.0.0 of the Octopus CLI - `*`: install the latest version of the Octopus CLI :::div{.hint} **Range operators not supported** Please note: Range and range operators e.g. `~1.2.3` are not supported. ::: The **Octopus CLI Installer** task may be used in a YAML-based build pipeline. Using the YAML pipeline editor, the following snippet will download and install the latest version of the Octopus CLI: ```yaml - task: OctoInstaller@5 displayName: "Octopus CLI Installer" inputs: version: "*" ``` ### Octopus CLI Installer v6 :::div{.warning} Version 6+ of each of the steps no longer requires installing the CLI ::: Version 6 of the Octopus CLI Installer will only install the new [Octopus CLI](https://github.com/OctopusDeploy/cli).
## Using the Octopus CLI with Self-Hosted Agents Self-hosted agents provide the ability to install tools that are required for builds and deployments. They can also improve build performance since their associated configuration is persisted between runs. Self-hosted agents are available for Linux, macOS, or Windows. They may also be used in a Docker container. For more information about installing a self-hosted agent, see: - [macOS agent](https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-osx) - [Linux agent](https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-linux) (x64, ARM, ARM64, RHEL6) - [Windows agent](https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows) (x64, x86) - [Docker agent](https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/docker) A self-hosted agent must be configured to include the Octopus CLI before using it in a pipeline. Binaries and/or packages for the Octopus CLI can be downloaded from the [Octopus CLI downloads](https://github.com/OctopusDeploy/OctopusCLI/releases) page. :::div{.warning} **Breaking Change in Version 5** Tasks in version 5 of the Octopus extension now assert [demands](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands) for agent capabilities. These tasks now require that self-hosted agents expose the user-defined capability `octo` along with the version of the Octopus CLI installed on the agent (i.e. `8.0.1`). ::: These task demands were introduced and mandated in version 5 to ensure the availability of the Octopus CLI. 
:::figure ![Self-Hosted Agent User Capability](/docs/img/packaging-applications/build-servers/tfs-azure-devops/using-octopus-extension/images/self-hosted-agent-user-capability.png) ::: If the user-defined capability described above is not defined for self-hosted agents, then jobs will fail with the following error: ``` No agent found in pool [POOL-NAME] which satisfies demands: octo ``` Please note that tasks in version 4 (and below) of the Octopus extension do not assert demands for agent capabilities. Therefore, it is not required to specify agent capabilities for these tasks. # Deploy a release step in Octopus Source: https://octopus.com/docs/projects/coordinating-multiple-projects/deploy-release-step.md The _Deploy a Release_ step lets you have a project trigger the deployment of a release of another project. This is useful when you are [coordinating multiple projects](/docs/projects/coordinating-multiple-projects). :::figure ![Deploy release step card](/docs/img/projects/coordinating-multiple-projects/deploy-release-step/deploy-release-card.png) ::: When you add a _Deploy a Release_ step to your deployment process, you can then select the project which will be deployed. :::figure ![Deploy release select project](/docs/img/projects/coordinating-multiple-projects/deploy-release-step/deploy-release-step-select-project.png) ::: You can add many _Deploy a Release_ steps to your process if you wish to deploy releases of many projects. ## Creating a release When creating a release of a project containing _Deploy a Release_ steps, you can select the release version of each project, similar to the way versions of packages are selected: :::figure ![Create release with deploy release steps](/docs/img/projects/coordinating-multiple-projects/deploy-release-step/deploy-release-create-release-screen.png) ::: :::div{.hint} By default, Octopus will select the *latest* release based on the creation time of the release, and **not** the Semantic version.
This means Octopus might initially select a release that has a lower version than the latest for a Project. ::: ### Channels The [Channel](/docs/releases/channels) used for any _Deploy a Release_ step is automatically determined by the release version of the project you select in the create release screen, since a channel is chosen when a release is created. :::div{.hint} It's possible to choose child releases from specific channels when using the _Deploy a Release_ step using [package version rules](/docs/releases/channels/#version-rules). Watch our [Ask Octopus Episode: Deployment Channels with Child Projects](https://www.youtube.com/watch?v=3oLVq1EpUfc) to see it in action. ::: ## Conditional deployment A _Deploy a Release_ step can be configured to: - Deploy Always (default). - Deploy if the selected release is not the current release in the environment. - Deploy if the selected release has a higher version than the current release in the environment. ## Variables Variables can be passed to the deployment triggered by the _Deploy a Release_ step. These will be made available to steps within the child deployment's process, just like regular [project variables](/docs/projects/variables). Variables passed in will override existing variables in the child project if the names collide. :::figure ![Deploy release variables](/docs/img/projects/coordinating-multiple-projects/deploy-release-step/deploy-release-step-variables.png) ::: ### Output variables You may wish to capture information from a deployment triggered by a _Deploy a Release_ step, either to be used in the parent process or to be passed to another deployment via another _Deploy a Release_ step. Any [output variables](/docs/projects/variables/output-variables) generated by a deployment will be captured as output variables on the _Deploy a Release_ step which triggered the deployment. These can then be used by subsequent steps in the process.
These output variables are captured as variables with the following name pattern: ``` Octopus.Action[Deploy Release Step Name].Output.Deployment[Child Step Name].VariableName ``` and for [machine-specific output variables](/docs/projects/variables/output-variables/#multiple-target-output): ``` Octopus.Action[Deploy Release Step Name].Output.Deployment[Child Step Name][Machine Name].VariableName ``` Where: *Deploy Release Step Name:* The name of the _Deploy a Release_ step in the parent process. *Child Step Name:* The name of the step in the child deployment process which set the output variable. *VariableName:* The original name of the output variable. e.g. for `Set-OctopusVariable -Name "Foo" -Value "Bar"` this would be `Foo`. *Machine Name:* The machine the child process was targeting when the output variable was set. :::div{.hint} For example, suppose you have a project _Project Voltron_ which contains a _Deploy a Release_ step named _Deploy Red Lion_ which triggers a deployment of another project _Project Red Lion_. _Project Red Lion_ contains a step _Echo Paladin_ which sets an output variable. e.g. ``` Set-OctopusVariable -Name "Paladin" -Value "Lance" ``` This variable will be available in subsequent steps of the _Project Voltron_ process via the variable `Octopus.Action[Deploy Red Lion].Output.Deployment[Echo Paladin].Paladin`. ::: ## Lifecycles The Lifecycles of projects being deployed by a _Deploy a Release_ step must be compatible with the coordinating project. For example, suppose you have two projects, `Project A` and `Project B`, which are referenced by _Deploy a Release_ steps in another project, `Project Alphabet`. When deploying `Project Alphabet` to the `Test` environment, the release versions chosen for `Project A` and `Project B` must also be eligible to be deployed to the `Test` environment according to the lifecycles of those projects.
## Multi-tenant deployments When a [tenanted](/docs/tenants) project is being deployed by a _Deploy a Release_ step, then the parent project should also be created as tenanted. When triggering a tenanted deployment of the parent project, the tenant will be used to trigger the child deployment. If the child project is untenanted, and the parent project is deployed with a tenant selected, then the untenanted child project will simply be deployed, ignoring the tenant. ### Deploying a combination of tenanted and untenanted projects A project can contain multiple _Deploy a Release_ steps which deploy a combination of tenanted and untenanted projects. There are a number of approaches which can be used to control which _Deploy a Release_ steps will be executed. - Scope the _Deploy a Release_ step to one or more tenants. This is useful if the child project should only be deployed for particular tenants. - If the child project is untenanted, and should only be deployed _once_ for all tenants, then the [Deployment Conditions](#conditional-deployment) can be used to specify that it should only be deployed if the version does not match. This will prevent it from being deployed multiple times if multiple tenanted deployments of the parent project are created. ## Rolling deployments _Deploy a Release_ steps may be added as child steps, to be used in a [rolling deployment](/docs/deployments/patterns/rolling-deployments-with-octopus). When executing a rolling deployment containing a _Deploy a Release_ step, child deployments will be created per deployment target, as each target is rolled over. For example, if the rolling step specifies a target role which matches 10 deployment targets, then 10 child deployments will be created. :::div{.hint} When configuring a _Deploy a Release_ step as a child step in a rolling deployment, the [deployment condition](#conditional-deployment) should be set to `Deploy Always`.
Otherwise, as the step rolls across multiple machines, it will see the current release as having already been deployed to the environment, and execution will be skipped. ::: ## Canceling a deployment Canceling the deployment of the parent project as it's executing the Deploy a Release step won't cancel the deployment of the child project. The child deployment will continue to completion. # Troubleshooting Schannel and TLS Source: https://octopus.com/docs/security/octopus-tentacle-communication/troubleshooting-schannel-and-tls.md Octopus Server and Tentacle establish secure communication using **TLS with mutual RSA certificate authentication**. TLS configuration is handled by the underlying operating system - **Schannel** on Windows and **OpenSSL** on Linux. This guide helps you diagnose and resolve TLS handshake failures between Octopus Server and Tentacle agents. :::div{.hint} Before troubleshooting, ensure both systems meet the [minimum TLS requirements](/docs/security/octopus-tentacle-communication/minimum-tls-requirements) for protocol versions, cipher suites, and signature algorithms. ::: ## Common Symptoms TLS communication failures typically manifest as: - Connection timeouts or handshake failures in Octopus Server logs and deployment task logs - Tentacle health checks failing with TLS-related errors - Event log errors on Windows mentioning Schannel or certificate validation - OpenSSL handshake errors on Linux systems ## Quick Diagnostic Steps ### 1. Verify Protocol Support Ensure both the Octopus Server and Tentacle support at least **TLS 1.2**. TLS 1.3 is recommended but optional. **Windows (GUI):** [IISCrypto from Nartac Software](https://www.nartac.com/Products/IISCrypto) can be used to easily view and change Schannel settings on Windows. Changing the Schannel configuration always requires a machine restart. 
**Windows (PowerShell):** ```powershell # Check TLS 1.2 registry settings Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name Enabled -ErrorAction SilentlyContinue Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name Enabled -ErrorAction SilentlyContinue # Check TLS 1.3 registry settings (Windows Server 2022+ / Windows 11+) Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.3\Client' -Name Enabled -ErrorAction SilentlyContinue Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.3\Server' -Name Enabled -ErrorAction SilentlyContinue ``` **Linux:** ```bash # Check OpenSSL TLS support openssl s_client -connect <octopus-server>:10943 -tls1_2 openssl s_client -connect <octopus-server>:10943 -tls1_3 ``` ### 2. Verify Cipher Suite Compatibility The following cipher suites must be enabled on both systems: - `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256` - `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384` - `TLS_CHACHA20_POLY1305_SHA256` **Windows (PowerShell):** ```powershell # List enabled cipher suites Get-TlsCipherSuite | Select-Object Name | Where-Object { $_.Name -like "*ECDHE*RSA*" } ``` **Linux:** ```bash # Test cipher suite support openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384' ``` ### 3.
Check Event Logs and Service Logs **Windows Event Viewer:** - Look for Schannel errors in `Applications and Services Logs > Microsoft > Windows > Schannel > Operational` - Common error codes: - **Event ID 36887**: No compatible cipher suite found - **Event ID 36888**: Fatal alert received (handshake failure) - **Event ID 36870**: Certificate validation failure **Octopus Server Logs:** - Check `C:\Octopus\Logs` for TLS handshake errors - Look for certificate trust or validation issues **Tentacle Logs:** - Check `C:\Octopus\` (Windows) or `/etc/octopus/` (Linux) ### 4. Use TentaclePing as a lightweight tool for testing We have built a small utility for testing the communications protocol between two servers called [Tentacle Ping](https://github.com/OctopusDeploy/TentaclePing). This tool helps isolate the source of communication problems without needing a full Octopus configuration. It is built as a simple client and server component that emulates the communications protocol used by Octopus Server and Tentacle. ## Common Issues and Solutions ### Issue: TLS 1.3 Handshake Failures **Symptom:** Connection works with TLS 1.2 but fails with TLS 1.3 enabled. **Cause:** Many TLS 1.3 hardening templates disable PKCS#1 v1.5 signature padding, which Octopus RSA certificates require. **Solution:** Ensure at least one of these RSA signature schemes is enabled: - `rsa_pkcs1_sha256` (PKCS#1 v1.5) - `rsa_pss_rsae_sha256` (RSA-PSS) **Windows (Registry):** ```powershell # Verify signature algorithms (requires admin rights) $path = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\SignatureAlgorithms" Get-ChildItem $path ``` **Linux (OpenSSL config):** Check `/etc/ssl/openssl.cnf` for signature algorithm restrictions. ### Issue: No Compatible Cipher Suite **Symptom:** Event ID 36887 or handshake failure with "no shared cipher" error. **Cause:** Custom TLS hardening policies have disabled all required cipher suites. 
**Solution:** Re-enable at least one required cipher suite (see [minimum requirements](/docs/security/octopus-tentacle-communication/minimum-tls-requirements)). **Windows (PowerShell - Admin):** ```powershell # Enable a required cipher suite Enable-TlsCipherSuite -Name "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" ``` **Linux:** Update your OpenSSL or system-wide TLS policy to include required suites. ### Issue: Elliptic Curve Mismatch **Symptom:** ECDHE key exchange failures. **Cause:** Required elliptic curve `secp256r1` (P-256) has been disabled. **Solution:** Ensure `secp256r1` is enabled for ECDHE key exchange. **Windows (PowerShell - Admin):** ```powershell # Enable P-256 curve Enable-TlsEccCurve -Name "NistP256" ``` **Linux:** Verify curve support: ```bash openssl ecparam -list_curves | grep "secp256r1" ``` ### Issue: SHA-1 Server Certificate Rejection **Symptom:** - Tentacles fail to connect to Octopus Server with certificate validation errors - Linux-based Tentacles (especially on modern distributions) cannot establish connections - OpenSSL 3.x systems reject the connection with "unsafe legacy renegotiation" or certificate validation errors - Errors mentioning "SHA-1" or "insecure signature algorithm" in logs **Cause:** Octopus Server generates a self-signed X.509 certificate on first installation, which is stored in the database and persists across upgrades. **Very early Octopus installations** (pre-2017) may still be using SHA-1 signed server certificates. Modern operating systems, especially: - Linux distributions with OpenSSL 3.x (Ubuntu 22.04+, RHEL 9+, Debian 12+) - Hardened Windows systems with strict cryptographic policies - Systems with FIPS mode enabled ...explicitly block SHA-1 certificates as cryptographically insecure and will refuse TLS connections. :::div{.hint} Tentacle certificates are less likely to use SHA-1, as they are regenerated locally on the client machine during the installation process. 
This issue primarily affects long-running Octopus Server instances with certificates that have never been regenerated. [This API Script](https://github.com/OctopusDeploy/OctopusDeploy-Api/blob/master/REST/PowerShell/Targets/FindSHA1Tentacles.ps1) can be used to check if any Tentacles are communicating with a SHA-1 certificate. ::: **Solution:** **You must regenerate your Octopus Server certificate to use SHA-256.** :::div{.warning} **This is critical for security and compatibility.** SHA-1 has been deprecated since 2017 due to collision vulnerabilities. Modern systems will not trust SHA-1 certificates. ::: 1. **Check your Octopus Server certificate signature algorithm:** In the Octopus Web Portal, navigate to **Configuration ➜ Thumbprint**. The certificate details page will show the signature algorithm. If it displays **SHA-1**, you must regenerate the certificate. 2. **Regenerate the Octopus Server certificate with SHA-256:** Follow the [certificate regeneration documentation](/docs/security/octopus-tentacle-communication/regenerate-certificates-with-octopus-server-and-tentacle) to create a new SHA-256 signed certificate for your Octopus Server. 3. **Update Tentacle trust** after certificate regeneration. **Note:** Certificate regeneration requires re-establishing trust with all Tentacle agents. Plan this maintenance window accordingly, as all Tentacles will need to be updated with the new Server thumbprint. ### Issue: Legacy Protocol Enforcement **Symptom:** Connection fails after disabling TLS 1.0/1.1. **Cause:** One system still requires an older protocol version. **Solution:** Ensure both Octopus Server and Tentacle are running current versions that support TLS 1.2 or higher. Update any outdated installations. :::div{.warning} TLS 1.0 and 1.1 are deprecated and should not be used. Ensure all systems support at least TLS 1.2. ::: ## Advanced Troubleshooting ### Capture and Analyze TLS Handshakes **Windows - Network Tracing:** 1.
Use **Wireshark** or **netsh** to capture traffic between Server and Tentacle: ```powershell netsh trace start capture=yes tracefile=C:\temp\tls-trace.etl # Reproduce the issue netsh trace stop ``` 2. Convert the trace to `.pcap` for analysis in Wireshark 3. Filter for TLS handshake messages to identify failures **Linux - tcpdump:** ```bash sudo tcpdump -i any -w /tmp/tls-capture.pcap host <octopus-server> and port 10943 ``` Analyze the capture for: - **Client Hello** and **Server Hello** messages - Cipher suite negotiation - Certificate exchange and validation - Alert messages indicating failure reasons ### Test Direct TLS Connection Use OpenSSL to test raw TLS connectivity: ```bash # Test from Tentacle to Octopus openssl s_client -connect <octopus-server>:10943 -tls1_2 -cipher ECDHE-RSA-AES128-GCM-SHA256 # Check certificate details openssl s_client -connect <octopus-server>:10943 -showcerts ``` Look for handshake completion and certificate validation success. ## Validating Your Configuration After making changes: 1. **Restart services:** - Restart Octopus Server service - Restart Tentacle service 2. **Test connectivity:** - Run a health check from the Octopus portal - Check task logs for successful connections ## Getting Help If you continue experiencing issues: 1. Collect diagnostic information: - Octopus Server and Tentacle versions - Operating system versions - Enabled TLS protocols and cipher suites - Relevant event log entries and service logs - Network trace (if possible) 2. Contact [Octopus Support](https://octopus.com/support) with this information ## See Also - [Minimum TLS Requirements](/docs/security/octopus-tentacle-communication/minimum-tls-requirements) - [Octopus-Tentacle Communication](/docs/security/octopus-tentacle-communication) - [Security and Encryption](/docs/security) # Auditing Source: https://octopus.com/docs/security/users-and-teams/auditing.md For team members to collaborate in the deployment of software, there needs to be trust and accountability.
Octopus Deploy captures audit information whenever significant events happen in the system. ## What does Octopus capture? Below is a short list of just some of the things that Octopus captures: - Changes to [deployment processes](/docs/deployments/) and [variables](/docs/projects/variables). - Create/modify/delete events for [projects](/docs/projects/), [environments](/docs/infrastructure/environments/), [deployment targets](/docs/infrastructure), releases, and so on. - Environment changes, such as adding new deployment targets or modifying the environment a deployment target belongs to. - Queuing and canceling of deployments and other tasks. Some general points worth noting: - Octopus **does** capture the details of every mutating action (create/edit/delete) including who initiated the action. - Octopus **does** capture login events for specific user accounts, but **not** logout. - Octopus **does not** capture when data is read; however, certain sensitive actions like downloading a certificate with its private key are captured. If you are concerned that Octopus does not capture a specific action of interest to you, please contact our [support team](https://octopus.com/support). ## Viewing the audit history You can view the full audit history by navigating to the **Audit** tab in the **Configuration** area. :::figure ![Audit Configuration](/docs/img/security/users-and-teams/auditing/images/audit-configuration.png) ::: Some audit events will also include details, which you can see by clicking the **show details** link. For example: :::figure ![Audit Event Details](/docs/img/security/users-and-teams/auditing/images/audit-event-details.png) ::: ![Audit Event Details extended](/docs/img/security/users-and-teams/auditing/images/audit-event-details-extended.png) This feature makes it extremely easy to see who made what changes on the Octopus Server.
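Beyond the portal, audit events can also be retrieved programmatically via the Octopus REST API `/api/events` endpoint (the archived-events endpoints under `/api/events/archives` are covered later on this page). A minimal sketch, assuming a placeholder server URL and the standard `X-Octopus-ApiKey` authentication header with `take` for paging:

```shell
# Build a query for the 10 most recent audit events.
# "https://your-octopus-server" is a placeholder; substitute your instance URL.
server="https://your-octopus-server"
events_url="${server}/api/events?take=10"
echo "$events_url"

# With a real server and an API key in $OCTOPUS_API_KEY, the request would be:
#   curl -s -H "X-Octopus-ApiKey: $OCTOPUS_API_KEY" "$events_url"
```

The response is a paged JSON collection, which suits periodic export or ad-hoc inspection alongside the Audit screen.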
## Security concerns We take great care to ensure the security and integrity of your audit logs, to make sure they are a trustworthy indelible record of every important activity in your Octopus installation. If you have any concerns please [reach out to us](https://octopus.com/support). ### Viewing audit logs To grant a user access to audit logs you can make use of a built-in User Role that contains **EventView**. All project-related user roles contain it. **EventView** can also be scoped to narrow down which audit information a user can see; for example, it can be restricted to specific Projects or Environments. Learn more about [managing users and teams](/docs/security/users-and-teams). In **Octopus 2019.1** we removed **AuditView** in an effort to simplify permissions, so only **EventView** is now required. ### Streaming audit logs From **Octopus 2022.4** [enterprise-tier](https://octopus.com/pricing) customers have the option to [stream their audit logs](/docs/security/users-and-teams/auditing/audit-stream) to their chosen security information and event management (SIEM) solution. ### Sensitive values in audit logs If you make a change to a sensitive value in Octopus, you will notice we write an audit log entry showing that the sensitive value changed. The value we show in the audit log is simply **an indicator the value has changed**. This is **not** the unencrypted/raw value. This is **not** even the encrypted value. We take the sensitive value and hash it using an irreversible hash algorithm. We then encrypt that hash with a new, unique, non-deterministic salt. We use this irreversible value as an indicator that the sensitive value actually changed in some way. ## Archived audit logs {#archived-audit-events} :::div{.hint} The audit log retention functionality is available from **Octopus 2023.3** onwards. ::: Audit log entries can require a significant amount of database space to store, degrading overall system performance.
For this reason, Octopus Server applies a retention policy to automatically archive audit log entries older than the configured number of days and remove them from the database. The retention period can be configured via **Configuration ➜ Settings ➜ Event Retention**. The location of the archived audit log files can be changed via **Configuration ➜ Settings ➜ Server Folders**. Periodically, Octopus will apply the retention policy to existing entries and store them as [JSONL](https://jsonlines.org/) files, grouped as a single file for each day (for example, `events-2019-01-01.jsonl`). Users with appropriate permissions (typically `Octopus Manager`) can download or delete the archived files. The downloaded files are intended to be imported into a data lake for querying and analysis. ### Accessing archived logs {#accessing-archived-logs} Audit entries older than the configured retention period (defaults to 90 days, configurable up to 365 days or 3650 days for self-hosted customers) are archived and can be accessed via the overflow menu (`...`) in the top right corner of the audit page by selecting the **Manage archived audit logs** option. :::figure ![Manage Archived Audit Logs Menu](/docs/img/security/users-and-teams/auditing/images/manage-archived-audit-logs-menu.png) ::: The archived files can also be accessed via the Octopus REST API endpoints `/api/events/archives` and `/api/events/archives/(unknown)`. ### Modifying and deleting audit logs is restricted Octopus actively prevents modifying or deleting audit logs within the configured retention period via its API. That said, a user with the appropriate permissions to the `Events` table in your Octopus SQL Database could modify or delete records in that table. If you are concerned about this kind of tampering you should configure the permissions to the `Events` table in your Octopus SQL Database appropriately.
Entries older than the retention period can be deleted by users with the appropriate permissions (typically `Octopus Manager`). An audit log entry will be created each time an archived event file is deleted. Archived files are saved at the filesystem level, so any user with the appropriate permissions could view or delete these files. If this is a concern, you should restrict the permissions to access the configured folder appropriately. :::div{.warning} **Take care deleting archived files** Deleting the archived files will permanently erase the audit entries. As a safeguard, deletion of audit log files is only allowed on files that are at least 30 days old from when they were created. ::: ## IP address forwarding From **Octopus 2023.1**, the originating IP address of a request is recorded as part of any audit event. If you host Octopus on-premises and run multiple nodes in a High Availability setup, incoming requests will be redirected from your load balancer. This means that by default, the IP address recorded with any event will be the IP address of your load balancer. To resolve this, you can configure any trusted IP addresses via **Configuration ➜ Settings ➜ Web Portal ➜ Trusted Proxies**. Octopus accepts any number of trusted proxies. A trusted proxy can either be a single IP address such as `192.168.123.111` or an IP range such as `192.168.0.0/16`. Octopus reads forwarded IP addresses from the `X-Forwarded-For` header. If the IP address of the client sending the request is trusted, the rightmost IP address that is **not** configured in the list of trusted proxies will be used as the IP address for the event. If all IP addresses are trusted, the leftmost value in the `X-Forwarded-For` header will be used.
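The selection rule can be sketched in bash. This is an illustrative model of the behavior described above, not Octopus's actual implementation; the hypothetical `resolve_event_ip` helper matches trusted proxies by exact IP only and omits CIDR-range support for brevity:

```shell
# resolve_event_ip CLIENT_IP XFF_HEADER TRUSTED_PROXY...
# Echoes the IP address that would be recorded against the audit event.
resolve_event_ip() {
  local client_ip="$1" xff="$2"
  shift 2
  local trusted=("$@") t ip is_trusted i

  # If the immediate client is not a trusted proxy (or sent no header),
  # its own IP address is recorded.
  is_trusted=0
  for t in "${trusted[@]}"; do
    if [ "$t" = "$client_ip" ]; then is_trusted=1; fi
  done
  if [ "$is_trusted" -eq 0 ] || [ -z "$xff" ]; then
    echo "$client_ip"
    return
  fi

  # Walk the X-Forwarded-For chain right to left; the first untrusted
  # address wins.
  local ips=()
  IFS=',' read -ra ips <<< "${xff// /}"
  for (( i=${#ips[@]}-1; i>=0; i-- )); do
    ip="${ips[i]}"
    is_trusted=0
    for t in "${trusted[@]}"; do
      if [ "$t" = "$ip" ]; then is_trusted=1; fi
    done
    if [ "$is_trusted" -eq 0 ]; then
      echo "$ip"
      return
    fi
  done

  # Every hop was trusted: fall back to the leftmost value.
  echo "${ips[0]}"
}

# The worked example from this page: trusted proxy 192.168.123.111,
# header "X-Forwarded-For: 100.100.101.102, 200.123.124.125".
resolve_event_ip "192.168.123.111" "100.100.101.102, 200.123.124.125" "192.168.123.111"
```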
Some examples include: - If the IP range `0.0.0.0/0` is configured as a trusted proxy, then any request will always use the leftmost IP address found in the `X-Forwarded-For` header, or the IP address of the client if no header is provided - If no trusted proxies are configured, the IP address of the client that sent the request will always be used as it is not considered trusted, even if there is a valid `X-Forwarded-For` header - If the IP address `192.168.123.111` is configured as a trusted proxy, and a request is received with a client IP address of `192.168.123.111` and the header `X-Forwarded-For: 100.100.101.102, 200.123.124.125`, then `200.123.124.125` will be used as the IP address of this request as it is the rightmost untrusted IP address # Audit Stream Source: https://octopus.com/docs/security/users-and-teams/auditing/audit-stream.md Audit streaming provides [enterprise-tier](https://octopus.com/pricing) customers with the ability to stream their audit events to their chosen security information and event management (SIEM) solution. :::div{.hint} Audit streaming is only available from **Octopus 2022.4** onwards. ::: ## Configure Audit Stream You can configure the audit stream from the **Audit** page in the **Configuration** area. Click **Stream Audit Log** to open the configuration dialog. :::figure ![Audit Stream Not Configured](/docs/img/security/users-and-teams/auditing/images/audit-stream-not-configured.png) ::: We currently support streaming to **OpenTelemetry (OTLP)** compatible providers as well as directly to **Splunk** and **Sumo Logic**. :::figure ![Audit Stream Configure Dialog](/docs/img/security/users-and-teams/auditing/images/audit-stream-configure-dialog.png) ::: :::div{.hint} Looking to connect to a SIEM solution that is not currently supported? Let us know in our [feedback form](https://oc.to/AuditStreamFeedbackForm). 
::: ### Streaming to OpenTelemetry (OTLP) :::div{.hint} OpenTelemetry support is only available from **Octopus 2024.4.6705** onwards. ::: Refer to your SIEM solution's documentation on how to set up collection via OpenTelemetry. Some providers may support OTLP directly, while others recommend hosting your own [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) and using one of the [exporters](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter) to forward the data to the SIEM. Once you have set up the collector, you will need to provide the connection details in Octopus: - **OpenTelemetry Endpoint URL** - The collection endpoint. In most cases you will need to append `/v1/logs` to the URL - **OTLP Protocol** - The protocol to use, `HTTP/protobuf` (also known as `OTLP/HTTP`) or `gRPC` - **Secret** - The authentication token to use, see below - **Header** - Any HTTP headers that are required by the collector There is no standard authentication mechanism for OpenTelemetry, so it has to be configured to suit the collector. If there is no authentication, leave the `Secret` blank. You can use the `#{Secret}` replacement token to insert the secret into the URL or the header values. Common configurations are: - **Token in the URL** - Remove the token from the URL and replace it with `#{Secret}`. Place the token into the `Secret` field. - **Custom Header** - Add a header with the required key and value of `#{Secret}` - **Bearer Authentication** - Add a header with key `Authorization` and value `Bearer #{Secret}` or `Bearer #{Secret | ToBase64}` if the secret needs to be Base64 encoded ### Streaming to Splunk An **HTTP Event Collector** is required to stream audit events to Splunk. See the Splunk documentation for [how to set up an HTTP Event Collector](https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector).
Once you have set up a collector, you will need to provide two configuration values in Octopus: - **Splunk Endpoint URL**: The base URL of your Splunk instance - **Token**: The Token Value of your HTTP Event Collector ### Streaming to Sumo Logic An **HTTP Logs and Metrics Source** is required to stream audit events to Sumo Logic. See the Sumo Logic documentation for [how to set up an HTTP Logs and Metrics Source](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/logs-metrics/). Once you have set up a collector, you will need to provide a single configuration value in Octopus: - **Sumo Logic Endpoint URL**: The URL of your HTTP Source. This is treated as a sensitive value as the token for the collector is included in the URL ### Updating the Audit Stream Once you have saved an initial configuration of the audit stream, the status on the UI will update to reflect that streaming is now enabled. Any new audit events will also be streamed to your SIEM solution. You can change the audit stream configuration by clicking **Stream Audit Log** again. This will open a pop-up menu with the following options: - **Edit**: You can select a different SIEM provider or make changes to the configured endpoint. - **Pause/Resume**: You can pause audit streaming, preventing any new audit events from being streamed to the configured endpoint. This will show as **Resume** if the audit stream is already paused. - **Delete**: You can delete the audit stream configuration, which will clear any data relating to the audit stream and prevent any new audit events from being streamed. 
![Update Audit Stream](/docs/img/security/users-and-teams/auditing/images/audit-stream-update.png) # Default permissions for built-in user roles Source: https://octopus.com/docs/security/users-and-teams/default-permissions.md ## Build Server {#DefaultPermissions-BuildServer} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | BuildInformationAdminister | Replace or delete build information | | BuildInformationPush | Create/update build information | | BuiltInFeedAdminister | Replace or delete packages in the built-in package repository | | BuiltInFeedDownload | Retrieve the contents of packages in the built-in package repository | | BuiltInFeedPush | Push new packages to the built-in package repository | | DeploymentCreate | Deploy releases to target environments | | DeploymentView | View deployments | | EnvironmentView | View environments | | FeedView | View package feeds and the packages in them | | LibraryVariableSetView | View library variable sets | | LifecycleView | View lifecycles | | ProcessView | View the deployment process and channels associated with a project | | ProjectView | View the details of projects | | ReleaseCreate | Create a release for a project | | ReleaseView | View a release of a project | | RunbookEdit | Edit runbooks | | RunbookRunCreate | Create runbook runs | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskView | View summary-level information associated with a task | | TenantView | View tenants | ## Certificate Manager {#DefaultPermissions-CertificateManager} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | CertificateCreate | Create certificates | | CertificateDelete | Delete certificates | | CertificateEdit | Edit certificates | | CertificateExportPrivateKey | Export certificate private-keys | | CertificateView | View certificates | | EnvironmentView | View environments | | TenantView 
| View tenants | ## Deployment Creator {#DefaultPermissions-DeploymentCreator} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | DeploymentCreate | Deploy releases to target environments | | DeploymentView | View deployments | | EnvironmentView | View environments | | LibraryVariableSetView | View library variable sets | | LifecycleView | View lifecycles | | ProcessView | View the deployment process and channels associated with a project | | ProjectView | View the details of projects | | ReleaseView | View a release of a project | | RunbookRunCreate | Create runbook runs | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskView | View summary-level information associated with a task | | TenantView | View tenants | ## Environment Manager {#DefaultPermissions-EnvironmentManager} | System Permission | Description | | --------------------------- | ---------------------------------------- | | TeamView | View teams | | Space Permission | Description | | --------------------------- | ---------------------------------------- | | AccountCreate | Create accounts | | AccountDelete | Delete accounts | | AccountEdit | Edit accounts | | AccountView | View accounts | | CertificateView | View certificates | | EnvironmentCreate | Create environments | | EnvironmentDelete | Delete environments | | EnvironmentEdit | Edit environments | | EnvironmentView | View environments | | MachineCreate | Create machines | | MachineDelete | Delete machines | | MachineEdit | Edit machines | | MachinePolicyCreate | Create health check policies | | MachinePolicyDelete | Delete health check policies | | MachinePolicyEdit | Edit health check policies | | MachinePolicyView | View health check policies | | MachineView | View machines | | ProxyCreate | Create proxies | | ProxyDelete | Delete proxies | | ProxyEdit | Edit proxies | | ProxyView | View proxies | | TaskCancel | Cancel server tasks | | TaskCreate | 
Explicitly create (run) server tasks | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | WorkerEdit | Edit workers and worker pools | | WorkerView | View the workers in worker pools | ## Environment Viewer {#DefaultPermissions-EnvironmentViewer} | System Permission | Description | | --------------------------- | ---------------------------------------- | | TeamView | View teams | | Space Permission | Description | | --------------------------- | ---------------------------------------- | | AccountView | View accounts | | CertificateView | View certificates | | EnvironmentView | View environments | | MachinePolicyView | View health check policies | | MachineView | View machines | | ProxyView | View proxies | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | WorkerView | View the workers in worker pools | ## Insights Report Manager {#DefaultPermissions-InsightsReportManager} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | EnvironmentView | View environments | | InsightsReportCreate | Create Insights reports | | InsightsReportDelete | Delete Insights reports | | InsightsReportEdit | Edit Insights reports | | InsightsReportView | View Insights reports | | ProcessView | View the deployment process and channels associated with a project | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | TenantView | View tenants | ## Package Publisher {#DefaultPermissions-PackagePublisher} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | BuildInformationAdminister | Replace or delete build information | | BuildInformationPush | Create/update build information | | BuiltInFeedAdminister | Replace or delete packages in the built-in package repository | | BuiltInFeedDownload | Retrieve the contents of packages in the built-in 
package repository | | BuiltInFeedPush | Push new packages to the built-in package repository | | FeedView | View package feeds and the packages in them | ## Project Contributor {#DefaultPermissions-ProjectContributor} | System Permission | Description | | --------------------------- | ---------------------------------------- | | TeamView | View teams | | UserRoleView | View other user's roles | | UserView | View users | | Space Permission | Description | | --------------------------- | ---------------------------------------- | | ActionTemplateCreate | Create step templates | | ActionTemplateDelete | Delete step templates | | ActionTemplateEdit | Edit step templates | | ActionTemplateView | View step templates | | ArtifactCreate | Manually create artifacts | | ArtifactView | View the artifacts created manually and during deployment | | CertificateView | View certificates | | DefectReport | Block a release from progressing to the next lifecycle phase | | DefectResolve | Unblock a release so it can progress to the next phase | | DeploymentView | View deployments | | EnvironmentView | View environments | | EventView | View Events, including access to the Audit screen | | FeedView | View package feeds and the packages in them | | InterruptionView | View interruptions generated during deployments | | InterruptionViewSubmitResponsible | Take responsibility for and submit interruptions generated during deployments when the user is in a designated responsible team | | LibraryVariableSetCreate | Create library variable sets | | LibraryVariableSetDelete | Delete library variable sets | | LibraryVariableSetEdit | Edit library variable sets | | LibraryVariableSetView | View library variable sets | | LifecycleView | View lifecycles | | MachinePolicyView | View health check policies | | MachineView | View machines | | ProcessEdit | Edit the deployment process and channels associated with a project | | ProcessView | View the deployment process and channels associated with a 
project | | ProjectEdit | Edit project details | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | ReleaseView | View a release of a project | | RunbookEdit | Edit runbooks | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskCreate | Explicitly create (run) server tasks | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | TenantView | View tenants | | TriggerCreate | Create triggers | | TriggerDelete | Delete triggers | | TriggerEdit | Edit triggers | | TriggerView | View triggers | | VariableEdit | Edit variables belonging to a project | | VariableView | View variables belonging to a project or library variable set | ## Project Deployer {#DefaultPermissions-ProjectDeployer} | System Permission | Description | | --------------------------- | ---------------------------------------- | | TeamView | View teams | | UserRoleView | View other user's roles | | UserView | View users | | Space Permission | Description | | --------------------------- | ---------------------------------------- | | ActionTemplateCreate | Create step templates | | ActionTemplateDelete | Delete step templates | | ActionTemplateEdit | Edit step templates | | ActionTemplateView | View step templates | | ArtifactCreate | Manually create artifacts | | ArtifactView | View the artifacts created manually and during deployment | | CertificateView | View certificates | | DefectReport | Block a release from progressing to the next lifecycle phase | | DefectResolve | Unblock a release so it can progress to the next phase | | DeploymentCreate | Deploy releases to target environments | | DeploymentView | View deployments | | EnvironmentView | View environments | | EventView | View Events, including access to the Audit screen | | FeedView | View package feeds and the packages in them | | InterruptionSubmit | Take responsibility for and submit interruptions generated during deployments | | 
InterruptionView | View interruptions generated during deployments | | InterruptionViewSubmitResponsible | Take responsibility for and submit interruptions generated during deployments when the user is in a designated responsible team | | LibraryVariableSetCreate | Create library variable sets | | LibraryVariableSetDelete | Delete library variable sets | | LibraryVariableSetEdit | Edit library variable sets | | LibraryVariableSetView | View library variable sets | | LifecycleView | View lifecycles | | MachinePolicyView | View health check policies | | MachineView | View machines | | ProcessEdit | Edit the deployment process and channels associated with a project | | ProcessView | View the deployment process and channels associated with a project | | ProjectEdit | Edit project details | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | ReleaseView | View a release of a project | | RunbookEdit | Edit runbooks | | RunbookRunCreate | Create runbook runs | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskCancel | Cancel server tasks | | TaskCreate | Explicitly create (run) server tasks | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | TenantView | View tenants | | TriggerCreate | Create triggers | | TriggerDelete | Delete triggers | | TriggerEdit | Edit triggers | | TriggerView | View triggers | | VariableEdit | Edit variables belonging to a project | | VariableView | View variables belonging to a project or library variable set | ## Project Initiator {#DefaultPermissions-ProjectInitiator} | System Permission | Description | | --------------------------- | ---------------------------------------- | | TeamView | View teams | | UserRoleView | View other user's roles | | UserView | View users | | Space Permission | Description | | --------------------------- | ---------------------------------------- | | ArtifactView | View the artifacts created manually 
and during deployment | | CertificateView | View certificates | | DefectReport | Block a release from progressing to the next lifecycle phase | | DefectResolve | Unblock a release so it can progress to the next phase | | DeploymentView | View deployments | | EnvironmentView | View environments | | EventView | View Events, including access to the Audit screen | | InterruptionView | View interruptions generated during deployments | | LibraryVariableSetView | View library variable sets | | LifecycleView | View lifecycles | | MachinePolicyView | View health check policies | | ProcessView | View the deployment process and channels associated with a project | | ProjectCreate | Create projects | | ProjectDelete | Delete projects | | ProjectEdit | Edit project details | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | ReleaseView | View a release of a project | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | TenantView | View tenants | | TriggerView | View triggers | ## Project Lead {#DefaultPermissions-ProjectLead} | System Permission | Description | | --------------------------- | ---------------------------------------- | | TeamView | View teams | | UserRoleView | View other user's roles | | UserView | View users | | Space Permission | Description | | --------------------------- | ---------------------------------------- | | ActionTemplateCreate | Create step templates | | ActionTemplateDelete | Delete step templates | | ActionTemplateEdit | Edit step templates | | ActionTemplateView | View step templates | | ArtifactCreate | Manually create artifacts | | ArtifactDelete | Delete artifacts | | ArtifactEdit | Edit the details describing artifacts | | ArtifactView | View the artifacts created manually and during deployment | | CertificateView | View certificates | | DefectReport | Block a release from progressing 
to the next lifecycle phase | | DefectResolve | Unblock a release so it can progress to the next phase | | DeploymentView | View deployments | | EnvironmentView | View environments | | EventView | View Events, including access to the Audit screen | | FeedView | View package feeds and the packages in them | | InterruptionView | View interruptions generated during deployments | | InterruptionViewSubmitResponsible | Take responsibility for and submit interruptions generated during deployments when the user is in a designated responsible team | | LibraryVariableSetCreate | Create library variable sets | | LibraryVariableSetDelete | Delete library variable sets | | LibraryVariableSetEdit | Edit library variable sets | | LibraryVariableSetView | View library variable sets | | LifecycleView | View lifecycles | | MachinePolicyView | View health check policies | | MachineView | View machines | | ProcessEdit | Edit the deployment process and channels associated with a project | | ProcessView | View the deployment process and channels associated with a project | | ProjectEdit | Edit project details | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | ReleaseCreate | Create a release for a project | | ReleaseDelete | Delete a release of a project | | ReleaseEdit | Edit a release of a project | | ReleaseView | View a release of a project | | RunbookEdit | Edit runbooks | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskCreate | Explicitly create (run) server tasks | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | TenantView | View tenants | | TriggerCreate | Create triggers | | TriggerDelete | Delete triggers | | TriggerEdit | Edit triggers | | TriggerView | View triggers | | VariableEdit | Edit variables belonging to a project | | VariableView | View variables belonging to a project or library variable set | ## Project Viewer {#DefaultPermissions-ProjectViewer} 
| System Permission | Description | | --------------------------- | ---------------------------------------- | | TeamView | View teams | | UserRoleView | View other user's roles | | UserView | View users | | Space Permission | Description | | --------------------------- | ---------------------------------------- | | ArtifactView | View the artifacts created manually and during deployment | | CertificateView | View certificates | | DeploymentView | View deployments | | EnvironmentView | View environments | | EventView | View Events, including access to the Audit screen | | InterruptionView | View interruptions generated during deployments | | LibraryVariableSetView | View library variable sets | | LifecycleView | View lifecycles | | MachinePolicyView | View health check policies | | ProcessView | View the deployment process and channels associated with a project | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | ReleaseView | View a release of a project | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | TenantView | View tenants | | TriggerView | View triggers | ## Release Creator {#DefaultPermissions-ReleaseCreator} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | EnvironmentView | View environments | | FeedView | View package feeds and the packages in them | | ProcessView | View the deployment process and channels associated with a project | | ProjectView | View the details of projects | | ReleaseCreate | Create a release for a project | | ReleaseView | View a release of a project | | RunbookEdit | Edit runbooks | | RunbookView | View runbooks | ## Runbook Consumer {#DefaultPermissions-RunbookConsumer} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | ArtifactView | View the 
artifacts created manually and during deployment | | CertificateView | View certificates | | EnvironmentView | View environments | | EventView | View Events, including access to the Audit screen | | FeedView | View package feeds and the packages in them | | InterruptionView | View interruptions generated during deployments | | LibraryVariableSetView | View library variable sets | | MachinePolicyView | View health check policies | | MachineView | View machines | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | RunbookRunCreate | Create runbook runs | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | TenantView | View tenants | | TriggerView | View triggers | ## Runbook Producer {#DefaultPermissions-RunbookProducer} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | ActionTemplateCreate | Create step templates | | ActionTemplateDelete | Delete step templates | | ActionTemplateEdit | Edit step templates | | ActionTemplateView | View step templates | | ArtifactCreate | Manually create artifacts | | ArtifactDelete | Delete artifacts | | ArtifactEdit | Edit the details describing artifacts | | ArtifactView | View the artifacts created manually and during deployment | | CertificateView | View certificates | | EnvironmentView | View environments | | EventView | View Events, including access to the Audit screen | | FeedView | View package feeds and the packages in them | | InterruptionSubmit | Take responsibility for and submit interruptions generated during deployments | | InterruptionView | View interruptions generated during deployments | | InterruptionViewSubmitResponsible | Take responsibility for and submit interruptions generated during deployments when the user is in a designated responsible team | | LibraryVariableSetCreate | Create library 
variable sets | | LibraryVariableSetDelete | Delete library variable sets | | LibraryVariableSetEdit | Edit library variable sets | | LibraryVariableSetView | View library variable sets | | LifecycleView | View lifecycles | | MachinePolicyView | View health check policies | | MachineView | View machines | | ProjectCreate | Create projects | | ProjectDelete | Delete projects | | ProjectEdit | Edit project details | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | RunbookEdit | Edit runbooks | | RunbookRunCreate | Create runbook runs | | RunbookRunDelete | Delete runbook runs | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | TaskCancel | Cancel server tasks | | TaskCreate | Explicitly create (run) server tasks | | TaskView | View summary-level information associated with a task | | TeamView | View teams | | TenantView | View tenants | | TriggerCreate | Create triggers | | TriggerDelete | Delete triggers | | TriggerEdit | Edit triggers | | TriggerView | View triggers | | VariableEdit | Edit variables belonging to a project | | VariableView | View variables belonging to a project or library variable set | ## Space Manager {#DefaultPermissions-SpaceManager} | System Permission | Description | | --------------------------- | ---------------------------------------- | | TeamView | View teams | | UserRoleView | View other user's roles | | UserView | View users | | Space Permission | Description | | --------------------------- | ---------------------------------------- | | AccountCreate | Create accounts | | AccountDelete | Delete accounts | | AccountEdit | Edit accounts | | AccountView | View accounts | | ActionTemplateCreate | Create step templates | | ActionTemplateDelete | Delete step templates | | ActionTemplateEdit | Edit step templates | | ActionTemplateView | View step templates | | ArtifactCreate | Manually create artifacts | | ArtifactDelete | Delete artifacts | | ArtifactEdit | Edit the details 
describing artifacts | | ArtifactView | View the artifacts created manually and during deployment | | BuildInformationAdminister | Replace or delete build information | | BuildInformationPush | Create/update build information | | BuiltInFeedAdminister | Replace or delete packages in the built-in package repository | | BuiltInFeedDownload | Retrieve the contents of packages in the built-in package repository | | BuiltInFeedPush | Push new packages to the built-in package repository | | CertificateCreate | Create certificates | | CertificateDelete | Delete certificates | | CertificateEdit | Edit certificates | | CertificateExportPrivateKey | Export certificate private-keys | | CertificateView | View certificates | | DefectReport | Block a release from progressing to the next lifecycle phase | | DefectResolve | Unblock a release so it can progress to the next phase | | DeploymentCreate | Deploy releases to target environments | | DeploymentDelete | Delete deployments | | DeploymentView | View deployments | | EnvironmentCreate | Create environments | | EnvironmentDelete | Delete environments | | EnvironmentEdit | Edit environments | | EnvironmentView | View environments | | EventView | View Events, including access to the Audit screen | | FeedEdit | Edit feeds | | FeedView | View package feeds and the packages in them | | GitCredentialEdit | Edit Git credentials | | GitCredentialView | View Git credentials | | InterruptionSubmit | Take responsibility for and submit interruptions generated during deployments | | InterruptionView | View interruptions generated during deployments | | InterruptionViewSubmitResponsible | Take responsibility for and submit interruptions generated during deployments when the user is in a designated responsible team | | LibraryVariableSetCreate | Create library variable sets | | LibraryVariableSetDelete | Delete library variable sets | | LibraryVariableSetEdit | Edit library variable sets | | LibraryVariableSetView | View library variable sets 
| | LifecycleCreate | Create lifecycles | | LifecycleDelete | Delete lifecycles | | LifecycleEdit | Edit lifecycles | | LifecycleView | View lifecycles | | MachineCreate | Create machines | | MachineDelete | Delete machines | | MachineEdit | Edit machines | | MachinePolicyCreate | Create health check policies | | MachinePolicyDelete | Delete health check policies | | MachinePolicyEdit | Edit health check policies | | MachinePolicyView | View health check policies | | MachineView | View machines | | ProcessEdit | Edit the deployment process and channels associated with a project | | ProcessView | View the deployment process and channels associated with a project | | ProjectCreate | Create projects | | ProjectDelete | Delete projects | | ProjectEdit | Edit project details | | ProjectGroupCreate | Create project groups | | ProjectGroupDelete | Delete project groups | | ProjectGroupEdit | Edit project groups | | ProjectGroupView | View project groups | | ProjectView | View the details of projects | | ProxyCreate | Create proxies | | ProxyDelete | Delete proxies | | ProxyEdit | Edit proxies | | ProxyView | View proxies | | ReleaseCreate | Create a release for a project | | ReleaseDelete | Delete a release of a project | | ReleaseEdit | Edit a release of a project | | ReleaseView | View a release of a project | | RunbookEdit | Edit runbooks | | RunbookRunCreate | Create runbook runs | | RunbookRunDelete | Delete runbook runs | | RunbookRunView | View runbook runs | | RunbookView | View runbooks | | SubscriptionCreate | Create subscriptions | | SubscriptionDelete | Delete subscriptions | | SubscriptionEdit | Edit subscriptions | | SubscriptionView | View subscriptions | | TagSetCreate | Create tag sets | | TagSetDelete | Delete tag sets | | TagSetEdit | Edit tag sets | | TaskCancel | Cancel server tasks | | TaskCreate | Explicitly create (run) server tasks | | TaskEdit | Edit server tasks | | TaskView | View summary-level information associated with a task | | TeamCreate 
| Create teams | | TeamDelete | Delete teams | | TeamEdit | Edit teams | | TeamView | View teams | | TenantCreate | Create tenants | | TenantDelete | Delete tenants | | TenantEdit | Edit tenants | | TenantView | View tenants | | TriggerCreate | Create triggers | | TriggerDelete | Delete triggers | | TriggerEdit | Edit triggers | | TriggerView | View triggers | | VariableEdit | Edit variables belonging to a project | | VariableEditUnscoped | Edit non-environment scoped variables belonging to a project or library variable set | | VariableView | View variables belonging to a project or library variable set | | VariableViewUnscoped | View non-environment scoped variables belonging to a project or library variable set | | WorkerEdit | Edit workers and worker pools | | WorkerView | View the workers in worker pools | ## System Administrator {#DefaultPermissions-SystemAdministrator} | System Permission | Description | |----------------------|------------------------------------------------------------------------------------------------------------------------------------------| | AdministerSystem | Perform system-level functions like configuring HTTP web hosting, the public URL, server nodes, maintenance mode, and server diagnostics | | ConfigureServer | Configure server settings like Authentication, SMTP, and HTTP Security Headers | | EventRetentionDelete | Delete archived event files | | EventRetentionView | View/list archived event files | | EventView | View Events, including access to the Audit screen | | PlatformHubEdit | Edit Platform Hub configuration and resources | | PlatformHubView | View Platform Hub configuration and resources | | SpaceCreate | Create spaces | | SpaceDelete | Delete spaces | | SpaceEdit | Edit spaces | | SpaceView | View spaces | | TaskCancel | Cancel server tasks | | TaskCreate | Explicitly create (run) server tasks | | TaskEdit | Edit server tasks | | TaskView | View summary-level information associated with a task | | TeamCreate | Create 
teams | | TeamDelete | Delete teams | | TeamEdit | Edit teams | | TeamView | View teams | | UserEdit | Edit users | | UserInvite | Invite users to register accounts | | UserRoleEdit | Edit user role definitions | | UserRoleView | View other user's roles | | UserView | View users | ## System Manager {#DefaultPermissions-SystemManager} | System Permission | Description | |----------------------|--------------------------------------------------------------------------------| | ConfigureServer | Configure server settings like Authentication, SMTP, and HTTP Security Headers | | EventRetentionDelete | Delete archived event files | | EventRetentionView | View/list archived event files | | EventView | View Events, including access to the Audit screen | | PlatformHubEdit | Edit Platform Hub configuration and resources | | PlatformHubView | View Platform Hub configuration and resources | | SpaceCreate | Create spaces | | SpaceDelete | Delete spaces | | SpaceEdit | Edit spaces | | SpaceView | View spaces | | TaskCancel | Cancel server tasks | | TaskCreate | Explicitly create (run) server tasks | | TaskEdit | Edit server tasks | | TaskView | View summary-level information associated with a task | | TeamCreate | Create teams | | TeamDelete | Delete teams | | TeamEdit | Edit teams | | TeamView | View teams | | UserEdit | Edit users | | UserInvite | Invite users to register accounts | | UserRoleEdit | Edit user role definitions | | UserRoleView | View other user's roles | | UserView | View users | ## Tenant Manager {#DefaultPermissions-TenantManager} | Space Permission | Description | | --------------------------- | ---------------------------------------- | | TenantCreate | Create tenants | | TenantDelete | Delete tenants | | TenantEdit | Edit tenants | | TenantView | View tenants | # System and space permissions Source: https://octopus.com/docs/security/users-and-teams/system-and-space-permissions.md Octopus Deploy **2019.1** and above, supports partitioning your server up 
into [Spaces](/docs/administration/spaces), which enable teams to stay focused on only the projects and content that matter to them. As a result, permission scoping needs to respect boundaries that support both the administration of the whole server and the administration of an individual space. This introduces some complexity that is useful to understand when things don't work quite the way you expect. Reading this page should give you a general understanding of how permissions work in these two contexts.

## Levels of permission

While designing this feature, we needed to reason about which API resources should be configured _outside_ of a space and which resources should _only_ be configured _within_ a space. That means that when considering permissions, we need to think in terms of the two administrative use cases of an Octopus Deploy instance: administering the system itself, and administering a space. Since these are very different things, permissions need to be considered as applying at two 'levels': the **System** and **Space** levels.

These levels are attached to the nature of the API resources themselves: if a resource is considered _space only_, then the permissions required to access that resource are space level permissions. When you design or inspect your own custom **user roles**, we present this information to help you reason about the types of permissions you are granting that role, so that you can appropriately restrict access to the resources you care about.

### What is a 'system level' permission?

**System** level permissions are those that involve administering the entire system, but do not include permissions within an individual space. The **User** permissions are an example, since users are not scoped to a space.

### What is a 'space level' permission?
**Space** level permissions are those that apply to resources within spaces, for example, **Projects** and **Environments**. As an example, a team of users with **ProjectView** permission in the **Finance Dept.** space can see projects in that space. To allow them to view projects in the **IT Dept.** space as well, they need to be members of a team that has **ProjectView** permission in that space.

### Can permissions apply at both levels?

Yes, in some special cases, permissions can apply at both levels. A good example is **Teams**. To support the two administrative use cases mentioned earlier, some teams need to operate across all spaces, whereas other teams do not. As such, when creating a team, the team can be marked as 'Accessible in all spaces' (i.e. a system level team) or 'Accessible in **Finance Dept.** space only', where **Finance Dept.** is the name of the currently selected space (i.e. a space level team).

## What does this mean for configuring user roles and teams?

When we create or edit user roles, we can choose a combination of system and space level permissions. Since not all scenarios are compatible when mixing system and space level concerns, some rules apply when including user roles in teams.

### Rules of the road {#SystemAndSpacePermissions-RulesOfTheRoad}

When you include a user role in a team, that role will apply at either the space or the system level. Which level applies is determined by the role's constituent permissions.

#### Roles with system level permissions only

If a role contains only system level permissions, the role is automatically applied at the system level. Roles of this nature can only be used for _system only_ teams; applying a set of system permissions to a _space team_ is not permitted.
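The rules in this section can be sketched in a few lines. This is illustrative only, not Octopus code; the permission names below stand in for real system and space level permissions:

```python
# Minimal sketch of the "rules of the road": a role containing only
# system-level permissions applies at the system level; a role containing
# any space-level permission applies at the space level.
# Permission names here are illustrative, not an exhaustive list.

SYSTEM_PERMISSIONS = {"UserView", "UserEdit", "TeamView", "SpaceCreate"}
SPACE_PERMISSIONS = {"ProjectView", "ProjectEdit", "EnvironmentView", "DeploymentCreate"}

def application_level(role_permissions: set[str]) -> str:
    """Return the level at which a role is applied when included in a team."""
    if role_permissions & SPACE_PERMISSIONS:
        # Any space permission forces the role to apply at the space level.
        return "space"
    return "system"

print(application_level({"UserView", "TeamView"}))     # system-only role
print(application_level({"UserView", "ProjectView"}))  # mixed role
```

A system-only role, such as the first example, can only be used for system-only teams.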
#### Roles with a combination of system and space level permissions

A user role can also be created with a combination of both system and space level permissions. When adding such a role, if the role contains *any* space permissions, the role is applied at the space level. There are two potential outcomes for this space assignment:

1. If the team you are editing is a space team, the role is assigned to the space that team belongs to.
2. If the team is a system team, the user is prompted to pick the space the role is assigned to.

Any of the system level permissions from that role are then implicitly assigned at the system level.

# Configurable Timeouts and Session Invalidation

Source: https://octopus.com/docs/security/users-and-teams/timeouts-and-invalidation.md

:::div{.hint}
Configurable session timeouts and session invalidation were added in Octopus **2022.2**.
:::

Octopus supports invalidating user sessions using a configurable timeout or by explicitly invalidating a user's session.

## Configurable Timeouts {#TimeoutsAndInvalidation-ConfigurableTimeouts}

You can configure **Session Timeouts** in Octopus to force re-authentication after a specified time. By default, session timeouts are set to 20 minutes. This timeout can be changed by a System Administrator and applies to all users in an instance. To change the Session Timeout duration, navigate to **Configuration ➜ Settings ➜ Authentication** in the Octopus Web Portal, enter the Session Timeout duration (in seconds), and click **SAVE**.

There is also a **Maximum Session Duration**, which applies when users select the `Remember Me` option when signing in to Octopus. By default, it is set to 20 days. Enter the desired maximum session duration (in seconds) and click **SAVE**.
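Since both settings are entered in seconds, it can help to convert the documented defaults (20 minutes and 20 days) up front. A quick sketch of the arithmetic:

```python
# The Authentication settings take durations in seconds.
# Converting the documented defaults:

session_timeout_seconds = 20 * 60                  # 20 minutes
max_session_duration_seconds = 20 * 24 * 60 * 60   # 20 days

print(session_timeout_seconds)       # value to enter for Session Timeout
print(max_session_duration_seconds)  # value to enter for Maximum Session Duration
```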
:::figure
![Configurable Timeout Image](/docs/img/security/users-and-teams/images/configurable-timeout.png)
:::

## Session Invalidation {#TimeoutsAndInvalidation-SessionInvalidation}

A user's sessions can be explicitly revoked. This ensures that the user cannot interact with the system until they have re-authenticated. This can be particularly useful in the following scenarios:

- An employee reports suspected malicious activity on their account
- Known malicious activity is identified
- Employee offboarding/role change

Any user can revoke their own sessions, and anyone with `AdministerSystem` or `UserEdit` permissions can revoke the sessions of other users.

To invalidate sessions of your own account, perform the following steps:

1. Log into the Octopus Web Portal, click your profile image and select **Profile**.
1. Click the overflow menu (`...`) and choose **Revoke Sessions**.

:::figure
![Session invalidation of your account](/docs/img/security/users-and-teams/images/session-invalidation-profile.png)
:::

To invalidate sessions of another user, perform the following steps:

1. Navigate to **Configuration ➜ Users**.
1. Select the User whose sessions you wish to revoke.
1. Click the overflow menu (`...`) and choose **Revoke Sessions**.

![Session invalidation of another user's account](/docs/img/security/users-and-teams/images/session-invalidation-admin.png)

# User roles

Source: https://octopus.com/docs/security/users-and-teams/user-roles.md

User roles and group permissions play a major part in the Octopus security model. These roles are assigned to Teams, and they dictate what the members of those teams can do in Octopus.
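The relationship just described (roles are assigned to teams, and team membership determines what a user can do) can be sketched as a simple model. This is illustrative only; the role, team, and user names are made up, and real permission evaluation in Octopus also involves scoping:

```python
# Sketch of the roles -> teams -> users model: a user's effective
# permissions are the union of the permissions granted by the roles
# of every team they belong to. All names below are hypothetical.

ROLES = {
    "Project Viewer": {"ProjectView", "ReleaseView", "DeploymentView"},
    "Deployment Creator": {"DeploymentCreate", "DeploymentView", "ProjectView"},
}

TEAMS = {
    "Web Team": {"roles": ["Project Viewer"], "members": ["alice", "bob"]},
    "Release Crew": {"roles": ["Deployment Creator"], "members": ["bob"]},
}

def effective_permissions(user: str) -> set[str]:
    """Union of permissions from the roles of every team the user is in."""
    perms: set[str] = set()
    for team in TEAMS.values():
        if user in team["members"]:
            for role in team["roles"]:
                perms |= ROLES[role]
    return perms

print(sorted(effective_permissions("bob")))   # member of both teams
print(sorted(effective_permissions("alice"))) # member of one team
```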
## Built-in user roles {#UserRoles-Built-inUserRoles} Octopus comes with a set of built-in user roles that are designed to work for most common scenarios: | User role | Description | | -------------------- | ---------------------------------------- | | Build Server | Build servers can publish packages, and create releases, deployments, runbook snapshots and runbook runs. | | Certificate Manager | Certificate managers can edit certificates and export private keys. | | Deployment Creator | Deployment creators can create new deployments and runbook runs. | | Environment Manager | Environment managers can view and edit environments and their machines. | | Environment Viewer | Environment viewers can view environments and their machines, but not edit them. | | Package Publisher | Package publishers can push packages to the Octopus Server's built-in NuGet feed. | | Project Viewer | Project viewers have read-only access to a project. They can see a project in their dashboard, and view its releases and deployments. Restrict this role by project to limit it to a subset of projects, and restrict it by environment to limit which environments they can view deployments to. | | Project Contributor | All project viewer permissions, plus: viewing and editing variables, and editing the deployment steps. Project contributors can't create or deploy releases. | | Project Initiator | All project viewer permissions, plus: creating new projects. | | Project Deployer | All project contributor permissions, plus: deploying releases, but not creating them. | | Project Lead | All project contributor permissions, plus: creating releases, but not deploying them. | | Release Creator | Release creators can create new releases and runbook snapshots. | | Runbook Consumer | Runbook consumers can view and execute runbooks. | | Runbook Producer | Runbook producers can edit and execute runbooks. | | System Administrator | System administrators can do everything at the system level.
| | System Manager | System managers can do everything at the system level except certain system-level functions reserved for system administrators. | | Tenant Manager | Tenant managers can edit tenants and their tags | The built-in user roles can be modified to contain more or fewer permissions to suit specific needs. However, instead of modifying the built-in roles, we recommend leaving them as examples and creating your own user roles. :::div{.success} To view the default permissions for each of the built-in user roles, please see [default permissions](/docs/security/users-and-teams/default-permissions). ::: ### Additional user roles for spaces | User Role | Description | | -------------------- | ---------------------------------------- | | Space Manager | Space managers can do everything within the context of the space they own. | :::div{.success} For more information regarding the _system or space level_, please see [system and space permissions](/docs/security/users-and-teams/system-and-space-permissions). ::: ## Creating user roles {#UserRoles-CreatingUserRoles} A custom User Role can be created with any combination of permissions. To create a custom user role: 1. Under the **Configuration** page, click **Roles**. ![](/docs/img/security/users-and-teams/images/roles-link.png) 2. Click **Add custom role**. 3. Select the set of permissions you'd like this new User Role to contain, and give the role a name and description. These can be system or space level permissions. ![](/docs/img/security/users-and-teams/images/select-permissions.png) Once the custom role is saved, the new role will be available to assign to teams in Octopus. [Some rules apply](/docs/security/users-and-teams/system-and-space-permissions/#SystemAndSpacePermissions-RulesOfTheRoad), depending on the mix of system or space level permissions you choose. When applying roles to a team, you can optionally specify a scope for each role applied.
This enables some complex scenarios, like granting a team [different levels of access](/docs/security/users-and-teams/creating-teams-for-a-user-with-mixed-environment-privileges) based on the environment they are authorized for. :::figure ![](/docs/img/security/users-and-teams/images/define-scope-for-user-role.png) ::: ## Troubleshooting permissions {#UserRoles-TroubleshootingPermissions} If for some reason a user has more or fewer permissions than they should, you can use the **Test Permissions** feature to get an easy-to-read list of all the permissions that a specific user has on the Octopus instance. To test permissions, go to **Configuration ➜ Test Permissions** and select a user from the drop-down. The results will show: - The teams the user is a member of. There are two separate permission contexts that you can check: - **Show System permissions** will show [System level permissions](/docs/security/users-and-teams/system-and-space-permissions) - **Show permissions within a specific space** will show [Space specific permissions](/docs/security/users-and-teams/system-and-space-permissions). - A chart detailing each permission and the Environments/Projects it can be exercised against. The chart can be exported to a CSV file by clicking the Export button. Once the file is downloaded, it can be viewed in the browser using [Online CSV Editor and Viewer](https://www.convertcsv.com/csv-viewer-editor.htm). :::figure ![](/docs/img/security/users-and-teams/images/systempermissions.png) ::: ![](/docs/img/security/users-and-teams/images/spacelevelpermissions.png) If a user tries to perform an action without sufficient permissions, an error message will pop up showing which permissions the user is lacking, and which teams have those permissions. :::figure ![](/docs/img/security/users-and-teams/images/errors.png) ::: :::div{.warning} As further versions of Octopus are released, we might create new permissions to improve our security model. These new permissions will not be automatically included in any of the built-in user roles, to avoid giving users permissions they are not supposed to have. They will have to be added manually to a User Role by an administrator. ::: # Increase the Octopus Server task cap Source: https://octopus.com/docs/support/increase-the-octopus-server-task-cap.md Octopus limits the number of tasks it can run in parallel to a default of five tasks. If you are running the self-hosted version of Octopus and you find yourself needing to change this limit, you can do so with the steps outlined on this page. :::div{.hint} If you're running [Octopus Cloud](/docs/octopus-cloud), your task cap is controlled by Octopus. To discuss changing your task cap in Octopus Cloud, [get in touch with us](https://octopus.com/company/contact). ::: Under **Configuration ➜ Nodes**, select your Octopus Node. 1. Select the overflow menu (`...`). 2. Select **Change Task Cap**: ![Nodes](/docs/img/support/images/taskcap.png) 3. In the new window you can select a new Task Cap and save: ![Task caps](/docs/img/support/images/taskcap2.png) Increasing the task cap will increase the maximum number of tasks the Octopus Server can run simultaneously. This should be increased with caution, as Octopus will require more system resources to handle the increased limit. For information specific to High Availability nodes and task caps, please see the following documentation page.
[Maintaining High Availability nodes](/docs/administration/high-availability/maintain/maintain-high-availability-nodes) # Prioritize Tasks Source: https://octopus.com/docs/tasks/prioritize-tasks.md :::div{.info} From version `2025.2.7584`, the following features require an **Enterprise** tier subscription: - Priority lifecycle phases - Priority deployments - Priority runbooks ::: ## Understanding task prioritization in Octopus When Octopus runs many deployments or runbooks at the same time, tasks are placed into a queue and processed in the order they were created. This can delay critical work, such as a production hotfix. To help with urgent or important jobs, Octopus provides three ways to control task priority: 1. **Task queue prioritization (Move to Top)** - Best for unexpected, one-off situations. - Use this when you need to run a queued task immediately, such as a hotfix. 2. **Priority deployments and runbooks** - Best for proactive prioritization of important work. - Use this when you want to prioritize a specific deployment or runbook. 3. **Priority lifecycle phases** - Best for consistent, rules-based prioritization. - Use this when you want deployments to an entire lifecycle phase (for example, Production) to always take precedence. :::div{.warning} When prioritizing a deployment, cancel any other queued deployments to the same environment. Otherwise, another queued release could overwrite your prioritized deployment. ::: ### Task processing order From version `2024.4`, tasks are processed in this order: 1. Queued tasks that are moved to the top 2. Tasks from prioritized deployments, runbooks, or lifecycle phases 3. Regular tasks Within each category, tasks run on a **first in, first out** basis. ## Task queue prioritization (Move to Top) From version `2023.4.6612`, you can manually move a queued task to the top of the queue. This option is useful when you need to quickly prioritize a one-off task, such as a hotfix. 
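The processing order described above (moved-to-top tasks first, then prioritized tasks, then regular tasks, first in first out within each category) can be sketched as a small priority queue. This is an illustrative model only, not Octopus internals:

```python
import heapq
import itertools

# Categories from the documented processing order (lower runs first).
MOVED_TO_TOP, PRIORITIZED, REGULAR = 0, 1, 2

class TaskQueue:
    """Sketch of the documented task ordering - not how Octopus implements it."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # monotonic counter preserves FIFO within a category

    def enqueue(self, name, category=REGULAR):
        heapq.heappush(self._heap, (category, next(self._seq), name))

    def move_to_top(self, name):
        # Re-queue the task in the highest category (FIFO among moved tasks).
        self._heap = [(c, s, n) for c, s, n in self._heap if n != name]
        heapq.heapify(self._heap)
        self.enqueue(name, MOVED_TO_TOP)

    def next_task(self):
        return heapq.heappop(self._heap)[2]

q = TaskQueue()
q.enqueue("deploy-a")
q.enqueue("hotfix")
q.enqueue("deploy-b", category=PRIORITIZED)
q.move_to_top("hotfix")
print([q.next_task() for _ in range(3)])  # ['hotfix', 'deploy-b', 'deploy-a']
```

Sorting on a `(category, sequence)` pair is what gives the documented behavior: categories take precedence, and the sequence number keeps tasks within a category in the order they were created.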
You can prioritize tasks in two ways: - **On the Tasks page**: Select the overflow menu `(...)` on a queued task, then choose **Move to Top**. :::figure ![Tasks page showing the 'Move to Top' button in a task's overflow menu.](/docs/img/tasks/images/tasks-move-to-top.png) ::: - **On the Task details page**: Select **Move to Top**. :::figure ![Release page showing 'Move to top' button.](/docs/img/tasks/images/release-move-to-top.png) ::: ## Priority deployments and runbooks From version `2025.2.7584`, you can prioritize an individual deployment or runbook. This option is useful for proactively prioritizing important tasks, such as production deployments or runbooks that manage critical infrastructure. To prioritize a deployment or runbook: 1. On the **Deploy release** or **Run snapshot** page, select the **Prioritize this (deployment/runbook)** checkbox. 2. When the task is created, it runs before non-prioritized tasks. :::figure ![Deploy release page showing the selected 'Prioritize this deployment' checkbox.](/docs/img/tasks/images/deployment-priority.png) ::: ## Priority lifecycle phases From version `2024.4`, you can prioritize a phase within a [lifecycle](/docs/releases/lifecycles). This option is useful when you want all deployments to an entire phase (for example, Production) to take precedence. To prioritize a lifecycle phase: 1. When configuring a **Phase** within a **Lifecycle**, select the **Prioritize this phase** checkbox. 2. When a deployment reaches this phase, it runs before non-prioritized tasks. :::figure ![Lifecycle configuration page showing the selected 'Prioritize this phase' checkbox.](/docs/img/tasks/images/lifecycle-priority.png) ::: # Superseded Tasks Source: https://octopus.com/docs/tasks/superseded-tasks.md Sometimes multiple deployment or runbook run tasks for the same project/environment/tenant combination will be waiting in the task queue. Often this means some of the tasks are superseded and no longer required. 
Octopus can help you clean these tasks up automatically by cancelling them. :::div{.info} This feature is available from version `2025.2.7727`. ::: ## Configuration The task cancellation behavior is configured per deployment or runbook process. It is on by default for new projects and runbooks; you can customize the setting via the project's deployment settings or on each runbook. :::figure ![Cancel existing tasks settings.](/docs/img/tasks/images/cancel-task-settings.png) ::: There are two settings that affect when superseded tasks are cancelled. ### Cancel queued tasks When a project's lifecycle auto-deploys new releases and releases are created faster than they are deployed, the task queue fills with deployments to the same project/environment/tenant combination. This scenario sometimes happens with external triggers. When a new task is queued, Octopus will cancel older queued tasks that would have been overwritten by the latest task. The new task then takes the place of the earliest cancelled task in the queue. ### Cancel running tasks The Argo CD steps can be configured to create a pull request for the required Git repository changes and then wait for it to be merged before completing. Other deployments can start executing while the task is waiting for the pull request to be merged. When one of the pull requests is merged and the task completes successfully, the previous tasks and their pull requests are no longer required. Another use case is when the first step of a deployment process is a manual intervention. Often deployments for multiple releases will be waiting for manual approval. Once the manual intervention step is approved and the task completes, the previous tasks are no longer required. When a task completes successfully, Octopus will cancel older tasks that have started but are paused, such as those waiting for manual approval or for pull requests to merge. No cancellation takes place if the task fails.
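The "Cancel queued tasks" behavior described above can be sketched as follows. This is an illustrative model only (tasks reduced to a `(project, environment, tenant, release)` tuple), not Octopus internals:

```python
# Illustrative sketch of the "cancel queued tasks" behavior - not Octopus internals.
def queue_with_supersede(queue, new_task):
    """queue: list of (project, environment, tenant, release) tuples, oldest first.
    Returns the queue after adding new_task and cancelling superseded tasks."""
    key = new_task[:3]  # same project/environment/tenant combination
    superseded = [i for i, t in enumerate(queue) if t[:3] == key]
    if superseded:
        # The new task takes the place of the earliest cancelled task.
        position = superseded[0]
        queue = [t for i, t in enumerate(queue) if i not in superseded]
        queue.insert(position, new_task)
    else:
        queue.append(new_task)
    return queue

q = [
    ("web", "dev", None, "1.0"),
    ("api", "dev", None, "2.0"),
    ("web", "dev", None, "1.1"),
]
q = queue_with_supersede(q, ("web", "dev", None, "1.2"))
print(q)  # [('web', 'dev', None, '1.2'), ('api', 'dev', None, '2.0')]
```

Note that the new release `1.2` does not go to the back of the queue: it inherits the queue position of the oldest deployment it superseded, so unrelated tasks (the `api` deployment here) keep their relative order.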
## Auditing You can see whether a task was cancelled by the system by inspecting the task audit history. :::figure ![Cancelled task audit history.](/docs/img/tasks/images/cancel-task-audit.png) ::: ## Exclusions Running only the latest task may not always yield the same result as having run all intermediate tasks. Octopus takes a safe-by-default approach by not cancelling tasks if doing so might yield a different result. Tasks that don't run the full process on all targets are not considered for cancellation. These include tasks that have: - Machine filters (include/exclude) - set either by a user or by auto-deploy from a target trigger - Skipped steps - skipping steps may affect conditional steps that run according to a variable value ### Queued tasks - When the queued task doesn't run the full process on all targets, no earlier tasks will be cancelled. - When the queued task does run the full process on all targets, Octopus starts cancelling earlier tasks from the back of the queue and stops once it encounters a task that doesn't run the full process on all targets; it does not skip that task and continue. This ensures the task order is preserved. ### Running tasks - When the completed task doesn't run the full process on all targets, no earlier tasks will be cancelled. - When the completed task does run the full process on all targets, Octopus cancels all earlier tasks that run the full process on all targets.
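The queued-task exclusion rule above (cancel from the back of the queue and stop at the first task that doesn't run the full process) can be sketched as follows. An illustrative model only, not Octopus internals:

```python
# Illustrative sketch of the documented queued-task exclusion rule - not Octopus internals.
def cancellable_queued_tasks(queue, new_runs_full_process):
    """queue: list of dicts with 'name' and 'runs_full_process', oldest first.
    Returns names of earlier queued tasks the new task may cancel."""
    if not new_runs_full_process:
        return []  # a partial task never cancels earlier tasks
    cancelled = []
    for task in reversed(queue):  # start from the back of the queue
        if not task["runs_full_process"]:
            break  # stop here; do not skip this task and keep going
        cancelled.append(task["name"])
    return cancelled

queue = [
    {"name": "deploy-1", "runs_full_process": True},
    {"name": "deploy-2", "runs_full_process": False},  # e.g. has skipped steps
    {"name": "deploy-3", "runs_full_process": True},
]
print(cancellable_queued_tasks(queue, True))   # ['deploy-3']
print(cancellable_queued_tasks(queue, False))  # []
```

In this sketch `deploy-1` survives even though it runs the full process, because the scan stops at `deploy-2`; this is the "do not skip and keep going" rule that preserves task order.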