Introduction


Self-hosting your own Discord bot is an excellent way to retain full control over ownership, commands, privacy, and functionality in your servers. Self-moderation, fun commands, or listening parties are all possible without inviting an existing third-party bot to your server. And although you could certainly run the bot on your own machine, you would be at the mercy of power and ISP outages, or stuck leaving your computer on for friends who hold late-night voice chat hangouts during prime sleeping hours.

Pycord Example Screenshot

In this guide I show you how I host my own Discord bot, written in Python utilizing Pycord as a base, inside a container running on a small AWS ECS cluster, using a mix of free and paid online services: BitBucket (repository and Pipelines), the Discord Developer Portal, and AWS (ECR and ECS).

Following this guide you can expect to pay roughly $9 USD/month if leaving a container running non-stop, depending on usage and the Region you have chosen in AWS. You can get this even cheaper by utilizing a Savings Plan, or by switching capacity providers to Fargate Spot if this isn't a mission-critical application and you don't mind it occasionally going down. I will default to explaining on-demand Fargate here, but it's up to you how to proceed. Interested in automating ECS container deployments by pushing to your repo, but don't want to utilize Discord/Python etc.? Don't worry, you can still follow along with this guide!

Now go ahead and create your AWS, Atlassian BitBucket, and Discord Developer accounts if not already created. Most of this configuration is certainly possible with other services if you are more familiar with them, such as a cheaper cloud service provider, a Node.js-based Discord bot, or CI/CD automation with GitHub Actions, GitLab, or AWS CodePipeline, but those will not be covered in this guide.


Editor's Note


I have since switched hosting my repository and Docker image pipelines over to GitHub and GitHub Actions, as you are offered 2,000 build minutes for free compared to BitBucket's 50 build minutes. Unfortunately I do not currently have the spare time to update this guide; if I do, I will add an addendum page. In the meantime, AWS offers an excellent prescriptive guide here which details similar instructions using Terraform and OIDC, just like my later section on Advanced Configuration.

Additionally, to skip the pipeline entirely (e.g. you ran out of build minutes, don't want to configure Bitbucket/GitHub pipelines, or want to push up a new image locally for quicker on-demand testing), you can simply run the following commands, or, better yet, configure your own one-line command in your RC file, as sketched below these commands:

docker build -t bot-image-repo .
# At this stage, have your CLI credentials created or in the environment. For example, I go to my SSO login page and export the env vars before proceeding
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
docker tag bot-image-repo 123456789.dkr.ecr.us-east-1.amazonaws.com/bot-image-repo
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/bot-image-repo # Replace with your repo URL
aws ecs update-service --cluster bot-ecs-cluster --service bot-ecs-service --force-new-deployment

See this AWS doc for more information on pushing your own image locally to ECR.
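
As a sketch of that RC-file idea, here is a small shell function (a function handles multi-line commands more gracefully than a literal alias) wrapping the commands above; the account ID, region, and resource names are the same placeholders as before, so swap in yours:

# Hypothetical helper for your ~/.bashrc or ~/.zshrc; adjust names and region to your setup
deploy-bot() {
  docker build -t bot-image-repo . &&
  aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com &&
  docker tag bot-image-repo 123456789.dkr.ecr.us-east-1.amazonaws.com/bot-image-repo &&
  docker push bot-image-repo 123456789.dkr.ecr.us-east-1.amazonaws.com/bot-image-repo &&
  aws ecs update-service --cluster bot-ecs-cluster --service bot-ecs-service --force-new-deployment
}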


Advanced Configuration


Before continuing with my guide, here is an additional, more advanced option for a deployment like this. I highly recommend having a working knowledge of networking, AWS, and Terraform if you opt to follow these additions, but please feel free to skip them; I walk you through each remaining step so you can fully understand how to deploy this project.

To automatically deploy the upcoming BitBucket OIDC roles and AWS infrastructure, you can jump to the advanced section where I provide Terraform templates ready to be deployed; if you just want to head straight to the TF files, you can find them here on my GitHub page. These are not set in stone, and you can freely copy and modify them as you wish.

First follow the Preparing Your Environment section, after which I'll leave another reminder that you can skip the manual deployments; with this option you can skip the AWS Configuration section entirely, though you can still review it to clarify any confusion that comes up. This is a huge time saver and cuts out most of this post, so if you are new to AWS I instead recommend the manual configuration to fully understand how everything works.


Preparing Your Environment


There will be quite a few sections going into detail on configuring each piece of the CI/CD pipeline; however, once set up, you can deploy any changes automatically with a simple repository push, resulting in minimal manual effort after setup.


Visual Studio Code & BitBucket Repository


If you don't already use it, I highly recommend Visual Studio Code for its ease of use in writing code and its integrations with many services that make the CI/CD automations possible. Create a folder where you will keep your Python project. While in VS Code, let's go ahead and install the "Jira and Bitbucket (Atlassian Labs)" extension. So as not to reiterate already great guides, you can follow this official Support article from Atlassian to prepare your VS Code; see below for some visuals as well

BitBucket Extension in Visual Studio Code BitBucket Extension Settings Panel

If you don’t already have a Repository in BitBucket, create one now. Make this private, as we’ll have a secret token for the Discord Bot that will be pushed into the pipeline.

Next, refer to this article on cloning the repository to your local machine inside VS Code. My preference is using Git on the command line; on my Windows machine I only need to hit Ctrl + `, navigate to the +, then select "Git Bash". Linux/Mac users can use their normal terminal

Git Bash Terminal in VS Code

For simplicity's sake, I generated and set up SSH keys with BitBucket to clone over SSH, as I will also use this key later to push to our BitBucket Pipeline. On Windows 10 with OpenSSH already installed, I only need to modify the configuration file so the private PEM key is automatically used when connecting to BitBucket, where my public key lies. The SSH config file can be found at C:\Users\USERNAME\.ssh\config; this is also the directory to move your private PEM key into

Open this file in a text editor and append the following, where my key is named bitbucket-personal.pem

Host bitbucket.org
	AddKeysToAgent yes
	IdentityFile ~/.ssh/bitbucket-personal.pem
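
To verify the key is picked up, you can test the connection; BitBucket responds with a short authentication message rather than opening a shell:

ssh -T git@bitbucket.org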

Now, head over to your BitBucket repo and click the "Clone" button near the upper right corner. Here you can copy your SSH git clone command in the format below, or, even easier, there is a button to simply clone in VS Code

git clone [email protected]:PROJECT/REPONAME.git

Repo Clone Options

Now we can use three git CLI commands to add new/changed files, commit them to the repository with a short message explaining the changes, and finally the command to update our repository hosted in BitBucket. Once we configure the rest of our CI/CD, this simple series of commands will be what automatically deploys the new version of our Discord bot

git add .
git commit -m "Message explaining what I have updated"
git push

Before continuing, create the following files in this repository. We'll be using all of these shortly, but for now leave them blank. Execute this command in your VS Code terminal while in the correct directory

touch .env Dockerfile requirements.txt bot.py task-definition.json bitbucket-pipelines.yml

  • .env | Stores our Bot secret token as well as AWS secret values
  • Dockerfile | Definition for the Docker image hosting our Bot
  • requirements.txt | Python libraries to be installed
  • bot.py | The Python script containing our core PyCord Discord Bot code
  • task-definition.json | This is the file used to update the ECS task with our new Docker image
  • bitbucket-pipelines.yml | Definition for the BitBucket pipeline that builds an image from our pushed repository and updates ECR and ECS

Discord Developer Setup


  1. With an existing Discord account, navigate over to the Discord Developer Portal and log in

  2. Once you are on this page, create a new Application; this is what our bot will appear as on this portal. The Application name itself can be anything, as we'll shortly name the Bot, but you can opt to use the same name for both the Application and Bot since the Bot is all we'll be using

  3. With the Application created, under the left Settings menu click the Bot tab, hit Add Bot on the right, and confirm with "Yes"

  4. This is where we can further configure our Bot, assign it a profile picture, and grab the essential secret token that will allow the Bot to come online once the container spins up. This token grants ANYONE access to use your bot with whatever configuration they want, so be sure to keep it safe like any other password you have! Go ahead and give the Bot a name you'll recognize in your server, then right away hit the "Copy" button under Token and save it in the newly created .env file, formatted as follows

    TOKEN = <Enter your secret token here>

  5. As for settings, you can disable Public Bot if you don't want other people inviting the bot to their servers. We also have three options for "Presence Intent", "Server Members Intent", and "Message Content Intent". Since we're making a small bot that isn't required to be verified, these can be freely enabled to grant your bot additional features in your server, but I'll leave it up to you whether to use these features

  6. Now invite the Bot to your server. You can do this by going to the OAuth2 tab followed by the URL Generator. For testing purposes we will give the bot Admin access, but as best practice you should give it the minimal necessary permissions. Under Scopes click "bot", and in the new box that appears click on "Administrator". Under this you'll now see the Generated URL, which you can copy and open in a new tab. This new page will ask you what server you'd like to invite the Bot into

Discord Bot Permissions Screen Discord Bot Server Select

  7. That's all for configuration here; again, the most important part is the secret token. You'll notice your created Bot will be offline until a script is launched referencing the token. If you ever lose it or suspect it was stolen, you can come back to this portal to reset it and generate a new one

Python Code (Discord Bot)


We're obviously going to need our Discord bot! For this guide I will provide a sample bot you can use; here is the Python code, which you can place in your bot.py for a simple test

Alternatively, if you would like to start with a ready-to-run chat bot utilizing ChatGPT, see my repo here

import discord
import os
from dotenv import load_dotenv

# Pull the secret token from our .env file into the environment
load_dotenv()

bot = discord.Bot(intents=discord.Intents.all(),
                  status=discord.Status.online
                  )

# A simple slash command: /hello replies with a greeting to whoever ran it
@bot.slash_command(name="hello", description="Says hello back to you")
async def hello(ctx):
    name = ctx.author.name
    await ctx.respond(f"Hello there, {name}!")

bot.run(os.getenv('TOKEN'))

If you want to learn more about writing your bot code using Pycord, I highly recommend their documentation, which you can find here. We are also utilizing the dotenv package so we can store our secret token outside of the main code file. While we are still uploading all the files to the private BitBucket repo, it is a coding best practice not to hard-code secret values. We will also be adding our AWS values to this same .env file shortly


Dockerfile


Next is the Dockerfile configuration. This is a straightforward step; you can use the following settings, which grab and install the necessary Python components, and adjust as you develop your bot. This uses a virtual environment for the pip installations and references an external text file for the necessary library installs

# Base image with Python 3.11
FROM python:3.11

RUN mkdir -p /app

WORKDIR /app

# Copy the repository contents (bot.py, requirements.txt, .env) into the image
COPY . .

# Create a virtual environment, then install the libraries from requirements.txt
RUN python3 -m venv env \
    && /app/env/bin/python3 -m pip install --upgrade pip \
    && /app/env/bin/python3 -m pip install -U -r /app/requirements.txt

# Launch the bot
CMD [ "/app/env/bin/python3", "bot.py" ]

For the requirements.txt, paste in the following as its contents; this makes it easier to add more libraries in the future, each on a new line

py-cord[speed]
python-dotenv
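
Optionally, you can sanity-check the bot outside of Docker first. A minimal local run sketch, assuming Python 3 is installed and your .env sits in the working directory:

python3 -m venv env
source env/bin/activate   # on Windows Git Bash: source env/Scripts/activate
pip install -r requirements.txt
python3 bot.py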

I recommend configuring Docker Desktop to test building your image and running it as a container on your local machine, to confirm your Discord bot properly builds and runs. This can save you the free build minutes on BitBucket and any running container costs on AWS by testing locally first. If you need a more in-depth introduction to Docker, I highly recommend a beginner's guide such as this YouTube video by Travis.Media; otherwise you can run the following command in your working directory and watch it build:

docker build -t test .

Go ahead and deploy the Discord bot locally as a container to confirm you have no issues before proceeding.
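
If the build succeeds, a minimal local run could look like the following; note the .env file is copied into the image by our Dockerfile, so the token loads via dotenv:

docker run --rm test
# Press Ctrl+C once you've confirmed the bot shows as online in your server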

For those of you opting for the Terraform deployment, click here to jump ahead!


AWS Configuration


There are a few things we need to configure in AWS, and to save time I will assume at least some working knowledge of the AWS platform. If not, that is okay, as I will be providing the AWS Documentation links for each part of this section; and if you are completely lost, there are plenty of fantastic tutorials and beginner's guides on YouTube, such as this playlist by Simplilearn, which introduces you to the platform. I also recommend checking out some of the guides over on the GitHub AWS Open Guides

I will be working in the us-east-1 region for the entirety of this configuration, but feel free to deploy wherever you may be situated. The initial setup in this section is also done with an IAM User that has the AdministratorAccess managed policy


BitBucket OIDC


Following AWS IAM security best practices, we're opting to use OIDC from BitBucket directly for the pipeline's short-term, session-based access. I highly recommend following this guide on configuring Bitbucket Pipelines OpenID Connect; so as not to repeat what they've written there, I'll cover the AWS side of it below along with the specific information you need to add. This is a much safer alternative to creating an IAM user and Access Keys with long-term credentials, and in my opinion easier to set up

  1. Create the Identity Provider (IDP) as the guide describes, click here for a console link

  2. Create the new role that the BitBucket Pipeline will assume by clicking here for a console link, and check the "Web identity" option. Choose the IDP created in Step 1 and its audience. For policies, search for the managed policies "AmazonECS_FullAccess" and "AmazonEC2ContainerRegistryPowerUser", and add these as managed permissions to the role.

  3. Name the role; I use "bitbucket-oidc-role". Before you create the role, we're going to modify the Trust Policy to make it even more secure. Hit Edit on this section and paste in the below, replacing the values with your account ID, workspace name (which should match your OIDC URL), and your BitBucket Repository UUID, which you can find by following the guide above. This only allows Pipelines running in this repository, and only from Atlassian IP addresses, access to this role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::1234567890:oidc-provider/api.bitbucket.org/2.0/workspaces/WORKSPACE/pipelines-config/identity/oidc"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "api.bitbucket.org/2.0/workspaces/WORKSPACE/pipelines-config/identity/oidc:sub": "{UUID}:*"
                },
                "IpAddress": {
                    "aws:SourceIp": [
                        "34.199.54.113/32",
                        "34.232.25.90/32",
                        "34.232.119.183/32",
                        "34.236.25.177/32",
                        "35.171.175.212/32",
                        "52.54.90.98/32",
                        "52.202.195.162/32",
                        "52.203.14.55/32",
                        "52.204.96.37/32",
                        "34.218.156.209/32",
                        "34.218.168.212/32",
                        "52.41.219.63/32",
                        "35.155.178.254/32",
                        "35.160.177.10/32",
                        "34.216.18.129/32"
                    ]
                }
            }
        }
    ]
}

AWS ECS Task Execution Role


  1. Rounding off this IAM section, we will create the ecsTaskExecutionRole, assuming it has not been previously created under your account. First check whether this role exists by searching for exactly this name; otherwise, head over to the IAM Console and create a new role

  2. Under “Use cases for other AWS services:”, choose Elastic Container Service

  3. For Select your use case, choose Elastic Container Service Task, then move on to the next screen

  4. In the Attach permissions policy section, search for AmazonECSTaskExecutionRolePolicy, select the policy and attach it, then do the same for CloudWatchLogsFullAccess so we can create and push logs (note you only need logs:CreateLogStream and logs:PutLogEvents, so you may create a custom policy if you wish), and then move on

  5. For Role Name, enter the name ecsTaskExecutionRole and create this role
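
If you prefer the CLI, a rough equivalent of these console steps looks like this; trust-policy.json is a hypothetical filename containing the standard ECS tasks trust relationship:

# trust-policy.json contents:
# {"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}
aws iam create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess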



ECR Repository


  1. Head over to the ECR service or click this link to start creating a new repository. This is where our Docker image built by the pipeline will be pushed to, and then pulled by ECS to deploy the container

  2. Set the “Visibility settings” to Private, as we do not want anybody on the public internet pulling the container and getting direct access to the code

  3. Give your Repository a name; for this guide we'll be using "bot-image-repo". Whatever you choose is up to you, but keep the full repo URI saved to the side for later use, for example 123456789.dkr.ecr.us-east-1.amazonaws.com/bot-image-repo:latest. Note that here and in the Task Definition section, 123456789 should be replaced with your AWS Account ID. All other settings can be ignored

  4. Create the repo, and finally we'll create a simple Lifecycle Rule. Click on your newly created Repository, and on the right side you'll see Lifecycle Policy; click on this

  5. Click on Create rule, leave "Rule priority" at 1, and set "Image status" to Untagged. The default "Match criteria" will stay at Since image pushed 1 Days. Save this; now when you update your Docker image, the newly built one will automatically be tagged latest, and older images will lose their tag and expire the next time this Lifecycle rule runs. This saves some cost and cleans up unnecessary images

ECR Lifecycle Rule
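
For reference, a CLI sketch of the same repository and lifecycle setup; lifecycle-policy.json is a hypothetical filename holding the untagged-expiry rule described above:

aws ecr create-repository --repository-name bot-image-repo --region us-east-1
# lifecycle-policy.json contents (expire untagged images 1 day after push):
# {"rules":[{"rulePriority":1,"description":"Expire untagged images","selection":{"tagStatus":"untagged","countType":"sinceImagePushed","countUnit":"days","countNumber":1},"action":{"type":"expire"}}]}
aws ecr put-lifecycle-policy --repository-name bot-image-repo --lifecycle-policy-text file://lifecycle-policy.json --region us-east-1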


ECS Task Definition


Before creating the cluster and service for our bot, we are going to upload our initial Task Definition. This is what actually tells the ECS service what we want our container to be and how it should deploy. Normally you would create one in the Console using an existing image in your ECR repository, but we can simply upload our task-definition.json, which is crucial for our BitBucket pipeline. Below is one you can use and base yours on; it defines the smallest possible container to optimize costs, and contains all the necessary definitions as well as a logging configuration, so you can check the Logs tab in ECS to see what may have gone wrong with your bot. Note you can also follow the links to the CloudWatch Log Group and set the retention to a shorter period to save costs. Copy this to your clipboard to prepare for the next step

{
    "family": "bot-ecs-task",
    "networkMode": "awsvpc",
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "runtimePlatform": {
        "operatingSystemFamily": "LINUX",
        "cpuArchitecture": "X86_64"
    },
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
      {
        "name": "bot-image-task",
        "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/bot-image-repo:latest",
        "cpu": 256,
        "memory": 512,
        "essential": true,
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
                  "awslogs-create-group": "true",
                  "awslogs-group": "discord-bot-grp",
                  "awslogs-region": "us-east-1",
                  "awslogs-stream-prefix": "discord-bot-strm"
                  }
              }
      }
    ],
    "executionRoleArn": "arn:aws:iam::123456789:role/ecsTaskExecutionRole",
    "taskRoleArn": "arn:aws:iam::123456789:role/ecsTaskExecutionRole"
  }

  1. Head over to ECS Task Definitions; in the top right hit Create new task definition, but select the second option, Create new task definition with JSON

  2. Paste the above JSON over the pre-populated code, and change the example account IDs to your actual AWS Account ID. Additionally, save the modified output to your BitBucket repository's task-definition.json, as this is the same file we can later modify and use in the pipeline without having to update via the console. It is fine that there is no image in your ECR repository yet; the only thing that would prevent creation is the missing role we created in the IAM section above.
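
Alternatively, a quick CLI sketch that registers the same file without touching the console:

aws ecs register-task-definition --cli-input-json file://task-definition.json --region us-east-1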


ECS Cluster & Service


  1. Head over to the ECS service or click this link to start creating a new cluster.

  2. Give your cluster a name, for this guide we’ll be using “bot-ecs-cluster”

  3. "Networking": the default VPC and Subnets will automatically be chosen. These are fine; if you have modified your subnets, make sure to use one that has an Internet Gateway in its Route Table

  4. “Infrastructure”, leave this only as AWS Fargate selected and nothing else. We’ll be deploying the container “serverless” using Fargate

  5. Now hit Create; this will take a few minutes to deploy as a CloudFormation stack. Once the top banner states "Cluster bot-ecs-cluster has been created successfully.", click on the cluster name to enter the details screen

  6. Under “Services”, hit the orange Create button to create a Service

  7. Change the "Compute configuration -> Compute options" to Launch type, FARGATE, platform version LATEST

ECS Compute Options

  8. "Deployment Configuration": in this section the following settings will be used: "Application Type" Service; specify the above-created Task Definition bot-ecs-task with revision 1 (LATEST); assign the Service the name "bot-ecs-service"; and for now set "Desired Tasks" to 0, so it does not constantly fail trying to deploy a null image until we build our first one. The rest of the settings can be left at their defaults

Deployment Configuration

  9. Scroll down and create this Service, and that concludes setup for the AWS section
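
For reference, an approximate CLI equivalent of the steps above; the subnet and security group IDs below are placeholders, so substitute ones from your default VPC that route to an Internet Gateway:

aws ecs create-cluster --cluster-name bot-ecs-cluster --region us-east-1
aws ecs create-service --cluster bot-ecs-cluster --service-name bot-ecs-service \
    --task-definition bot-ecs-task --desired-count 0 --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
    --region us-east-1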

BitBucket Pipeline


We're nearing the end of configuration with our YAML file for the BitBucket Pipeline. You can find the options and configurations for these Pipe integrations here under "AWS ECR push image" and "AWS ECS deploy", in case versions have updated from the ones listed in my example. Below is a simplified version, which you can copy and paste into your local bitbucket-pipelines.yml file; note to also reference the source repositories below from Atlassian for more configuration options.

# yaml-language-server: $schema=./bitbucket-pipelines.yml
pipelines:
  default:
  - step:
      oidc: true
      script:
      # build the image
      - docker build -t bot-image-repo .
      # push image to AWS ECR
      - pipe: atlassian/aws-ecr-push-image:2.2.0
        variables:
          AWS_OIDC_ROLE_ARN: 'arn:aws:iam::1234567890:role/bitbucket-oidc-role'
          AWS_DEFAULT_REGION: 'us-east-1'
          IMAGE_NAME: 'bot-image-repo'
  - step:
      oidc: true
      script:
      # update ECS Task Definition
      - pipe: atlassian/aws-ecs-deploy:1.10.0
        variables:
          AWS_OIDC_ROLE_ARN: 'arn:aws:iam::1234567890:role/bitbucket-oidc-role'
          AWS_DEFAULT_REGION: 'us-east-1'
          CLUSTER_NAME: 'bot-ecs-cluster'
          SERVICE_NAME: 'bot-ecs-service'
          #TASK_DEFINITION: 'task-definition.json' # uncomment to update task definitions
          FORCE_NEW_DEPLOYMENT: 'true'

Before moving on, make sure to enable the Pipeline by clicking Repository settings, scrolling down to Pipelines -> Settings, and enabling the Pipeline there. Now when we push the updated files, including bitbucket-pipelines.yml, it will automatically run the pipeline


Deploy Your Discord Bot


With all the configuration complete, ensure your cloned BitBucket repository has been updated with all the sample code and files. Now all we need to do is send the three git commands to push the update, and this sets off our automation

git add .
git commit -m "Initial Pipeline Deployment"
git push

Now we can monitor the Pipeline output for any errors. Head over to your BitBucket Repository and on the left sidebar hit Pipelines; you should see your commit message, which you can click on to view the progress

Repository variables

If successful, you should see output like the below; note this took under 2 minutes with my sample code, meaning I can easily build ~20 or more Docker images per month with the Free plan's build minutes

Repository variables

If the box is red with an X, then observe the output on the right side to see what may have gone wrong

Assuming you also received a success, we can now update the ECS Service to allow 1 task to run our built Docker image. You can do this by modifying the ECS Service directly in your Management Console, or, if you have the AWS CLI configured on your local machine (or spin up AWS CloudShell for an in-browser AWS CLI), run the following command to spin up a new container in your ECS cluster running on Fargate

aws ecs update-service --cluster bot-ecs-cluster --service bot-ecs-service --desired-count 1 --region us-east-1

To tear down our container, we can run the same command with a desired task count of 0, so as not to incur costs for running the container when it isn't required. Please keep this in mind, and do not forget you have a resource running and accumulating unnecessary costs if it is not being used!

aws ecs update-service --cluster bot-ecs-cluster --service bot-ecs-service --desired-count 0 --region us-east-1

We can also confirm with a CLI command (or by checking manually) that our ECS Task has successfully run the container without failure

aws ecs describe-services --cluster bot-ecs-cluster --services bot-ecs-service --query "services[*].deployments[*].runningCount[]" --output text --region us-east-1

The final test and confirmation is to open the Discord server you invited the bot into, where you should now see it online. You can right-click its name and choose "Manage Integrations" to see some details about the bot, and add or remove it from any text/voice channels you wish. If you used my example Python code above, go ahead and type /hello to have your bot say hello to you. If you have any issues, make sure to check the "Logs" tab of the ECS Service that deployed the task. Well done, you have properly configured your CI/CD pipeline and can deploy any code changes via a git repo push using the three aforementioned commands
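
You can also tail those same container logs from the CLI; the log group name comes from our task definition above, and aws logs tail requires AWS CLI v2:

aws logs tail discord-bot-grp --follow --region us-east-1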


Advanced Deployment


NOTE: This is an advanced option for my guide on Automatically Deploying Your Discord Python Bot into a Docker Container Hosted on AWS ECS; I provide Terraform templates you can add on to your existing environment or greenfield deploy for the first time.

Make sure to follow the Preparing Your Environment section first!


Terraform


Please make sure to configure your AWS credentials locally first. If you are new to Terraform: as long as you can run AWS CLI commands locally, Terraform should be able to use those same credentials. Check out the HashiCorp AWS configuration guide for more information. Additionally, I prefer to use an S3 backend to manage my state file, so that it isn't localized to my machine.

Here is my GitHub Repo where you can find and download my free-to-use templates. These Terraform configurations are designed to streamline the deployment and quickly get you up and running.

Within this repository, you’ll find the following files:

  • main.tf: Establishes the AWS infrastructure and roles necessary for BitBucket OIDC.
  • outputs.tf: Provides the ECR Repo URL and the ARN for the BitBucket role, for use in your bitbucket-pipelines.yml.
  • terraform.tf: Initializes Terraform with the specified versions of the Terraform and AWS providers.
  • variables.tf: Centralized location for all customizable variables, including names and parameters.
  • modules/bitbucket/: Contains the modularized BitBucket OIDC Role.
  • modules/infra/: Contains all other AWS components such as ECR, ECS, and IAM resources, broken down into modules.

You could reformat this however you'd like; I have structured it for both clean deployments and integration into existing repositories.
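
Once cloned, the standard Terraform workflow applies; add a .tfvars file or -var flags for any values in variables.tf you want to override:

terraform init     # download providers and configure the backend
terraform plan     # preview the resources to be created
terraform apply    # deploy the infrastructure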

Once you apply this configuration, it will set the task count to 1, but you can set it to 0; otherwise the service simply fails to launch, as there is no image yet. If you are following sequentially, the next step is to build the BitBucket Pipeline configuration


Help


If you experience any issues following this guide, let me know by creating an issue over on my GitHub, which also hosts the sample code and advanced files provided here. Considering the scope of services covered, things may change or look different, or you may get unexpected errors, so GitHub is the perfect place to let me know of any changes needed or if you just need a bit of help following this guide.