Single Portal Setup

Overview

At the end of this section, you will have a running mongo container and a skyd instance running in the sia container, ready to form contracts once you send it siacoins.
This can be confirmed by running the docker ps command on your server and checking that the output looks like the following:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3614009e7b7 skynet-webportal_sia "./run.sh" About a minute ago Up About a minute 9980/tcp sia
7cde488f0fd0 mongo:4.4.1 "docker-entrypoint.s…" 3 days ago Up 3 days 0.0.0.0:27017->27017/tcp, :::27017->27017/tcp mongo
Sia container? I thought we were running skyd. You are right, we are running skyd. The Docker container is still called sia for backwards-compatibility reasons, but we might fix this in the future.

Prerequisites

Ansible config.yml

The only prerequisite for this step is to define the versions of the skyd, skynet-webportal, and skynet-accounts repos that you wish to install in your config.yml file.
Example
ansible-playbooks/my-vars/config.yml
---
# Set skynet-webportal, skyd, accounts versions by entering git branch, tag or
# commit
portal_repo_version: "deploy-2021-12-21"
portal_skyd_version: "deploy-2021-12-21"
portal_accounts_version: "deploy-2021-12-21"

Mongo Settings in LastPass

When creating your cluster-prod.yml file, keep in mind that the prod suffix in the name is the id of your cluster, as defined by the portal_cluster_id=prod entry in your hosts.ini file, located in the ansible-private repository. Please see these docs for details.
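For reference only, the hosts.ini entry that sets this id typically looks something like the sketch below; the group name and hostname are placeholders and your actual inventory layout may differ:
# ansible-private hosts.ini (illustrative sketch)
[webportals]
sev1.mydomain.com  portal_cluster_id=prod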
In order for mongo to start, we need to set some fields in our cluster-prod.yml file.

Optional

The following fields can be set manually, or if they are left blank, ansible will use the defaults and autogenerate a strong password.
skynet_db_user: admin
skynet_db_pass: <strong password>
skynet_db_replicaset: skynet
If you choose to manually set the skynet_db_pass, make sure it is set to a strong password, like one generated from https://passwordsgenerator.net/.
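If you would rather generate a password on the command line, one option (just an alternative to the website above) is:
openssl rand -base64 32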

Required

Next, we need to generate a keyfile for mongodb. Generate an mgkey by running the following commands:
openssl rand -base64 756 > mgkey
sed -e 's/^/  /' mgkey > mgkey_yml
This will create two files: a mgkey file and a mgkey_yml file. The mgkey_yml file is the same as the mgkey file but with two spaces prepended to each line. These spaces are needed for proper yml formatting.
Example:
$ cat mgkey
asdf
asdf
asdf
$ cat mgkey_yml
  asdf
  asdf
  asdf
Open the mgkey file, and copy its contents into LastPass as a secure note titled mgkey. This secure note can be under the top level Shared-Ansible folder.
Open the mgkey_yml file, and copy its contents into the cluster-prod.yml file under mongo_db_mgkey.
mongo_db_mgkey: |
  asdf
  asdf
  asdf

Enable Docker Services

There are several other docker services that the portal can run. To launch these services, update the PORTAL_MODULES variable in the <server>.yml file in LastPass. The options are documented here; for reference, they are:
# Possible choices:
# - 'a': Accounts (https://github.com/SkynetLabs/skynet-accounts)
# - 'b': Blocker (https://github.com/SkynetLabs/blocker)
# - 'g': GuNDB (https://github.com/SkynetLabs/gundb-relay)
# - 'j': Jaeger
# - 'm': MongoDB (when 'a' (i.e. Accounts) are used, this is included automatically)
# - 's': Malware Scanner (https://github.com/SkynetLabs/malware-scanner)
# - 'u': Abuse Scanner (https://github.com/SkynetLabs/abuse-scanner)
For most portals, it is recommended to run all the services except Jaeger. Jaeger is a debugging service, so it doesn't need to be run unless you plan on debugging the portal code itself. So the PORTAL_MODULES variable should be updated to the following:
PORTAL_MODULES=abs

JWTs

In order to use accounts, the portal will need to issue and sign JWTs. For that to be possible, we need a JWKS defined on each server. For the moment, this is a manual operation. The instructions for generating such a file are here: https://github.com/SkynetLabs/skynet-accounts#generating-a-jwks-and-cookie-keys.
Once you generate the content, you will have a jwks.json file. In LastPass, create a secure note named cluster-prod-jwks.json (or, more generally, cluster-{{ portal_cluster_id }}-jwks.json) under portal-cluster-configs (next to cluster-prod.yml) and copy the content of the generated jwks.json into the secure note.
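Before pasting it into LastPass, you may want to sanity-check that the generated file is valid JSON. One quick way, assuming jq is installed on your machine, is:
jq . jwks.json
If the file parses, jq pretty-prints it; otherwise it reports the parse error.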

Portal Setup Following

Once your config.yml file is ready, it is time to run the portals-setup-following.sh script. Don't forget to pass your config.yml as a parameter.
Example
From the ansible-playbooks directory, run:
./scripts/portals-setup-following.sh -e @my-vars/config.yml --limit <server name>

MongoDB Replicaset Initialization

NOTE: There is currently a bug in the ansible playbook that affects the initial server for a portal: the mongo replicaset will not be initiated properly. Once fixed, this step will be removed from the documentation.
The mongo container will start up without error, but the replicaset will not be initialized. You can confirm this error by checking the mongo container logs for initialization errors.
docker logs mongo
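Another way to confirm the state of the replicaset is to ask mongo directly. A quick check, using the same credentials loaded from .env later in this section:
cd ~/skynet-webportal
. .env && docker exec mongo mongo -u admin -p $SKYNET_DB_PASS --eval "rs.status()"
If the replicaset has not been initiated, this typically reports that no replicaset config has been received.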
To fix the issue, and initialize mongo, run the following commands:
# Navigate to webportal directory and get into the mongo container
cd ~/skynet-webportal
. .env && docker exec -it mongo mongo -u admin -p $SKYNET_DB_PASS
# Initiate the replicaset
# <serverdomain> should be the domain of the server. So mydomain.com for a single
# server, or sev1.mydomain.com for a multi-server portal.
rs.initiate(
  {
    _id : "skynet",
    members: [
      { _id : 0, host : "<serverdomain>:27017" },
    ]
  }
)
Now that mongo is properly initialized, re-run the portals-setup-following.sh ansible-playbook.

Bootstrapping Consensus

For those familiar with running a skyd node, you will note that syncing consensus can take several hours. When the portals-setup-following.sh script finishes, the node is running. You can either let it run and check back later once the consensus is synced, or you can copy over a bootstrapped consensus.db. If this is your first portal, you should probably just let your node sync on its own. For all other portals, however, you can take a consensus.db from a previous node and copy it to your new server (see the sketch after the commands below for one way to do this). Once your bootstrapped consensus.db is copied to your server, you can copy it into the docker data directory with the following commands:
# First make sure the container is stopped
docker stop sia
# Copy consensus.db into the docker data directory
docker cp /path/to/consensus.db sia:/sia-data/consensus/consensus.db
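For reference, one way to pull a consensus.db off an existing node is sketched below; the hostname and destination paths are placeholders:
# On the existing node, stop skyd so the file is consistent, copy it out of the
# container, then restart skyd
docker stop sia
docker cp sia:/sia-data/consensus/consensus.db /tmp/consensus.db
docker start sia
# From the new server, fetch the file
scp user@old-portal.mydomain.com:/tmp/consensus.db /path/to/consensus.db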
Now that your bootstrapped consensus.db is in the docker data directory, you can re-run the portals-setup-following.sh script.

Fund The Portal

You now have a running skyd node that is ready to start forming contracts. To fund the node, generate a wallet address.
docker exec sia siac wallet address
Now you can send siacoins to this address to fund your node. It is recommended to fund the wallet with 3x your allowance funds.
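After sending the siacoins, you can confirm they arrived and review your allowance using the same siac CLI shown above:
# Print the wallet summary, including the current balance
docker exec sia siac wallet
# Print renter info, including the allowance settings
docker exec sia siac renter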