Portal Setup
At the end of this section, you will have a skyd instance running in the sia container, ready to form contracts once you send it siacoins, as well as a running mongo container. This can be confirmed by running the docker ps command on your server and checking that the output looks like the following:
$ ssh user@<server name>
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                                           NAMES
a3614009e7b7   skynet-webportal_sia   "./run.sh"               About a minute ago   Up About a minute   9980/tcp                                        sia
7cde488f0fd0   mongo:4.4.1            "docker-entrypoint.s…"   3 days ago           Up 3 days           0.0.0.0:27017->27017/tcp, :::27017->27017/tcp   mongo
A sia container? I thought we were running skyd? You are right, we are running skyd. The Docker container is still called sia for backwards compatibility, but we might fix this in the future.
NOTE your docker ps output will look different based on the portal_modules you define.
The first thing we will want to define is the Skynet Webportal version that we want to run on our portal. We define this in our config file in ansible-playbooks.
To initialize your config file, make a copy of ansible-playbooks/my-vars/config-sample-do-not-edit.yml and name it config.yml.
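A minimal sketch of that copy, assuming it is run from inside the ansible-playbooks directory:
# Executed from /ansible-playbooks/
cp my-vars/config-sample-do-not-edit.yml my-vars/config.yml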
In your config.yml file, define portal_repo_version with the version of the Skynet Webportal that you would like deployed to your server.
Example
ansible-playbooks/my-vars/config.yml
ansible-playbooks/my-vars/config.yml
---
# Set skynet-webportal version by entering git branch, tag or commit
portal_repo_version: "v1.0.0"
NOTE these docs have been tested with the v1.0.0 tag. It is recommended to use that version to avoid potential issues.
There are several other docker services that the portal can run. To launch these docker services, simply update your portal_modules variable in the hosts.ini file. The options are documented here, but here is an example of the options:
# Possible choices:
# - 'a': Accounts (https://github.com/SkynetLabs/skynet-accounts)
# - 'b': Blocker (https://github.com/SkynetLabs/blocker)
# - 'p': Pinner (https://github.com/SkynetLabs/pinner)
# - 'j': Jaeger
# - 'm': MongoDB (when 'a' (i.e. Accounts) are used, this is included automatically)
# - 's': Malware Scanner (https://github.com/SkynetLabs/malware-scanner)
# - 'u': Abuse Scanner (https://github.com/SkynetLabs/abuse-scanner)
For most portals, it is recommended to run all of the services except Jaeger. Jaeger is a debugging service, so it doesn't need to be run unless you plan on debugging the portal code itself. For single-server portals, the pinner is not needed, as it is designed to increase file availability for multi-server portals. So portal_modules should be updated to one of the following:
# Single Server
portal_modules=abs
# Multi Server
portal_modules=absp
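For context, portal_modules is set per server in your Ansible inventory. A hypothetical hosts.ini entry for a single-server portal might look roughly like the sketch below; the group name, host alias, and IP address are placeholders and your inventory layout may differ:
# hosts.ini (hypothetical sketch)
[webportals]
my-portal ansible_host=203.0.113.10 portal_modules=abs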
The accounts_email_uri for the accounts service is set in your secrets manager. An example of what it might look like for a Gmail account using the plaintext secrets manager is the following:
# portal_cluster_configs/cluster-prod.yml
accounts_email_uri: smtps://<your email address>:<your password>@smtp.gmail.com:465/?skip_ssl_verify=false
If you are using AWS as your DNS provider (the default certbot image), you will need your AWSAccessKeyID, which is the aws_access_key, and your AWSSecretKey, which is your aws_secret_access_key. You will need to put these in your /portal-cluster-configs/cluster-prod.yml file in your secrets manager.
# Plaintext File Example
# ansible-private/plaintext-secrets/portal-cluster-config/cluster-prod.yml
aws_secret_access_key: mysecretaccesskey
aws_access_key: myaccesskeyid
In order to make certbot work with Cloudflare, you will need to set your docker_image_overrides variable to override the AWS image that is used by default. You can define docker_image_overrides in your ansible-private/custom-vars/prod.yml file.
docker_image_overrides:
  - {
      service: "certbot",
      image: "certbot/dns-cloudflare",
      environment: ["CERTBOT_ARGS=--dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini"],
    }
Next, you will need to set your Cloudflare access token in your cluster-prod.yml file like so:
# Plaintext File Example
# ansible-private/plaintext-secrets/portal-cluster-config/cluster-prod.yml
dns_cloudflare_api_token: mycloudflareaccesstoken
If you are using another DNS provider that has a certbot plugin, you will need to follow the directions here for using certbot.
You should now have all your required settings defined. There are also a handful of optional settings, which you can check out here; these include setting up Discord notifications and Stripe payments for paid accounts.
Once your config.yml file is ready, it is time to run the portals-setup-following.sh script; don't forget to pass your config.yml as a parameter.
NOTE you will need to run portals-setup-following twice. The first run should stop while waiting for the sia consensus to finish syncing.
Example
# Executed from /ansible-playbooks/
./scripts/portals-setup-following.sh -e @my-vars/config.yml --limit <server name>
NOTE Ansible will prompt you if any of your secrets change. Since it is going to autogenerate a number of new secrets during setup, it will prompt you to confirm that it is OK to save these. On the first run this is expected, and it is safe to say yes. On future runs, if it continues to prompt you that secrets are changing, this most likely indicates a problem and you should reach out on the Discord for help.
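While the first run waits for consensus, you can check sync progress from inside the sia container; siac consensus reports whether the node is synced and its current block height:
# Check consensus sync status
docker exec sia siac consensus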
For those familiar with running a skyd node, you will note that syncing consensus can take several hours. When the portals-setup-following.sh script finishes, the node is running; you can either let it run and check back later when the consensus is synced, or you can copy over a bootstrapped consensus.db. If this is your first portal, you should probably just let your node sync on its own. However, for all other portals, you can take a consensus.db from a previous node and copy it to your new server. Once your bootstrapped consensus.db has been copied to your server, you can copy it into the Docker data directory with the following command:
# First make sure the container is stopped
docker stop sia
# Copy consensus.db into the docker data directory
docker cp /path/to/consensus.db sia:/sia-data/consensus/consensus.db
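If the bootstrapped consensus.db lives on another portal server, one approach is to copy it out of that server's sia container and transfer it over with scp; the host name and destination path below are placeholders:
# On the existing portal: stop sia and copy consensus.db out of the container
docker stop sia
docker cp sia:/sia-data/consensus/consensus.db /tmp/consensus.db
docker start sia
# Transfer the file to the new server
scp /tmp/consensus.db user@<new server>:/path/to/consensus.db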
Now that your bootstrapped consensus.db is in the Docker data directory, you can re-run the portals-setup-following.sh script. You now have a running skyd node that is ready to start forming contracts. To fund the node, generate a wallet address:
docker exec sia siac wallet address
Now you can send siacoins to this address to fund your node. It is recommended to fund the wallet with 3x your allowance funds.
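To confirm the siacoins have arrived, you can check the wallet from inside the container; siac wallet prints the wallet status, including the confirmed balance:
docker exec sia siac wallet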
NOTE you can find the wallet seed in your secret storage under portal-server-configs/<server name>.yml
It is time to run portals-setup-following one more time, now that the following has been completed:
- Consensus is synced
- Portal has been funded with SC
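The command is the same one used for the initial run:
# Executed from /ansible-playbooks/
./scripts/portals-setup-following.sh -e @my-vars/config.yml --limit <server name>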
If you want the portal to be fully deployed after this run of setup-following, you can add this variable to your ansible-playbooks/my-vars/config.yml file:
deploy_after_setup: True
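Putting the values from this section together, a config.yml with deployment enabled might look roughly like this:
---
# Set skynet-webportal version by entering git branch, tag or commit
portal_repo_version: "v1.0.0"
# Deploy the portal at the end of the setup-following run
deploy_after_setup: True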