3. Distributed deployment example

Deploying FRED on multiple servers brings at least two advantages:

  • increased performance,

  • access control at the network level.

Deploying on multiple physical servers is not the only distributed solution; deploying on virtual servers or separating tasks at the process level is also possible.

Nodes overview

Nodes in this document represent execution environments.

We work with the following nodes:

  • EPP node – EPP service

  • ADMIN node – web admin service

  • WEB node – public web services: Unix WHOIS, Web WHOIS, RDAP, Domain Browser [1]

  • HM node – zone management

  • APP node – application servers, CLI admin tools, pgbouncer, CORBA naming service

  • DB node – the main FRED database

  • LOGDB node – the logger database

  • MOJEID node [1] – MojeID service and database

Note

Hardware parameters background

The hardware parameters described below are minimum requirements for a Registry of about 1 million domains with approximately the following traffic:

  • EPP:
    • write operations: ~3.5 million / month (they increase the size of both databases)

    • read-only operations: ~30 million / month (they increase the size of logdb)

  • WHOIS: 15 million operations / month (they increase the size of logdb)

WHOIS here covers the Unix and Web variants as well as RDAP.
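
As a rough cross-check of these figures, the monthly volumes translate into the following average request rates (real traffic peaks are typically several times higher). A minimal sketch of the arithmetic in Python, using only the numbers above:

    # Rough capacity-planning arithmetic based on the traffic figures above.
    SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.6 million seconds

    monthly_ops = {
        "EPP write": 3_500_000,
        "EPP read-only": 30_000_000,
        "WHOIS/RDAP": 15_000_000,
    }

    for name, ops in monthly_ops.items():
        print(f"{name}: ~{ops / SECONDS_PER_MONTH:.1f} requests/s on average")

    # Prints roughly: EPP write ~1.4/s, EPP read-only ~11.6/s, WHOIS/RDAP ~5.8/s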

Tip

Redundancy

This text does not describe redundancy options in detail, but here is a quick tip:

  • database replication is a standard technique to protect data,

  • the whole system can be replicated in several instances at different locations, which can substitute for one another when one instance fails or during a system upgrade.

3.1. Network

Network rules are described per node in the following sections, but here is an overview of logical connections in the network (a single instance of the system).

Network – Logical topology

Restricted network access means that servers should be accessed only from IP addresses allowed on a firewall.

Unrestricted network access means that servers can be accessed from any IP address, but only necessary ports should be open for access as illustrated in the network rules for each node.

The port numbers mentioned in the network rules are settings resulting from the default installation.

3.2. EPP node

Services: EPP service

Packages:

  • libapache2-mod-corba

  • libapache2-mod-eppd

Hardware parameters (see the background):

  • CPU: @2.0 GHz, 10 cores

  • Memory: 16 GB–32 GB

  • Storage: 200 GB

Network:

  • access to EPP (tcp, port 700) permitted only from particular IP addresses (or ranges) declared by registrars

Network rules for CORBA clients on the EPP node

  • apache2 mod-eppd (registrar interface / EPP service) connects to:
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2224 – fred-rifd (FRED registrar interface daemon)
    • corba, tcp/2226 – fred-logd (FRED logging daemon)
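
A pattern visible in this and the following tables: every CORBA client first contacts the naming service (omninames on the APP node, tcp/2809) to obtain an object reference, and only then connects to the particular daemon on its own port. A minimal sketch of this lookup with omniORBpy; the host name “corba” matches the rules above, while the object name used here is only illustrative (the actual names are registered by the FRED daemons):

    # Resolve a FRED backend object via the CORBA naming service (omninames).
    # The naming context/object used below is illustrative, not a FRED default.
    import sys
    import CosNaming
    from omniORB import CORBA

    orb = CORBA.ORB_init(
        sys.argv + ["-ORBInitRef", "NameService=corbaname::corba:2809"],
        CORBA.ORB_ID)

    ns = orb.resolve_initial_references("NameService")
    root = ns._narrow(CosNaming.NamingContext)

    name = [CosNaming.NameComponent("fred", "context"),
            CosNaming.NameComponent("Logger", "Object")]
    logger_ref = root.resolve(name)   # object reference to the looked-up daemon
    print(orb.object_to_string(logger_ref))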

3.3. ADMIN node

Services: WebAdmin service

Packages:

  • fred-common

  • fred-idl

  • fred-pyfco

  • fred-pylogger

  • fred-webadmin

Hardware parameters (see the background):

  • CPU: @2.0 GHz, 10 cores

  • Memory: 16 GB–32 GB

  • Storage: 200 GB

Network:

  • access to HTTPS (tcp, port 443) permitted only from the private network of the Registry

Network rules for CORBA clients on the ADMIN node

  • webadmin/daphne (web-based registry administration) connects to:
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2222 – fred-adifd (FRED administration interface daemon)
    • corba, tcp/2228 – fred-msgd (FRED messaging daemon)
    • corba, tcp/2234 – fred-rsifd (FRED registry record statement daemon)
    • corba, tcp/2226 – fred-logd (FRED logging daemon)
    • corba, tcp/2225 – fred-pyfred@mailer (FRED pyfred service – mailer module)
    • corba, tcp/2232 – fred-pyfred@filemanager (FRED pyfred service – filemanager module)

3.4. WEB node

Services: Unix WHOIS, Web WHOIS, RDAP

Packages:

  • fred-idl

  • fred-pyfco

  • fred-pylogger

  • fred-rdap

  • fred-webwhois

  • libapache2-mod-corba

  • libapache2-mod-whoisd

Hardware parameters (see the background):

  • CPU: @2.0 GHz, 10 cores

  • Memory: 16 GB–32 GB

  • Storage: 200 GB

Network:

  • access to HTTPS (tcp, port 443) permitted from anywhere

  • access to WHOIS (tcp, port 43) permitted from anywhere
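
The Unix WHOIS service on port 43 follows RFC 3912: a client opens a TCP connection, sends the query terminated by CRLF and reads the answer until the server closes the connection. A minimal sketch of such a query; the server host name and the queried domain are placeholders:

    # Minimal RFC 3912 WHOIS query against the WEB node (tcp/43).
    import socket

    def whois_query(server: str, query: str, port: int = 43) -> str:
        with socket.create_connection((server, port), timeout=10) as sock:
            sock.sendall((query + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(whois_query("whois.example-registry.tld", "example.tld"))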

Network rules for CORBA clients on the WEB node

  • apache2 mod-whoisd (Unix WHOIS service, RFC 3912) connects to:
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2223 – fred-pifd (FRED public interface daemon)
    • corba, tcp/2226 – fred-logd (FRED logging daemon)

  • nginx (Web WHOIS service) connects to:
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2223 – fred-pifd (FRED public interface daemon)
    • corba, tcp/2234 – fred-rsifd (FRED registry record statement daemon)
    • corba, tcp/2226 – fred-logd (FRED logging daemon)

  • nginx (RDAP service) connects to:
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2223 – fred-pifd (FRED public interface daemon)
    • corba, tcp/2226 – fred-logd (FRED logging daemon)

3.5. HM node

Hidden master for the DNS infrastructure.

Services: zone file generation, zone signing, notifying DNS servers

Packages:

  • fred-idl

  • pyfred-genzone

  • python-pyfred

Hardware parameters (see the background):

  • CPU: @2.0 GHz, 10 cores

  • Memory: 16 GB–32 GB

  • Storage: 200 GB

Network:

  • access to IXFR (tcp, port 53) permitted only from DNS servers

Network rules for CORBA clients on the HM node

  • genzone-client (zone file generator) connects to:
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2231 – fred-pyfred@genzone (FRED pyfred service – genzone module)

3.6. APP node

Services:

  • CORBA naming service (omninames) as a virtual server “corba”,

  • backend application servers,

  • CLI administration tools,

  • pgbouncer – prepares and recycles database connections to reduce connection overhead (see the sketch below)
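
A minimal sketch of a database connection as the backend daemons make it: they connect to the local pgbouncer on tcp/5432 (see the server rules below), which pools and reuses the actual connections to the DB node. This sketch assumes the psycopg2 driver; the database name and credentials are placeholders, not FRED defaults:

    # Connect to the main database through the local pgbouncer (tcp/5432).
    import psycopg2

    conn = psycopg2.connect(
        host="localhost",   # pgbouncer on the APP node, not the DB node itself
        port=5432,
        dbname="fred",      # placeholder database name
        user="fred",        # placeholder credentials
        password="secret",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version()")   # any query; pgbouncer is transparent
        print(cur.fetchone()[0])
    conn.close()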

Packages:

  • cdnskey-scanner

  • fred-akm

  • fred-common

  • fred-doc2pdf

  • fred-idl

  • fred-logger-maintenance

  • fred-server: fred-adifd, fred-akmd, fred-logd, fred-pifd, fred-rifd, fred-rsifd

  • fred-transproc

  • python-pyfred, fred-pyfred, pyfred-filemanager

Hardware parameters (see the background):

  • CPU: @2.0 GHz, 10 cores

  • Memory: 16 GB–32 GB

  • Storage: 400 GB

    Note

    Consider that the storage will contain files managed by the FRED File Manager.

Network:

  • only internal access from the private network of the Registry

Network rules for CORBA clients on the APP node

  • fred-akm connects to:
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2233 – fred-akmd (FRED AKM interface daemon)
    • corba, tcp/2225 – fred-pyfred@mailer (FRED pyfred service – mailer module)

  • fred-admin connects to:
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2224 – fred-rifd (FRED registrar interface daemon)
    • corba, tcp/2232 – fred-pyfred@filemanager (FRED pyfred service – filemanager module)
    • corba, tcp/2225 – fred-pyfred@mailer (FRED pyfred service – mailer module)

Network rules for CORBA servers on the APP node

  • fred-logd (FRED logging daemon) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)

  • fred-rifd (FRED registrar interface daemon) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2225 – fred-pyfred@mailer (FRED pyfred service – mailer module)
    • corba, tcp/2229 – fred-pyfred@techcheck (FRED pyfred service – techcheck module)

  • fred-akmd (FRED AKM interface daemon) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2226 – fred-logd (FRED logging daemon)

  • fred-adifd (FRED administration interface daemon) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2226 – fred-logd (FRED logging daemon)
    • corba, tcp/2225 – fred-pyfred@mailer (FRED pyfred service – mailer module)

  • fred-msgd (FRED messaging daemon) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2232 – fred-pyfred@filemanager (FRED pyfred service – filemanager module)

  • fred-pifd (FRED public interface daemon) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2226 – fred-logd (FRED logging daemon)
    • corba, tcp/2225 – fred-pyfred@mailer (FRED pyfred service – mailer module)
    • corba, tcp/2232 – fred-pyfred@filemanager (FRED pyfred service – filemanager module)

  • fred-rsifd (FRED registry record statement daemon) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2225 – fred-pyfred@mailer (FRED pyfred service – mailer module)
    • corba, tcp/2232 – fred-pyfred@filemanager (FRED pyfred service – filemanager module)

  • fred-pyfred@genzone (FRED pyfred service – genzone module) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)

  • fred-pyfred@mailer (FRED pyfred service – mailer module) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2232 – fred-pyfred@filemanager (FRED pyfred service – filemanager module)

  • fred-pyfred@filemanager (FRED pyfred service – filemanager module) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)

  • fred-pyfred@techcheck (FRED pyfred service – techcheck module) connects to:
    • localhost, tcp/5432 – pgbouncer (connection pooler for PostgreSQL)
    • corba, tcp/2809 – omninames (OmniORB Interoperable Naming Service)
    • corba, tcp/2225 – fred-pyfred@mailer (FRED pyfred service – mailer module)

3.7. Database nodes

The database is separated into two nodes:

  • DB – the main database freddb – data of all domains, contacts, registrars, history etc.

  • LOGDB – the audit log (logger) database logdb – logging of all user transactions

The logger database is kept separate because of its high workload.

Packages:

  • fred-db

Hardware parameters (see the background) – DB:

  • CPU: 2x @2.0 GHz, at least 10 cores per CPU

  • Memory: 32 GB–64 GB

    Note

    Consider that, ideally, the whole database should fit into memory, which is possible only up to a certain number of objects. See also Storage considerations.

  • Storage: 400 GB

    Note

    Consider:

    • the storage size can be even smaller, depending on the size of the database, which in turn depends on the number of objects in the database and on registrars’ behaviour (growth of object history),

    • after five years of operating a registry of 1 million domains, the database size can be about 30 GB,

    • extra space is needed for garbage accumulation (before vacuuming), temporary dumps during migrations, and other database maintenance.

Hardware parameters (see the background) – LOGDB:

  • CPU: 2x @2.0 GHz, at least 10 cores per CPU

  • Memory: 32 GB–64 GB

    Note

    Consider this the bare minimum; this amount of memory may be used up quite quickly.

  • Storage

    Note

    Consider:

    • how many months of logs need to be kept in the database (the last year? the last two years?) and how many can be kept only in backups,

    • growth rate of the log records (according to the traffic estimation as described above): EPP ~ 135 GB / month, WHOIS ~ 30 GB / month.
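
A minimal sketch of the resulting storage arithmetic; the retention period is an assumption to be adjusted to the registry's policy:

    # Rough LOGDB storage estimate based on the growth rates above.
    EPP_GB_PER_MONTH = 135
    WHOIS_GB_PER_MONTH = 30
    RETENTION_MONTHS = 12     # assumption: keep the last year of logs online

    monthly_growth = EPP_GB_PER_MONTH + WHOIS_GB_PER_MONTH    # ~165 GB/month
    online_size_gb = monthly_growth * RETENTION_MONTHS        # ~1980 GB
    print(f"~{monthly_growth} GB/month, ~{online_size_gb / 1000:.1f} TB for "
          f"{RETENTION_MONTHS} months of logs (plus maintenance headroom)")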

Network:

  • accessed only by the backend server(s) from the APP node

3.8. Component deployment diagram

Diagram of FRED components deployed on multiple nodes